
Common-Sense-paper-List

A summary of must-read papers about Common Sense

1. Survey
2. Models
   2.1 Basic Seq2Seq Models
   2.2 Encoding Answers
   2.3 Linguistic Features
   2.4 Question-specific Rewards
   2.5 Content Selection
   2.6 Question Type Modeling
   2.7 Encoding Wider Contexts
   2.8 Other Directions
3. Applications
   3.1 Difficulty Controllable QG
   3.2 Conversational QG
   3.3 Asking Special Questions
   3.4 Answer-unaware QG
   3.5 Unanswerable QG
   3.6 Combining QA and QG
   3.7 QG from Knowledge Graphs
   3.8 Visual Question Generation
   3.9 Distractor Generation
4. Evaluation
5. Resources
  1. Recent Advances in Neural Question Generation. arXiv, 2018. paper

    Liangming Pan, Wenqiang Lei, Tat-Seng Chua, Min-Yen Kan.

Basic Seq2Seq models with attention to generate questions.

  1. Learning to ask: Neural question generation for reading comprehension. ACL, 2017. paper

    Xinya Du, Junru Shao, Claire Cardie.
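The shared core of these models can be sketched in plain Python: at each decoding step, the decoder state scores every encoder state and reads off a weighted context vector. This is a toy dot-product attention with hand-picked vectors and no learned parameters, purely for illustration.

```python
import math

def softmax(scores):
    # Normalize raw scores into attention weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, encoder_states):
    # Dot-product attention: score each encoder state against the
    # current decoder state, then build the weighted context vector.
    scores = [sum(d * h_i for d, h_i in zip(decoder_state, h))
              for h in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# Toy example: the second encoder state aligns best with the query.
weights, context = attend([0.0, 1.0],
                          [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

The decoder then conditions its next word prediction on this context vector, so question words tend to be generated while "looking at" the relevant passage tokens.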

Applying various techniques to encode the answer information thus allowing for better quality answer-focused questions.

  1. Answer-focused and Position-aware Neural Question Generation. EMNLP, 2018. paper

    Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, Shi Wang

    Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu

Improve QG by incorporating various linguistic features into the QG process.

  1. Neural Generation of Diverse Questions using Answer Focus, Contextual and Linguistic Features. INLG, 2018. paper

    Vrindavan Harrison, Marilyn Walker

Improving training by combining supervised and reinforcement learning to maximize question-specific rewards.

  1. Teaching Machines to Ask Questions. IJCAI, 2018. paper

    Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, Yanjun Wu
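The combination can be sketched as a weighted sum of the maximum-likelihood loss and a REINFORCE-style term. The reward, baseline, and 0.5 mixing weight below are placeholders for illustration, not the papers' exact choices.

```python
import math

def mixed_loss(log_probs, reward, baseline, alpha=0.5):
    # log_probs: per-token log-probabilities of a sampled question.
    # reward: question-specific score (e.g. BLEU or a QA-based reward).
    # baseline: running-average reward, subtracted to reduce variance.
    seq_log_prob = sum(log_probs)
    ml_loss = -seq_log_prob                         # supervised (MLE) term
    rl_loss = -(reward - baseline) * seq_log_prob   # REINFORCE-style term
    return alpha * ml_loss + (1 - alpha) * rl_loss

# A sampled question of three tokens, each with probability 0.5.
loss = mixed_loss([math.log(0.5)] * 3, reward=0.8, baseline=0.5)
```

Minimizing the policy-gradient term raises the probability of sampled questions whose reward beats the baseline, while the supervised term keeps generation anchored to the reference questions.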

Improve QG by considering how to select question-worthy content (content selection) before asking a question.

  1. Identifying Where to Focus in Reading Comprehension for Neural Question Generation. EMNLP, 2017. paper

    Xinya Du, Claire Cardie

Improve QG by explicitly modeling question types or interrogative words.

  1. Question Generation for Question Answering. EMNLP, 2017. paper

    Nan Duan, Duyu Tang, Peng Chen, Ming Zhou
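At its simplest, question-type modeling amounts to choosing the interrogative word from the expected answer type. The papers learn this decision; the lookup table below is a hypothetical rule-based stand-in.

```python
def interrogative_for(answer_type):
    # Toy mapping from answer entity type to interrogative word;
    # the type names and table entries are illustrative only.
    table = {"PERSON": "who", "LOCATION": "where", "DATE": "when",
             "NUMBER": "how many", "REASON": "why"}
    return table.get(answer_type, "what")

word = interrogative_for("PERSON")
```

Explicitly conditioning generation on such a type decision helps avoid the common failure mode of producing a fluent question with the wrong interrogative word.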

Improve QG by incorporating wider contexts in the input passage.

  1. Harvesting paragraph-level question-answer pairs from wikipedia. ACL, 2018. paper code&dataset

    Xinya Du, Claire Cardie

  1. Cross-Lingual Training for Automatic Question Generation. ACL, 2019. paper dataset

    Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, Preethi Jyothi

Endowing the model with the ability to control the difficulty of the generated questions.

  1. Easy-to-Hard: Leveraging Simple Questions for Complex Question Generation. arXiv, 2019. paper

    Jie Zhao, Xiang Deng, Huan Sun.

Learning to generate a series of coherent questions grounded in a question answering style conversation.

  1. Learning to Ask Questions in Open-domain Conversational Systems with Typed Decoders. ACL, 2018. paper code dataset

    Yansen Wang, Chenyi Liu, Minlie Huang, Liqiang Nie

This direction focuses on exploring how to ask special types of questions, such as mathematical questions, open-ended questions, non-factoid questions, and clarification questions.

  1. Are You Asking the Right Questions? Teaching Machines to Ask Clarification Questions. ACL Workshop, 2017. paper

    Sudha Rao

In answer-unaware QG, the model is not given a target answer to serve as the focus of the question. It must therefore automatically identify question-worthy parts of the passage to ask about.

  1. Learning to ask: Neural question generation for reading comprehension. ACL, 2017. paper

    Xinya Du, Junru Shao, Claire Cardie.
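A crude sketch of that selection step, assuming entity-dense sentences are more question-worthy. Real systems learn this selector; the capitalization-and-digit heuristic here is purely illustrative.

```python
def most_question_worthy(sentences):
    # Stand-in for a learned sentence selector: score each sentence
    # by its count of capitalized tokens and numbers (a rough proxy
    # for entity density), ignoring sentence-initial capitalization.
    def score(sentence):
        tokens = sentence.split()
        return sum(1 for t in tokens[1:] if t[0].isupper() or t[0].isdigit())
    return max(sentences, key=score)

passage = ["It was a quiet afternoon.",
           "Marie Curie won the Nobel Prize in 1903."]
best = most_question_worthy(passage)
```

The selected sentence (or span) then plays the role that the given answer plays in answer-aware QG.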

Learning to generate questions that cannot be answered by the input passage.

  1. Learning to Ask Unanswerable Questions for Machine Reading Comprehension. ACL, 2019. paper

    Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu

This direction investigates how to combine the tasks of QA and QG via multi-task learning or joint training.

  1. Question Generation for Question Answering. EMNLP, 2017. paper

    Nan Duan, Duyu Tang, Peng Chen, Ming Zhou

This direction is about generating questions from a knowledge graph.

  1. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. ACL, 2016. paper dataset

    Iulian Vlad Serban, Alberto García-Durán, Çaglar Gülçehre, Sungjin Ahn, Sarath Chandar, Aaron C. Courville, Yoshua Bengio
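The template-based baseline these works improve on can be sketched as follows: each (subject, relation, object) triple yields a question whose gold answer is the object. The relation names and templates below are invented for illustration, not taken from any particular KG.

```python
TEMPLATES = {
    # Hypothetical relation vocabulary; real KGs define their own.
    "place_of_birth": "Where was {subj} born?",
    "author": "Who wrote {subj}?",
    "capital": "What is the capital of {subj}?",
}

def triple_to_question(subj, relation, obj):
    # The object of the triple becomes the gold answer.
    template = TEMPLATES.get(
        relation,
        "What is the " + relation.replace("_", " ") + " of {subj}?")
    return template.format(subj=subj), obj

question, answer = triple_to_question("Sigmund Freud",
                                      "place_of_birth", "Freiberg")
```

Neural approaches replace the hand-written templates with a learned decoder conditioned on an encoding of the triple, gaining fluency and lexical variety.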

This direction studies generating questions from visual inputs (usually an image).

  1. Generating Natural Questions About an Image. ACL, 2016. paper

    Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, Lucy Vanderwende

This direction investigates the mechanism behind question asking, and how to evaluate the quality of generated questions.

  1. Question Asking as Program Generation. NeurIPS, 2017. paper

    Anselm Rothe, Brenden M. Lake, Todd M. Gureckis.
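Automatic evaluation of generated questions commonly falls back on n-gram overlap with a reference question. A minimal sketch of clipped n-gram precision, the building block of BLEU, one widely used (if imperfect) proxy for question quality:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    # Clipped n-gram precision: what fraction of the candidate's
    # n-grams also appear in the reference (counts are clipped so a
    # repeated n-gram cannot be credited more times than it occurs).
    def ngrams(text):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

p1 = ngram_precision("who wrote the book", "who wrote this book")       # unigrams
p2 = ngram_precision("who wrote the book", "who wrote this book", n=2)  # bigrams
```

Such overlap metrics reward surface similarity rather than answerability or usefulness, which is exactly why this direction also studies the mechanism of question asking itself.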

QG-specific datasets and toolkits.

  1. LearningQ: A Large-Scale Dataset for Educational Question Generation. ICWSM, 2018. paper

    Guanliang Chen, Jie Yang, Claudia Hauff, Geert-Jan Houben.
