Generating Distractors for Reading Comprehension Questions from Real Examinations

09/08/2018
by Yifan Gao, et al.

We investigate the task of distractor generation for multiple-choice reading comprehension questions from examinations. In contrast to all previous works, we do not aim to prepare distractors that are single words or short phrases; instead, we endeavor to generate longer, semantically rich distractors that are closer to the distractors found in real examination reading comprehension. Taking as input a reading comprehension article, a question, and its correct option, our goal is to generate several distractors that are somehow related to the answer, consistent with the semantic context of the question, and have some trace in the article. We propose a hierarchical encoder-decoder framework with static and dynamic attention mechanisms to tackle this task. Specifically, the dynamic attention combines sentence-level and word-level attention, varying at each recurrent time step, to generate a more readable sequence. The static attention modulates the dynamic attention so that it does not focus on sentences that are irrelevant to the question or that support the correct option. Our proposed framework outperforms several strong baselines on the first distractor generation dataset built from real reading comprehension questions. In human evaluation, our generated distractors are more effective at confusing the annotators than those generated by the baselines.
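To make the static/dynamic attention idea concrete, below is a minimal, illustrative PyTorch sketch of how the two attention levels might be combined at a single decoder step. The tensor shapes, the weight matrices W_word and W_sent, and the exact combination and normalization rules are assumptions made for illustration only; they do not reproduce the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def combined_attention(word_enc, sent_enc, dec_state, static_scores, W_word, W_sent):
    # Hypothetical shapes (not from the paper):
    # word_enc:      (num_sents, sent_len, hidden)  word-level encoder states
    # sent_enc:      (num_sents, hidden)            sentence-level encoder states
    # dec_state:     (hidden,)                      decoder hidden state at this step
    # static_scores: (num_sents,)                   static attention over sentences,
    #                                               fixed for the whole decoding process
    # W_word, W_sent: (hidden, hidden)              bilinear attention parameters

    # Dynamic sentence-level attention, recomputed at every time step
    beta = F.softmax(sent_enc @ (W_sent @ dec_state), dim=0)          # (num_sents,)

    # Dynamic word-level attention within each sentence
    word_logits = torch.einsum('swh,h->sw', word_enc, W_word @ dec_state)
    alpha = F.softmax(word_logits, dim=1)                             # (num_sents, sent_len)

    # Static attention modulates the dynamic sentence attention ...
    gamma = beta * static_scores
    gamma = gamma / (gamma.sum() + 1e-12)

    # ... and the modulated sentence weights re-weight the word-level attention
    attn = alpha * gamma.unsqueeze(1)                                 # (num_sents, sent_len)

    # Context vector used to generate the next distractor token
    context = (attn.unsqueeze(-1) * word_enc).sum(dim=(0, 1))         # (hidden,)
    return context, attn
```

In this sketch, static_scores would be computed once per article from the question and its correct option (down-weighting question-irrelevant sentences and sentences supporting the answer), while beta and alpha are recomputed at every decoding step.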

Related research

03/03/2020
GenNet: Reading Comprehension with Multiple Choice Questions using Generation and Selection model
Multiple-choice machine reading comprehension is a difficult task as its r...

11/20/2019
Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
In reading comprehension, generating sentence-level distractors is a sig...

04/29/2017
Learning to Ask: Neural Question Generation for Reading Comprehension
We study automatic question generation for sentences from text passages ...

08/19/2016
Who did What: A Large-Scale Person-Centered Cloze Dataset
We have constructed a new "Who-did-What" dataset of over 200,000 fill-in...

04/15/2019
Improving Human Text Comprehension through Semi-Markov CRF-based Neural Section Title Generation
Titles of short sections within long documents support readers by guidin...

03/18/2021
Quinductor: a multilingual data-driven method for generating reading-comprehension questions using Universal Dependencies
We propose a multilingual data-driven method for generating reading comp...

07/13/2021
Deep Neural Networks Evolve Human-like Attention Distribution during Reading Comprehension
Attention is a key mechanism for information selection in both biologica...
