Variational Question-Answer Pair Generation for Machine Reading Comprehension

by Kazutoshi Shinoda et al.

We present a deep generative model of question-answer (QA) pairs for machine reading comprehension. We introduce two independent latent random variables into the model to diversify answers and questions separately. We also study the effect of explicitly controlling the KL term in the variational lower bound to avoid "posterior collapse," where the model ignores the latent variables and generates nearly identical QA pairs. Experiments on SQuAD v1.1 show that variational learning improves the model's capacity for QA pair generation, and that the controlled KL term significantly improves diversity while producing questions and answers of quality comparable to existing systems.
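One common way to control the KL term, sketched below, is to replace the raw KL penalty in the lower bound with a penalty on its distance from a fixed target value, so the KL cannot collapse to zero. This is a minimal illustration under assumed details (a diagonal Gaussian posterior, a standard normal prior, and a hypothetical `target_kl` hyperparameter); the paper's exact control mechanism may differ.

```python
import math

def gaussian_kl(mu, logvar):
    # KL divergence between the approximate posterior N(mu, diag(exp(logvar)))
    # and the standard normal prior N(0, I), summed over latent dimensions.
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def controlled_elbo(rec_log_lik, mu, logvar, target_kl):
    # Variational lower bound with an explicitly controlled KL term:
    # instead of subtracting the raw KL, penalize its distance to a
    # target value, which keeps the KL away from zero so the decoder
    # cannot simply ignore the latent variable (posterior collapse).
    kl = gaussian_kl(mu, logvar)
    return rec_log_lik - abs(kl - target_kl), kl
```

With two independent latent variables, one for answers and one for questions as in the model above, the same controlled penalty would be applied to each variable's KL term separately.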
