Variational Question-Answer Pair Generation for Machine Reading Comprehension

04/07/2020
by   Kazutoshi Shinoda, et al.

We present a deep generative model of question-answer (QA) pairs for machine reading comprehension. We introduce two independent latent random variables into our model in order to diversify answers and questions separately. We also study the effect of explicitly controlling the KL term in the variational lower bound in order to avoid the "posterior collapse" issue, where the model ignores the latent variables and generates QA pairs that are almost identical. Our experiments on SQuAD v1.1 showed that variational methods can improve QA-pair modeling capacity, and that controlling the KL term can significantly improve diversity while generating questions and answers of quality comparable to existing systems.
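The KL-control idea in the abstract can be sketched as a loss function: rather than minimizing the KL divergence between the approximate posterior and the prior, the KL term is penalized for deviating from a nonzero target, so the model cannot collapse it to zero and ignore the latent variable. This is a minimal illustrative sketch, assuming a diagonal-Gaussian posterior, a standard-normal prior, and a target-KL penalty of the form `beta * |KL - target_kl|`; the names `target_kl` and `beta` are illustrative hyperparameters, not values from the paper.

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, logvar))

def controlled_kl_loss(recon_log_prob, mu, logvar, target_kl=5.0, beta=1.0):
    """Negative-ELBO variant with an explicitly controlled KL term.

    The KL is pushed toward target_kl instead of being minimized,
    which keeps the latent variable informative and helps avoid
    posterior collapse. target_kl and beta are assumed hyperparameters.
    """
    kl = gaussian_kl(mu, logvar)
    return -recon_log_prob + beta * abs(kl - target_kl)
```

With `mu = 0` and `logvar = 0` the KL is zero, so the penalty equals `beta * target_kl`: a collapsed posterior is no longer the minimizer of the loss.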


02/24/2021 · OneStop QAMaker: Extract Question-Answer Pairs from Text in a One-Stop Approach
Large-scale question-answer (QA) pairs are critical for advancing resear...

11/27/2019 · Label Dependent Deep Variational Paraphrase Generation
Generating paraphrases that are lexically similar but semantically diffe...

12/31/2020 · Using Natural Language Relations between Answer Choices for Machine Comprehension
When evaluating an answer choice for Reading Comprehension task, other a...

06/16/2022 · GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions
Recent machine reading comprehension datasets include extractive and boo...

04/18/2021 · Learning with Instance Bundles for Reading Comprehension
When training most modern reading comprehension models, all the question...

06/06/2019 · Generating Question-Answer Hierarchies
The process of knowledge acquisition can be viewed as a question-answer ...

10/20/2020 · Bi-directional Cognitive Thinking Network for Machine Reading Comprehension
We propose a novel Bi-directional Cognitive Knowledge Framework (BCKF) f...