A Joint Model for Question Answering and Question Generation

06/05/2017
by Tong Wang, et al.

We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents. The proposed model uses a sequence-to-sequence framework that encodes the document and generates a question (answer) given an answer (question). Significant improvement in model performance is observed empirically on the SQuAD corpus, confirming our hypothesis that the model benefits from jointly learning to perform both tasks. We believe the joint model's novelty offers a new perspective on machine comprehension beyond architectural engineering, and serves as a first step towards autonomous information seeking.
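The dual setup described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical (the mode tokens, the function name, and the toy data are not from the paper): a single sequence-to-sequence model can be trained on both directions by prefixing the source sequence with a token that selects the task, so question generation and question answering share one set of parameters.

```python
# Hypothetical sketch: building joint training examples for a single
# sequence-to-sequence model that both asks and answers questions.
# A mode token ("<ask>" or "<answer>") tells the shared model which
# direction to perform; "<sep>" separates the document from the
# conditioning sequence. These conventions are illustrative only.

def make_examples(document, question, answer):
    """Return (source, target) token-list pairs for both tasks from one triple."""
    return [
        # Question generation: given document + answer, produce the question.
        (["<ask>"] + document + ["<sep>"] + answer, question),
        # Question answering: given document + question, produce the answer.
        (["<answer>"] + document + ["<sep>"] + question, answer),
    ]

doc = "the cat sat on the mat".split()
q = "where did the cat sit".split()
a = "on the mat".split()

for src, tgt in make_examples(doc, q, a):
    print(" ".join(src), "->", " ".join(tgt))
```

Training on both example types at once is what lets the model "benefit from jointly learning to perform both tasks": gradients from answering and asking flow into the same encoder and decoder.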


Related research

- Dual Ask-Answer Network for Machine Reading Comprehension (09/06/2018)
- Using Natural Language Relations between Answer Choices for Machine Comprehension (12/31/2020)
- Capturing Greater Context for Question Generation (10/22/2019)
- VisualMRC: Machine Reading Comprehension on Document Images (01/27/2021)
- An Abstractive approach to Question Answering (11/16/2017)
- Towards Solving Multimodal Comprehension (04/20/2021)
- Science Question Answering using Instructional Materials (02/13/2016)
