Machine Comprehension by Text-to-Text Neural Question Generation

05/04/2017
by Xingdi Yuan et al.

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model with a combination of supervised and reinforcement learning: the model is first trained with teacher forcing under the standard maximum-likelihood objective, then fine-tuned with policy-gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question-answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD.
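The two-stage recipe above (maximum-likelihood pretraining, then policy-gradient fine-tuning against a reward) can be sketched in miniature. The toy below is an illustrative assumption, not the paper's code: a "decoder" reduced to one softmax per output position samples a question, and the reward imitates a downstream QA system by scoring how many target tokens the question covers. The REINFORCE update with a mean-reward baseline is the core idea the abstract describes.

```python
import math
import random

random.seed(0)

# Hypothetical toy setup: vocabulary, target tokens, and reward are all
# illustrative stand-ins for a real decoder and a real QA-based reward.
VOCAB = ["what", "is", "capital", "france", "who", "cat"]
TARGET = {"what", "is", "capital", "france"}   # tokens the proxy "QA reward" wants
SEQ_LEN = 4

# One logit vector per position (a drastic simplification of a recurrent decoder).
logits = [[0.0] * len(VOCAB) for _ in range(SEQ_LEN)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def sample_question():
    """Sample a token sequence and return it with its grad-log-prob terms."""
    toks, grads = [], []
    for pos in range(SEQ_LEN):
        probs = softmax(logits[pos])
        r, cum, idx = random.random(), 0.0, len(probs) - 1
        for i, p in enumerate(probs):
            cum += p
            if r <= cum:
                idx = i
                break
        toks.append(idx)
        # d/d(logits) of log p(idx) under a softmax: one_hot(idx) - probs
        grads.append([(1.0 if i == idx else 0.0) - p for i, p in enumerate(probs)])
    return toks, grads

def reward(toks):
    """Proxy for QA-system performance: coverage of the target tokens."""
    return len({VOCAB[t] for t in toks} & TARGET) / len(TARGET)

def reinforce_step(lr=0.5, batch=16):
    """One policy-gradient (REINFORCE) update with a mean-reward baseline."""
    samples = [sample_question() for _ in range(batch)]
    rewards = [reward(t) for t, _ in samples]
    baseline = sum(rewards) / batch          # baseline reduces gradient variance
    for (toks, grads), r in zip(samples, rewards):
        adv = r - baseline
        for pos in range(SEQ_LEN):
            for i in range(len(VOCAB)):
                logits[pos][i] += lr * adv * grads[pos][i]

avg_before = sum(reward(sample_question()[0]) for _ in range(200)) / 200
for _ in range(300):
    reinforce_step()
avg_after = sum(reward(sample_question()[0]) for _ in range(200)) / 200
# After fine-tuning, sampled questions should earn a higher average reward.
```

In the paper the pretraining stage (teacher forcing) comes first; here the uniform initial logits play that role, and the loop shows only the fine-tuning stage, where the policy shifts probability mass toward sequences the reward prefers.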


Related research

05/22/2017 · Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
We frame Question Answering as a Reinforcement Learning task, an approac...

12/18/2020 · Exploring Fluent Query Reformulations with Text-to-Text Transformers and Reinforcement Learning
Query reformulation aims to alter potentially noisy or ambiguous text se...

10/23/2019 · BanditRank: Learning to Rank Using Contextual Bandits
We propose an extensible deep learning method that uses reinforcement le...

10/05/2022 · Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Explainable question answering systems should produce not only accurate ...

10/31/2017 · DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
Traditional models for question answering optimize using cross entropy l...

04/23/2017 · Learning to Skim Text
Recurrent Neural Networks are showing much promise in many sub-areas of ...
