Machine Comprehension by Text-to-Text Neural Question Generation

by Xingdi Yuan et al.

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning: after pretraining with teacher forcing under the standard maximum-likelihood objective, we fine-tune the model with policy-gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means of improving question-answering systems, and we train and evaluate our model on the recent question-answering dataset SQuAD.
