Globally Normalized Reader

09/08/2017
by Jonathan Raiman, et al.

Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bi-directional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer's sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, the Globally Normalized Reader (GNR), which achieves the second-highest single-model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method that produces semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.
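To illustrate the iterative search formulation, here is a minimal sketch (not the paper's implementation) of the three-stage decision process: pick a sentence, then a start word, then an end word, keeping a beam of the highest-scoring partial paths at each step and summing unnormalized log-scores along each path. The function name `beam_search_answer` and the `score_*` callables are hypothetical toy stand-ins for the model's learned scoring networks.

```python
def beam_search_answer(doc, score_sentence, score_start, score_end, beam_size=3):
    """doc: list of tokenized sentences (list of list of str).
    score_* return unnormalized log-scores for each decision;
    a path's total score is the sum over its three decisions."""
    # Stage 1: score every sentence, keep the top-k partial paths.
    beam = sorted(
        ((score_sentence(s), s) for s in range(len(doc))),
        reverse=True)[:beam_size]
    # Stage 2: extend each surviving path with a start word.
    beam = sorted(
        ((sc + score_start(s, i), s, i)
         for sc, s in beam
         for i in range(len(doc[s]))),
        reverse=True)[:beam_size]
    # Stage 3: extend with an end word (end >= start).
    beam = sorted(
        ((sc + score_end(s, i, j), s, i, j)
         for sc, s, i in beam
         for j in range(i, len(doc[s]))),
        reverse=True)[:beam_size]
    return beam  # (total log-score, sentence, start, end), best first

# Toy usage with indicator-style scores favouring the span "green".
doc = [["the", "sky", "is", "blue"], ["grass", "is", "green"]]
paths = beam_search_answer(
    doc,
    score_sentence=lambda s: 1.0 if s == 1 else 0.0,
    score_start=lambda s, i: 1.0 if (s, i) == (1, 2) else 0.0,
    score_end=lambda s, i, j: 1.0 if (s, i, j) == (1, 2, 2) else 0.0,
)
best = paths[0]
print(doc[best[1]][best[2]:best[3] + 1])  # -> ['green']
```

Because each stage only ranks candidates within the surviving beam, computation is spent on promising paths rather than on all O(sentences x words^2) spans, which is the scalability point the abstract makes.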

