SRQA: Synthetic Reader for Factoid Question Answering

09/02/2020
by Jiuniu Wang, et al.

Question answering systems built on deep neural networks can answer questions across many fields and forms, but they still lack effective ways to handle multiple evidences. We introduce SRQA, a Synthetic Reader for Factoid Question Answering. The model enhances question answering in the multi-document scenario from three aspects: model structure, optimization goal, and training method, corresponding to Multilayer Attention (MA), Cross Evidence (CE), and Adversarial Training (AT), respectively. First, we propose a multilayer attention network to obtain a better representation of the evidences. The multilayer attention mechanism conducts interaction between the question and the passage within each layer, so that the token representations of the evidences in each layer take the requirements of the question into account. Second, we design a cross-evidence strategy to choose the answer span from multiple evidences. We improve the optimization goal by treating all of the answer's locations in the multiple evidences as training targets, which leads the model to reason across evidences. Third, adversarial training is applied to high-level variables in our model, in addition to the word embeddings. We also propose a new normalization method for the adversarial perturbations, so that perturbations can be added jointly to several target variables. As an effective regularization method, adversarial training enhances the model's ability to process noisy data. Combining these three strategies, we strengthen the contextual representation and answer-locating ability of the model, which can synthetically extract the answer span from several evidences. We evaluate SRQA on the WebQA dataset, and experiments show that our model outperforms the state-of-the-art models (the best fuzzy score of our model reaches 78.56).
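The "interaction between the question and the passage within each layer" can be sketched as stacked attention layers in which every passage token re-attends over the question at each depth. This is a minimal illustration of the idea, not the paper's actual architecture; the function names, the dot-product scoring, and the additive fusion are all assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def question_aware_layer(passage, question):
    # Each passage token attends over the question tokens; the attended
    # question context is fused back into the passage representation.
    scores = passage @ question.T      # (P, Q) token-pair similarities
    attn = softmax(scores, axis=-1)    # attention weights over question tokens
    q_context = attn @ question        # (P, d) question summary per passage token
    return passage + q_context         # simple additive fusion (illustrative)

def multilayer_attention(passage, question, n_layers=3):
    # Repeating the interaction in every layer keeps each layer's token
    # representations conditioned on the question, as the abstract describes.
    h = passage
    for _ in range(n_layers):
        h = question_aware_layer(h, question)
    return h
```

The key point is that the question enters at every layer rather than once at the bottom, so deeper evidence representations cannot drift away from what the question requires.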
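The cross-evidence objective, "considering all the answers' locations in multiple evidences as training targets", amounts to maximizing the total probability mass over every occurrence of the gold answer rather than a single gold span. A minimal sketch of such a marginal negative log-likelihood, with an illustrative dict-based interface that is not the paper's implementation:

```python
import math

def cross_evidence_nll(span_log_probs, gold_span_ids):
    """Negative log-likelihood marginalized over all gold answer occurrences.

    span_log_probs: dict mapping candidate span id -> model log-probability
    gold_span_ids:  ids of every span, in any evidence, matching the answer
    (both names are hypothetical, chosen for this sketch)
    """
    # Total probability mass the model assigns to any correct occurrence.
    total = sum(math.exp(span_log_probs[s]) for s in gold_span_ids)
    return -math.log(total)
```

Because the loss rewards probability placed on any correct occurrence, gradient descent is free to distribute confidence across evidences instead of being forced onto one arbitrary gold span.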
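The proposed normalization for "jointly add[ing] perturbations to several target variables" can be illustrated by scaling the gradients of all perturbed variables by their joint L2 norm, so the combined perturbation has a fixed magnitude however many variables are attacked. This is a hedged sketch of the idea only; the function name and the epsilon-scaled joint-norm rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def joint_perturbations(grads, epsilon=1.0):
    # Normalize by the joint L2 norm over *all* target variables, so the
    # concatenated adversarial perturbation has magnitude epsilon regardless
    # of how many variables (embeddings, high-level states) are perturbed.
    joint_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads)) + 1e-12
    return [epsilon * g / joint_norm for g in grads]
```

Normalizing per variable instead would let the total perturbation grow with the number of targets; the joint norm keeps the overall attack budget constant.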


Related research

09/03/2018  A3Net: Adversarial-and-Attention Network for Machine Reading Comprehension
06/21/2017  Cross-language Learning with Adversarial Neural Networks: Application to Community Question Answering
09/17/2019  Inverse Visual Question Answering with Multi-Level Attentions
10/18/2018  Adversarial TableQA: Attention Supervision for Question Answering on Tables
04/30/2020  RikiNet: Reading Wikipedia Pages for Natural Question Answering
02/05/2021  Model Agnostic Answer Reranking System for Adversarial Question Answering
12/05/2018  Are you tough enough? Framework for Robustness Validation of Machine Comprehension Systems
