Evidence Sentence Extraction for Machine Reading Comprehension

02/23/2019
by   Hai Wang, et al.

Recently, remarkable success has been achieved in machine reading comprehension (MRC). However, the predictions of existing MRC models remain difficult to interpret. In this paper, we focus on (i) extracting evidence sentences that can explain or support answer predictions for multiple-choice MRC tasks, where most answer options cannot be extracted directly from reference documents, and (ii) studying the impact of using the extracted sentences as the input to MRC models. Because ground-truth evidence sentence labels are unavailable in most cases, we apply distant supervision to generate imperfect labels and use them to train a neural evidence extractor. To denoise these labels, we treat them as latent variables and define priors over them by incorporating rich linguistic knowledge under a recently proposed deep probabilistic logic learning framework. We feed the extracted evidence sentences into existing MRC models and evaluate end-to-end performance on three challenging multiple-choice MRC datasets: MultiRC, DREAM, and RACE, achieving performance comparable to or better than that of the same models taking the full context as input. Our evidence extractor also outperforms a state-of-the-art sentence selector by a large margin on two open-domain question answering datasets: Quasar-T and SearchQA. To the best of our knowledge, this is the first work addressing evidence sentence extraction for multiple-choice MRC.
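The abstract does not spell out the distant-supervision rule used to produce the imperfect evidence labels. A common stand-in, sketched below under that assumption, is to score each passage sentence by its lexical overlap with the question plus the correct answer option and label the top-scoring sentence(s) as evidence; the function name and overlap heuristic are illustrative, not the paper's actual procedure.

```python
import re


def distant_evidence_labels(sentences, question, answer, top_k=1):
    """Heuristic distant supervision: label as evidence the top_k sentences
    with the highest lexical overlap with the question and answer option.
    (Illustrative stand-in; the paper's exact labeling rule is unspecified.)
    """
    def tokens(text):
        # Lowercased word tokens, punctuation stripped.
        return set(re.findall(r"\w+", text.lower()))

    query = tokens(question) | tokens(answer)
    scores = [len(tokens(s) & query) for s in sentences]
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    positives = set(ranked[:top_k])
    # Binary (noisy) evidence labels, one per sentence.
    return [1 if i in positives else 0 for i in range(len(sentences))]
```

Labels produced this way are noisy by construction, which is why the paper treats them as latent variables and denoises them with linguistically informed priors rather than trusting them directly.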


Related research:

- A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction (05/11/2020): Neural models have achieved great success on machine reading comprehensi...
- TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension (05/09/2017): We present TriviaQA, a challenging reading comprehension dataset contain...
- U3E: Unsupervised and Erasure-based Evidence Extraction for Machine Reading Comprehension (10/06/2022): More tasks in Machine Reading Comprehension (MRC) require, in addition to...
- Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction (05/21/2019): Question answering (QA) using textual sources such as reading comprehens...
- Hierarchical Question Answering for Long Documents (11/06/2016): We present a framework for question answering that can efficiently scale...
- Multi-hop Reading Comprehension via Deep Reinforcement Learning based Document Traversal (05/23/2019): Reading Comprehension has received significant attention in recent years...
- Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources (10/30/2016): In order to extract event information from text, a machine reading model...
