Improving Machine Reading Comprehension with General Reading Strategies

10/31/2018
by Kai Sun et al.

Reading strategies have been shown to improve comprehension levels, especially for readers lacking adequate prior knowledge. Just as the process of knowledge accumulation is time-consuming for human readers, it is resource-demanding to impart rich general-domain knowledge into a language model via pre-training (Radford et al., 2018; Devlin et al., 2018). Inspired by reading strategies identified in cognitive science, and given limited computational resources (just a pre-trained model and a fixed number of training instances), we propose three simple domain-independent strategies aimed at improving non-extractive machine reading comprehension (MRC): (i) BACK AND FORTH READING, which considers both the original and the reverse order of an input sequence; (ii) HIGHLIGHTING, which adds a trainable embedding to the text embedding of tokens that are relevant to the question and candidate answers; and (iii) SELF-ASSESSMENT, which generates practice questions and candidate answers directly from the text in an unsupervised manner. By fine-tuning a pre-trained language model (Radford et al., 2018) with our proposed strategies on RACE, the largest existing general-domain multiple-choice MRC dataset, we obtain a 5.8% absolute improvement in accuracy over the previous best result achieved by the same pre-trained model fine-tuned on RACE without the strategies. We further fine-tune the resulting model on a target task, leading to new state-of-the-art results on six representative non-extractive MRC datasets from different domains (i.e., ARC, OpenBookQA, MCTest, MultiRC, SemEval-2018, and ROCStories). These results indicate the effectiveness of the proposed strategies and the versatility and general applicability of our fine-tuned models that incorporate them.
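To make the HIGHLIGHTING strategy concrete, the sketch below adds a trainable "highlight" embedding to the input embeddings of passage tokens that also occur in the question or candidate answers. This is a minimal PyTorch sketch under our own assumptions: the class name HighlightEmbedding, the two-row embedding table, and the simple token-overlap heuristic for relevance are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class HighlightEmbedding(nn.Module):
    """Sketch of the HIGHLIGHTING strategy: add a trainable embedding
    to tokens deemed relevant to the question/candidate answers.
    (Illustrative only; the paper's exact formulation may differ.)"""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Row 0: token is not relevant; row 1: token also appears
        # in the question or one of the candidate answers.
        self.highlight = nn.Embedding(2, hidden_size)

    def forward(self, token_embeddings: torch.Tensor,
                passage_ids: torch.Tensor,
                query_ids: torch.Tensor) -> torch.Tensor:
        # Mark passage tokens whose ids occur among the question/answer
        # token ids (a crude content-overlap heuristic for relevance).
        relevant = torch.isin(passage_ids, query_ids).long()
        # Add the highlight embedding to the usual text embedding.
        return token_embeddings + self.highlight(relevant)

# Usage with made-up token ids and a 768-dimensional encoder:
emb = HighlightEmbedding(hidden_size=768)
tokens = torch.randn(1, 6, 768)               # [batch, seq, hidden]
passage = torch.tensor([[5, 9, 2, 7, 9, 3]])  # passage token ids
query = torch.tensor([9, 3])                  # question + option ids
out = emb(tokens, passage, query)             # same shape as tokens
```

Because the highlight table is trained jointly with the rest of the model, the encoder can learn how strongly question/answer overlap should influence each token's representation.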


