Bridging the Gap between Language Model and Reading Comprehension: Unsupervised MRC via Self-Supervision

07/19/2021 · by Ning Bian, et al.

Despite recent success in machine reading comprehension (MRC), learning high-quality MRC models still requires large-scale labeled training data, even when strong pre-trained language models (PLMs) are used. Because the pre-training tasks of PLMs are neither question answering nor MRC, existing PLMs cannot be used directly for unsupervised MRC: MRC requires spotting an accurate answer span in a given document, whereas PLMs are trained to fill in tokens within sentences. In this paper, we propose a new framework for unsupervised MRC. First, we learn to spot answer spans in documents via self-supervised learning by designing a self-supervision pretext task for MRC, called Spotting-MLM; solving this task requires capturing deep interactions between the sentences of a document. Second, we apply a simple sentence rewriting strategy at inference time to alleviate the expression mismatch between questions and documents. Experiments show that our method achieves new state-of-the-art performance for unsupervised MRC.
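The abstract only sketches the Spotting-MLM pretext task. As a rough illustration of the idea (not the authors' exact procedure), the sketch below builds one self-supervised spotting example from a raw document: a sentence is turned into a pseudo-question by masking a span, and the remaining document becomes the context in which that span must be located. The span-selection heuristic, the [MASK] token, and all names here are illustrative assumptions.

```python
# A minimal sketch of a Spotting-MLM style data construction step.
# The span-selection heuristic (a capitalized multi-word span that recurs in the
# document) and every function/field name below are assumptions for illustration.

import random
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpottingInstance:
    pseudo_question: str  # sentence with the target span replaced by a mask token
    context: str          # remaining document text, which still contains the span
    answer_start: int     # character offset of the span inside `context`
    answer_text: str


MASK = "[MASK]"


def build_instance(document: str, seed: int = 0) -> Optional[SpottingInstance]:
    """Turn one raw document into a single self-supervised spotting example."""
    rng = random.Random(seed)
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    if len(sentences) < 2:
        return None

    # Pick one sentence to serve as the pseudo-question; the rest is the context.
    q_idx = rng.randrange(len(sentences))
    question_sent = sentences[q_idx]
    context = " ".join(s for i, s in enumerate(sentences) if i != q_idx)

    # Heuristic: mask a capitalized multi-word span that also appears in the
    # context, so the model must locate ("spot") it rather than regenerate it.
    candidates = [m.group(0) for m in
                  re.finditer(r"[A-Z][\w-]*(?: [A-Z][\w-]*)+", question_sent)]
    candidates = [c for c in candidates if c in context]
    if not candidates:
        return None

    answer = rng.choice(candidates)
    pseudo_question = question_sent.replace(answer, MASK, 1)
    return SpottingInstance(
        pseudo_question=pseudo_question,
        context=context,
        answer_start=context.index(answer),
        answer_text=answer,
    )


if __name__ == "__main__":
    doc = ("Marie Curie was born in Warsaw. She moved to Paris to study physics. "
           "Marie Curie later won two Nobel Prizes.")
    example = None
    for s in range(10):  # not every sentence choice yields a maskable span
        example = build_instance(doc, seed=s)
        if example is not None:
            break
    print(example)
```

Under this reading, a PLM with a span-prediction head would be trained on such (pseudo-question, context, answer span) triples, and the sentence rewriting step mentioned in the abstract would be applied at inference time to real questions before spotting.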


Related research

03/20/2021 · Self-Supervised Test-Time Learning for Reading Comprehension
Recent work on unsupervised question answering has shown that models can...

05/10/2021 · REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training
Pre-trained Language Models (PLMs) have achieved great success on Machin...

09/09/2019 · Span Selection Pre-training for Question Answering
BERT (Bidirectional Encoder Representations from Transformers) and relat...

06/23/2021 · PALRACE: Reading Comprehension Dataset with Human Data and Labeled Rationales
Pre-trained language models achieve high performance on machine reading...

07/25/2018 · Repartitioning of the ComplexWebQuestions Dataset
Recently, Talmor and Berant (2018) introduced ComplexWebQuestions - a da...

12/09/2022 · From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader
We present Pre-trained Machine Reader (PMR), a novel method to retrofit ...

05/23/2019 · Multi-hop Reading Comprehension via Deep Reinforcement Learning based Document Traversal
Reading Comprehension has received significant attention in recent years...
