Cooperative Learning of Zero-Shot Machine Reading Comprehension

03/12/2021
by Hongyin Luo, et al.

Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings. However, training question answering models still requires large-scale annotated data in specific domains. In this work, we propose a cooperative, self-play learning framework, REGEX, for question generation and answering. REGEX is built upon a masked answer extraction task with an interactive learning environment containing an answer entity REcognizer, a question Generator, and an answer EXtractor. Given a passage with a masked entity, the generator produces a question about the entity, and the extractor is trained to recover the masked entity from the generated question and the raw text. The framework allows question generation and answering models to be trained on any text corpus without annotation. We further leverage a reinforcement learning technique to reward the generation of high-quality questions and to improve the answer extraction model's performance. Experimental results show that REGEX outperforms state-of-the-art (SOTA) pretrained language models and zero-shot approaches on standard question-answering benchmarks, and sets a new SOTA under the zero-shot setting.
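
To make the interaction concrete, the sketch below illustrates one cooperative self-play step as described in the abstract: an entity is masked in a passage, the Generator asks a question about it, the EXtractor tries to recover it from the raw text, and the extractor's success is fed back to the generator as a reinforcement reward. All class names, method signatures, and the entity heuristic here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of one REGEX self-play step, following the abstract above.
# All names (recognize_entities, generator.generate, extractor.extract, ...) are
# hypothetical placeholders, not the authors' released code.
from dataclasses import dataclass
from typing import List

MASK = "[MASK]"

@dataclass
class Example:
    passage: str          # raw passage text
    answer: str           # entity chosen by the answer entity REcognizer
    masked_passage: str   # passage with the entity replaced by MASK

def recognize_entities(passage: str) -> List[str]:
    """Answer entity REcognizer: propose candidate answer spans.
    Toy heuristic (capitalized tokens); a real recognizer would use an entity tagger."""
    return [tok for tok in passage.split() if tok.istitle()]

def build_examples(passage: str) -> List[Example]:
    """Create the masked answer extraction task by masking each candidate entity."""
    return [
        Example(passage, ent, passage.replace(ent, MASK, 1))
        for ent in recognize_entities(passage)
    ]

def self_play_step(generator, extractor, passage: str) -> None:
    """One cooperative step: the Generator writes a question about the masked
    entity, the EXtractor tries to recover that entity from the raw passage,
    and both models are updated. The generator/extractor interfaces are assumed."""
    for ex in build_examples(passage):
        question = generator.generate(ex.masked_passage)       # ask about the MASK
        prediction = extractor.extract(question, ex.passage)   # answer from raw text

        # Reinforcement signal: reward the generator when its question lets the
        # extractor recover the masked entity (the RL reward from the abstract).
        reward = 1.0 if prediction == ex.answer else 0.0
        generator.reinforce(question, reward)

        # Supervised update for the extractor, using the masked entity as the label.
        extractor.update(question, ex.passage, ex.answer)
```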

Related research

06/16/2022  Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
Video question answering (VideoQA) is a complex task that requires diver...

08/12/2021  How Optimal is Greedy Decoding for Extractive Question Answering?
Fine-tuned language models use greedy decoding to answer reading compreh...

08/21/2023  DocPrompt: Large-scale continue pretrain for zero-shot and few-shot document question answering
In this paper, we propose Docprompt for document question answering task...

09/06/2021  General-Purpose Question-Answering with Macaw
Despite the successes of pretrained language models, there are still few...

05/16/2022  Heroes, Villains, and Victims, and GPT-3: Automated Extraction of Character Roles Without Training Data
This paper shows how to use large-scale pre-trained language models to e...

04/07/2023  Language Models are Causal Knowledge Extractors for Zero-shot Video Question Answering
Causal Video Question Answering (CVidQA) queries not only association or...

11/05/2020  Context-Aware Answer Extraction in Question Answering
Extractive QA models have shown very promising performance in predicting...
