Question Answering Infused Pre-training of General-Purpose Contextualized Representations

06/15/2021
by Robin Jia, et al.

This paper proposes a pre-training objective based on question answering (QA) for learning general-purpose contextual representations, motivated by the intuition that the representation of a phrase in a passage should encode all questions that the phrase can answer in context. We accomplish this goal by training a bi-encoder QA model, which independently encodes passages and questions, to match the predictions of a more accurate cross-encoder model on 80 million synthesized QA pairs. By encoding QA-relevant information, the bi-encoder's token-level representations are useful for non-QA downstream tasks without extensive (or in some cases, any) fine-tuning. We show large improvements over both RoBERTa-large and previous state-of-the-art results on zero-shot and few-shot paraphrase detection on four datasets, few-shot named entity recognition on two datasets, and zero-shot sentiment analysis on three datasets.
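To make the described setup concrete, below is a minimal sketch (PyTorch with Hugging Face Transformers) of the bi-encoder-to-cross-encoder distillation idea the abstract outlines. The encoder choice, projection heads, and KL-based loss are illustrative assumptions, not details taken from the paper or its released code.

# Minimal sketch (assumed, not the authors' code) of QA-infused pre-training:
# a bi-encoder that encodes passages and questions independently is trained to
# match the span predictions of a stronger cross-encoder teacher on synthetic
# QA pairs. Model name, heads, and loss are illustrative choices.
import torch
import torch.nn.functional as F
from transformers import AutoModel

class BiEncoderQA(torch.nn.Module):
    def __init__(self, model_name="roberta-base"):
        super().__init__()
        # Independent encoders: no cross-attention between passage and question.
        self.passage_encoder = AutoModel.from_pretrained(model_name)
        self.question_encoder = AutoModel.from_pretrained(model_name)
        hidden = self.passage_encoder.config.hidden_size
        # Separate projections of the question vector for start/end span scoring.
        self.start_proj = torch.nn.Linear(hidden, hidden)
        self.end_proj = torch.nn.Linear(hidden, hidden)

    def forward(self, passage_inputs, question_inputs):
        # Token-level passage representations: (batch, passage_len, hidden).
        p = self.passage_encoder(**passage_inputs).last_hidden_state
        # Single question vector from the first ([CLS]-style) token: (batch, hidden).
        q = self.question_encoder(**question_inputs).last_hidden_state[:, 0]
        # Answer-span scores are dot products between the projected question
        # vector and every passage token representation.
        start_logits = torch.einsum("blh,bh->bl", p, self.start_proj(q))
        end_logits = torch.einsum("blh,bh->bl", p, self.end_proj(q))
        return start_logits, end_logits

def distillation_loss(student_logits, teacher_logits):
    # KL divergence between the bi-encoder student's span distributions and a
    # frozen cross-encoder teacher's, summed over the start and end positions.
    loss = 0.0
    for s, t in zip(student_logits, teacher_logits):
        loss = loss + F.kl_div(F.log_softmax(s, dim=-1),
                               F.softmax(t, dim=-1),
                               reduction="batchmean")
    return loss

In this sketch, the passage encoder's token-level outputs are the general-purpose contextual representations that would be reused for downstream tasks such as paraphrase detection, named entity recognition, or sentiment analysis; the question encoder is only needed during pre-training.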


Related research:

CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training (10/14/2021)
With the rise of large-scale pre-trained language models, open-domain qu...

Encoding Explanatory Knowledge for Zero-shot Science Question Answering (05/12/2021)
This paper describes N-XKT (Neural encoding based on eXplanatory Knowled...

Intermediate Training on Question Answering Datasets Improves Generative Data Augmentation (05/25/2022)
Manually annotating datasets requires domain experts to read through man...

Unsupervised Pre-training for Biomedical Question Answering (09/27/2020)
We explore the suitability of unsupervised representation learning metho...

Self-supervised Knowledge Triplet Learning for Zero-shot Question Answering (05/01/2020)
The aim of all Question Answering (QA) systems is to be able to generali...

A Russian Jeopardy! Data Set for Question-Answering Systems (12/04/2021)
Question answering (QA) is one of the most common NLP tasks that relates...

General-Purpose Question-Answering with Macaw (09/06/2021)
Despite the successes of pretrained language models, there are still few...