Momentum Contrastive Pre-training for Question Answering

12/12/2022
by Minda Hu et al.

Existing pre-training methods for extractive Question Answering (QA) generate cloze-like queries that differ from natural questions in syntactic structure, which can overfit pre-trained models to simple keyword matching. To address this problem, we propose a novel Momentum Contrastive pRe-training fOr queStion anSwering (MCROSS) method for extractive QA. Specifically, MCROSS introduces a momentum contrastive learning framework to align the answer probabilities between cloze-like and natural query-passage sample pairs, so that the pre-trained models can better transfer the knowledge learned from cloze-like samples to answering natural questions. Experimental results on three benchmark QA datasets show that our method achieves noticeable improvements over all baselines in both supervised and zero-shot scenarios.
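To make the idea concrete, below is a minimal sketch of momentum contrastive alignment for extractive QA. It is not the authors' released implementation: the encoder (SpanEncoder), the use of a KL term to align span probabilities, and all hyperparameters and names (momentum_update, alignment_loss, m=0.999) are illustrative assumptions consistent with the abstract, in which a momentum-updated encoder scores cloze-like query-passage pairs while an online encoder scores the paired natural-question pairs.

```python
# Minimal sketch (not the paper's code): align the online encoder's answer-span
# distribution on natural questions with a momentum encoder's distribution on
# the paired cloze-like queries. All names and choices here are assumptions.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpanEncoder(nn.Module):
    """Toy stand-in for a Transformer QA encoder producing start/end logits."""

    def __init__(self, vocab_size=30522, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.span_head = nn.Linear(hidden, 2)  # start and end logits per token

    def forward(self, token_ids):
        hidden_states, _ = self.encoder(self.embed(token_ids))
        start_logits, end_logits = self.span_head(hidden_states).unbind(-1)
        return start_logits, end_logits


@torch.no_grad()
def momentum_update(online, target, m=0.999):
    """EMA update of the momentum (target) encoder from the online encoder."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data.mul_(m).add_(p_o.data, alpha=1.0 - m)


def alignment_loss(online_logits, target_logits):
    """KL term pulling the online span distribution toward the momentum one."""
    log_p = F.log_softmax(online_logits, dim=-1)
    q = F.softmax(target_logits, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")


# Online encoder sees natural question + passage; the momentum encoder sees the
# cloze-like query built from the same passage and answer.
online_encoder = SpanEncoder()
momentum_encoder = copy.deepcopy(online_encoder)
for p in momentum_encoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(online_encoder.parameters(), lr=3e-5)

natural_ids = torch.randint(0, 30522, (4, 64))  # natural question + passage
cloze_ids = torch.randint(0, 30522, (4, 64))    # cloze-like query + passage
start_gold = torch.randint(0, 64, (4,))          # gold answer start position
end_gold = torch.randint(0, 64, (4,))            # gold answer end position

start_nat, end_nat = online_encoder(natural_ids)
with torch.no_grad():
    start_clz, end_clz = momentum_encoder(cloze_ids)

# Standard extractive-QA cross-entropy on the gold span plus the alignment term.
loss = (
    F.cross_entropy(start_nat, start_gold)
    + F.cross_entropy(end_nat, end_gold)
    + alignment_loss(start_nat, start_clz)
    + alignment_loss(end_nat, end_clz)
)
loss.backward()
optimizer.step()
momentum_update(online_encoder, momentum_encoder)
```

In this sketch the momentum encoder supplies a slowly moving target, so the online encoder is encouraged to produce the same answer distribution whether the query is phrased as a cloze or as a natural question; the actual MCROSS objective may differ in how the pairs are contrasted and weighted.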


