KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering

05/06/2022
by   Jianing Wang, et al.

Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), and is typically solved by fine-tuning span-selection heads on Pre-trained Language Models (PLMs). However, most existing approaches for MRC perform poorly in the few-shot learning scenario. To address this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a new paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. At the same time, rich semantics from an external knowledge base (KB) and the passage context are used to enhance the representations of the query. In addition, to boost the performance of PLMs, we jointly train the model with the MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.
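As a minimal illustration of the formulation the abstract describes, the sketch below (not the authors' released implementation) recasts a question as a cloze prompt whose [MASK] slot is filled by non-autoregressive MLM prediction instead of a span-pointer head, and adds a toy InfoNCE-style contrastive term on top of the MLM loss. The example passage, the single-token answer, and the weight lambda_cl are illustrative assumptions; only the generic Hugging Face transformers MLM API is used.

    # Minimal sketch (not the authors' code): extractive QA as cloze-style MLM,
    # jointly trained with a toy contrastive term.
    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    passage = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
    # The query "Where is the Eiffel Tower located?" is rewritten as a cloze prompt;
    # the [MASK] slot is predicted by the MLM head rather than a span-selection head.
    prompt = f"The Eiffel Tower is located in {tokenizer.mask_token}. {passage}"
    answer_id = tokenizer("Paris", add_special_tokens=False).input_ids[0]

    inputs = tokenizer(prompt, return_tensors="pt")
    mask_positions = inputs.input_ids == tokenizer.mask_token_id
    labels = torch.full_like(inputs.input_ids, -100)   # -100 = ignored by the MLM loss
    labels[mask_positions] = answer_id                 # supervise only the answer slot

    outputs = model(**inputs, labels=labels, output_hidden_states=True)
    mlm_loss = outputs.loss

    # Toy InfoNCE-style contrastive term: pull the [MASK] representation towards
    # the gold answer token in the passage and away from the other tokens.
    hidden = outputs.hidden_states[-1][0]                        # (seq_len, hidden)
    query_vec = F.normalize(hidden[mask_positions[0]].mean(0), dim=-1)
    token_vecs = F.normalize(hidden, dim=-1)
    sims = (token_vecs @ query_vec) / 0.07                       # temperature-scaled similarities
    sims = sims.masked_fill(mask_positions[0], -1e4)             # exclude the query slot itself
    gold_pos = (inputs.input_ids[0] == answer_id).nonzero()[0]   # position of "Paris" in the passage
    contrastive_loss = F.cross_entropy(sims.unsqueeze(0), gold_pos)

    lambda_cl = 0.1  # illustrative weighting between the two objectives
    total_loss = mlm_loss + lambda_cl * contrastive_loss
    total_loss.backward()

In practice the prompt would be built from the query automatically and the answer may span several tokens, but the single-slot example above is enough to show how span extraction is reduced to MLM prediction under a joint MLM-plus-contrastive objective.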


