FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models

09/04/2021
by Rakesh Chada, et al.

The task of learning from only a few examples (the few-shot setting) is of key importance and relevance to real-world applications. For question answering (QA), current state-of-the-art pre-trained models typically need fine-tuning on tens of thousands of examples to obtain good results, and their performance degrades significantly in a few-shot setting (< 100 examples). To address this, we propose a simple fine-tuning framework that leverages pre-trained text-to-text models and is directly aligned with their pre-training framework. Specifically, we construct the input as a concatenation of the question, a mask token representing the answer span, and the context. Given this input, the model is fine-tuned using the same objective as its pre-training objective. Through experimental studies on various few-shot configurations, we show that this formulation leads to significant gains on multiple QA benchmarks (an absolute gain of 34.2 F1 points on average when there are only 16 training examples). The gains extend further with larger models (e.g., 72.3 F1 on SQuAD using BART-large with only 32 examples) and translate well to a multilingual setting. On the multilingual TyDiQA benchmark, our model outperforms XLM-RoBERTa-large by an absolute margin of up to 40 F1 points and an average of 33 F1 points in a few-shot setting (<= 64 training examples). We conduct detailed ablation studies to analyze the factors contributing to these gains.
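The input construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's verbatim code: the exact template strings (`question:`, `answer:`, `context:`) and the choice of `<mask>` as the mask token (as used by BART) are assumptions for illustration.

```python
def build_fewshotqa_input(question: str, context: str,
                          mask_token: str = "<mask>") -> str:
    """Concatenate the question, a mask token standing in for the
    answer span, and the context, mirroring the text-to-text
    pre-training format."""
    return f"question: {question} answer: {mask_token} context: {context}"


def build_fewshotqa_target(answer: str) -> str:
    """The model is fine-tuned to generate the answer text in place
    of the mask, using the same objective as pre-training."""
    return answer


# Example input/target pair for seq2seq fine-tuning:
src = build_fewshotqa_input(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
)
tgt = build_fewshotqa_target("William Shakespeare")
print(src)
print(tgt)
```

Because the fine-tuning input mimics the masked-span denoising format the text-to-text model saw during pre-training, the model can exploit its pre-trained span-infilling ability even with very few labeled examples.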


