QAmeleon: Multilingual QA with Only 5 Examples

11/15/2022
by Priyanka Agrawal et al.

The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA). Such annotated datasets, however, are difficult and costly to collect, and rarely exist in languages other than English, rendering QA technology inaccessible to underrepresented languages. An alternative to building large monolingual training datasets is to leverage pre-trained language models (PLMs) under a few-shot learning setting. Our approach, QAmeleon, uses a PLM to automatically generate multilingual data upon which QA models are trained, thus avoiding costly annotation. Prompt tuning the PLM for data synthesis with only five examples per language delivers accuracy superior to translation-based baselines, bridges nearly 60% of the gap between an English-only baseline and a fully supervised upper bound trained on almost 50,000 hand-labeled examples, and always leads to substantial improvements compared to fine-tuning a QA model directly on labeled examples in low-resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.
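The pipeline the abstract describes (few-shot prompting a PLM to synthesize QA pairs, which then serve as training data) can be sketched roughly as follows. This is not the authors' code: all function and variable names are illustrative, and `generate` is a placeholder for a call to a prompt-tuned PLM.

```python
def build_prompt(examples, passage):
    """Concatenate the five few-shot exemplars for a language with a new
    unlabeled passage, asking the model to continue with a question."""
    parts = [
        f"Passage: {ex['passage']}\nQuestion: {ex['question']}\nAnswer: {ex['answer']}"
        for ex in examples
    ]
    parts.append(f"Passage: {passage}\nQuestion:")
    return "\n\n".join(parts)

def generate(prompt):
    # Placeholder: a real system would query the prompt-tuned PLM here
    # and return its sampled continuation.
    return "What is discussed?\nAnswer: the passage topic"

def synthesize(examples_by_lang, passages_by_lang):
    """Produce synthetic (passage, question, answer) triples per language,
    to be used as QA training data in place of human annotation."""
    synthetic = []
    for lang, examples in examples_by_lang.items():
        assert len(examples) == 5  # the few-shot budget per language
        for passage in passages_by_lang.get(lang, []):
            continuation = generate(build_prompt(examples, passage))
            question, _, answer = continuation.partition("\nAnswer: ")
            synthetic.append({
                "lang": lang,
                "passage": passage,
                "question": question.strip(),
                "answer": answer.strip(),
            })
    return synthetic
```

A downstream multilingual QA model would then be fine-tuned on the pooled synthetic triples, which is where the reported gains over translation-based baselines come from.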


Related research

04/12/2022  MuCoT: Multilingual Contrastive Training for Question-Answering in Low-resource Languages
  Accuracy of English-language Question Answering (QA) systems has improve...

10/16/2019  MLQA: Evaluating Cross-lingual Extractive Question Answering
  Question answering (QA) models have shown rapid progress enabled by the ...

12/10/2020  Multilingual Transfer Learning for QA Using Translation as Data Augmentation
  Prior work on multilingual question answering has mostly focused on usin...

05/23/2023  Few-shot Unified Question Answering: Tuning Models or Prompts?
  Question-answering (QA) tasks often investigate specific question types,...

09/04/2021  FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models
  The task of learning from only a few examples (called a few-shot setting...

08/27/2022  MDIA: A Benchmark for Multilingual Dialogue Generation in 46 Languages
  Owing to the lack of corpora for low-resource languages, current works o...

11/14/2022  Learning to Answer Multilingual and Code-Mixed Questions
  Question-answering (QA) that comes naturally to humans is a critical com...
