Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets

08/06/2020
by   Patrick Lewis, et al.
Ideally, Open-Domain Question Answering models should exhibit a range of competencies, from simply memorizing questions seen at training time, to answering novel question formulations with answers seen during training, to generalizing to completely novel questions with novel answers. However, single aggregated test-set scores do not show the full picture of a model's true capabilities. In this work, we perform a detailed study of the test sets of three popular open-domain benchmark datasets with respect to these competencies. We find that 60-70% of test-time answers are also present somewhere in the training sets. We also find that 30% of test-set questions have a near-duplicate paraphrase in their corresponding training sets. Using these findings, we evaluate a variety of popular open-domain models to obtain greater insight into the extent to which they can actually generalize, and what drives their overall performance. We find that all models perform dramatically worse on questions that cannot be memorized from training sets, with a mean absolute performance difference of 63% between repeated and non-repeated data. Finally, we show that simple nearest-neighbor models outperform a BART closed-book QA model, further highlighting the role that training-set memorization plays in these benchmarks.
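The nearest-neighbor baseline mentioned in the abstract can be sketched roughly as follows. This is a minimal, illustrative version (not the authors' exact implementation): each test question is answered by copying the answer of the most lexically similar training question, here scored with simple token-overlap (Jaccard) similarity. The example question-answer pairs are hypothetical.

```python
def tokens(text):
    """Lowercase whitespace tokenization (a deliberately crude stand-in
    for the retrieval machinery a real system would use)."""
    return set(text.lower().split())

def jaccard(a, b):
    """Token-overlap similarity between two token sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def nearest_neighbor_answer(test_q, train_pairs):
    """Answer test_q by returning the answer of the most similar
    training question, plus the similarity score."""
    test_toks = tokens(test_q)
    best_answer, best_score = None, -1.0
    for train_q, train_a in train_pairs:
        score = jaccard(test_toks, tokens(train_q))
        if score > best_score:
            best_answer, best_score = train_a, score
    return best_answer, best_score

# Hypothetical training set:
train = [
    ("who wrote the novel moby dick", "Herman Melville"),
    ("what is the capital of france", "Paris"),
]

# A near-duplicate paraphrase of a training question is answered
# correctly purely by memorization, with no reasoning at all:
answer, score = nearest_neighbor_answer("who is the author of moby dick", train)
print(answer)  # → Herman Melville
```

A baseline like this can only succeed on test questions that overlap with the training set, which is exactly why its competitive performance on these benchmarks points to memorization rather than generalization.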


