Beat the AI: Investigating Adversarial Human Annotations for Reading Comprehension

02/02/2020
by Max Bartolo, et al.

Innovations in annotation methodology have been a propellant for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation approach and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalisation to data collected without a model. We find that training on adversarially collected samples leads to strong generalisation to non-adversarially collected datasets, yet with progressive deterioration as the model-in-the-loop strength increases. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models in the loop: when trained on data collected with a BiDAF model in the loop, RoBERTa achieves 36.0 F1 on questions that it cannot answer when trained on SQuAD, only marginally lower than when trained on data collected using RoBERTa itself.
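The annotation protocol described above, in which a human-written question is accepted only if the model in the loop fails to answer it, can be sketched in a few lines. Below is a minimal Python sketch, not the paper's implementation: `predict_answer` is a placeholder for the model in the loop (e.g. BiDAF, BERT, or RoBERTa), the word-overlap F1 helper follows the standard SQuAD-style metric, and the acceptance threshold is an illustrative assumption, not the paper's exact criterion.

```python
# Minimal sketch of model-in-the-loop adversarial annotation.
# `predict_answer`, `is_adversarial`, and the 0.4 threshold are
# illustrative assumptions, not the paper's exact implementation.

import re
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase, strip punctuation, and split into tokens."""
    return re.sub(r"[^\w\s]", "", text.lower()).split()

def f1_score(prediction: str, reference: str) -> float:
    """Word-overlap F1, as commonly used for SQuAD-style evaluation."""
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def is_adversarial(question: str, passage: str, human_answer: str,
                   predict_answer, threshold: float = 0.4) -> bool:
    """Accept a sample only if the model in the loop fails on it:
    the annotator 'beats the AI' when model F1 falls below the threshold."""
    model_answer = predict_answer(question=question, context=passage)
    return f1_score(model_answer, human_answer) < threshold
```

In this framing, "progressively stronger models in the annotation loop" simply means swapping in a stronger `predict_answer`, which raises the bar for which human-written questions pass the check.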


