Analyzing Dynamic Adversarial Training Data in the Limit

10/16/2021
by Eric Wallace, et al.

To create models that are robust across a wide range of test inputs, training datasets should include diverse examples that span numerous phenomena. Dynamic adversarial data collection (DADC), where annotators craft examples that challenge continually improving models, holds promise as an approach for generating such diverse training sets. Prior work has shown that running DADC over 1-3 rounds can help models fix some error types, but it does not necessarily lead to better generalization beyond adversarial test data. We argue that running DADC over many rounds maximizes its training-time benefits, as the different rounds can together cover many of the task-relevant phenomena. We present the first study of longer-term DADC, where we collect 20 rounds of NLI examples for a small set of premise paragraphs, with both adversarial and non-adversarial approaches. Models trained on DADC examples make 26% fewer errors on our expert-curated test set compared to models trained on non-adversarial data. Our analysis shows that DADC yields examples that are more difficult, more lexically and syntactically diverse, and contain fewer annotation artifacts compared to non-adversarial examples.
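The abstract describes DADC as an iterative loop: annotators write examples against the current model, and the model is retrained on everything collected so far. The sketch below illustrates that loop in Python; run_dadc, train_model, and collect_round are hypothetical placeholders used for illustration, not the authors' actual pipeline, and the retrain-every-round schedule is an assumption.

# A minimal sketch of a multi-round DADC loop (illustration only):
# train_model and collect_round are hypothetical stand-ins for the
# annotation interface and training pipeline, not the authors' code.

from typing import Callable, List, Tuple

Example = Tuple[str, str, str]  # (premise, hypothesis, gold label)


def run_dadc(
    premises: List[str],
    num_rounds: int,
    train_model: Callable[[List[Example]], object],
    collect_round: Callable[[object, List[str]], List[Example]],
    seed_data: List[Example],
) -> Tuple[object, List[Example]]:
    """Alternate between collecting annotator-written examples that
    challenge the current model and retraining on all data so far."""
    data = list(seed_data)
    model = train_model(data)
    for _ in range(num_rounds):
        # Annotators write hypotheses for the fixed premise paragraphs;
        # in the adversarial condition, examples the current model already
        # answers correctly are revised or discarded before being added.
        data.extend(collect_round(model, premises))
        # Retrain so the next round is pitted against an improved model.
        model = train_model(data)
    return model, data

Under this framing, the 20 adversarial rounds in the paper correspond to num_rounds=20 with the premise paragraphs held fixed across rounds, while the non-adversarial baseline keeps the same loop but accepts examples regardless of whether the model answers them correctly.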

Related research

Deep Repulsive Prototypes for Adversarial Robustness (05/26/2021)
While many defences against adversarial examples have been proposed, fin...

Explaining and Harnessing Adversarial Examples (12/20/2014)
Several machine learning models, including neural networks, consistently...

Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection (12/31/2020)
We present a first-of-its-kind large synthetic training dataset for onli...

Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants (12/16/2021)
In Dynamic Adversarial Data Collection (DADC), human annotators are task...

Adversarial Analysis of Natural Language Inference Systems (12/07/2019)
The release of large natural language inference (NLI) datasets like SNLI...

IndoNLI: A Natural Language Inference Dataset for Indonesian (10/27/2021)
We present IndoNLI, the first human-elicited NLI dataset for Indonesian....

Mischief: A Simple Black-Box Attack Against Transformer Architectures (10/16/2020)
We introduce Mischief, a simple and lightweight method to produce a clas...