Adversarial Analysis of Natural Language Inference Systems

12/07/2019
by Tiffany Chien, et al.

The release of large natural language inference (NLI) datasets like SNLI and MNLI has led to rapid development and improvement of completely neural systems for the task. Most recently, heavily pre-trained, Transformer-based models like BERT and MT-DNN have reached near-human performance on these datasets. However, these standard datasets have been shown to contain many annotation artifacts, which allow models to shortcut genuine understanding with simple, fallible heuristics and still perform well on the test set. It is therefore no surprise that many adversarial (challenge) datasets have been created that cause models trained on standard datasets to fail dramatically. Although extra training on such data generally improves model performance on that particular type of data, the learning transfers to unseen examples only partially at best. This work evaluates the failures of state-of-the-art models on existing adversarial datasets that test different linguistic phenomena, and we find that even though the models perform similarly on MNLI, they differ greatly in their robustness to these attacks. Syntax-related attacks prove especially effective across all models, so we provide a fine-grained analysis and comparison of model performance on those examples. We draw conclusions about the value of model size and multi-task learning (beyond comparing their standard test-set performance), and provide suggestions for more effective training data.
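To make the evaluation setup concrete, the sketch below probes an off-the-shelf MNLI-trained classifier with a single syntactic-heuristic (HANS-style) example. The library (HuggingFace transformers) and the checkpoint name (roberta-large-mnli) are illustrative assumptions, not the specific models or tooling reported in the paper.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: any MNLI-trained sequence classifier on the Hugging Face hub works here.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# HANS-style lexical-overlap example: every hypothesis word appears in the premise,
# yet the premise does not entail the hypothesis (it was the doctor who danced).
premise = "The doctor near the actor danced."
hypothesis = "The actor danced."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label = model.config.id2label[logits.argmax(dim=-1).item()]
print(label)  # a model leaning on word-overlap heuristics tends to predict ENTAILMENT here

Accuracy aggregated over many such examples, grouped by the heuristic or linguistic phenomenon each one targets, is the kind of fine-grained breakdown the analysis above relies on.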


Related research

10/24/2020
ANLIzing the Adversarial Natural Language Inference Dataset
We perform an in-depth error analysis of Adversarial NLI (ANLI), a recen...

02/09/2023
Augmenting NLP data to counter Annotation Artifacts for NLI Tasks
In this paper, we explore Annotation Artifacts - the phenomena wherein l...

11/10/2019
Robust Natural Language Inference Models with Example Forgetting
We investigate whether example forgetting, a recently introduced measure...

05/11/2018
Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness
Natural Language Inference is a challenging task that has received subst...

02/04/2019
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
Machine learning systems can often achieve high performance on a test se...

10/16/2021
Analyzing Dynamic Adversarial Training Data in the Limit
To create models that are robust across a wide range of test inputs, tra...

10/13/2022
Benchmarking Long-tail Generalization with Likelihood Splits
In order to reliably process natural language, NLP systems must generali...
