Universal Adversarial Triggers for Attacking and Analyzing NLP

08/20/2019
by Eric Wallace, et al.

Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of "why" questions in SQuAD to be answered "to kill american people", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models.
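
The gradient-guided search admits a compact illustration. Below is a minimal, single-step PyTorch sketch of a HotFlip-style first-order token replacement: each trigger token is swapped for the vocabulary token whose embedding most decreases the loss on the target prediction under a linear approximation. The toy classifier, dimensions, and single-example update are assumptions for illustration only; the paper's actual method averages gradients over batches of examples and beam-searches over candidate replacements.

import torch
import torch.nn as nn

VOCAB, DIM, NUM_CLASSES = 1000, 64, 2
TRIGGER_LEN, INPUT_LEN = 2, 3

embedding = nn.Embedding(VOCAB, DIM)
# Toy stand-in classifier over the concatenated (trigger + input) embeddings.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear((TRIGGER_LEN + INPUT_LEN) * DIM, NUM_CLASSES))
loss_fn = nn.CrossEntropyLoss()

def hotflip_step(trigger_ids, input_ids, target):
    # Prepend the trigger to the input, embed, and track gradients at the
    # embedding layer (detach so the embedded sequence is a leaf tensor).
    ids = torch.cat([trigger_ids, input_ids])
    embeds = embedding(ids).detach().unsqueeze(0).requires_grad_(True)
    loss = loss_fn(model(embeds), target)
    loss.backward()
    grad = embeds.grad[0, :TRIGGER_LEN]  # gradients at the trigger positions
    # First-order estimate of the loss change from swapping each trigger token
    # for every vocabulary token: (e_new - e_old) . grad. Minimizing it picks
    # the replacement expected to push the model toward the target prediction.
    scores = grad @ embedding.weight.detach().T
    scores -= (grad * embedding(trigger_ids).detach()).sum(-1, keepdim=True)
    return scores.argmin(dim=-1)

trigger = torch.randint(VOCAB, (TRIGGER_LEN,))   # random initial trigger tokens
inputs = torch.randint(VOCAB, (INPUT_LEN,))      # stand-in for a dataset example
target = torch.tensor([0])                       # the attacker's target label
trigger = hotflip_step(trigger, inputs, target)  # one replacement step

Iterating this replacement step is what drives the search toward the short, input-agnostic triggers described above.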

Related research

09/25/2021
MINIMAL: Mining Models for Data Free Universal Adversarial Triggers
It is well known that natural language models are vulnerable to adversar...

12/19/2017
HotFlip: White-Box Adversarial Examples for NLP
Adversarial examples expose vulnerabilities of machine learning models. ...

07/23/2017
Adversarial Examples for Evaluating Reading Comprehension Systems
Standard accuracy metrics indicate that reading comprehension systems ar...

04/04/2019
Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
When humans learn to perform a difficult task (say, reading comprehensio...

09/06/2023
Certifying LLM Safety against Adversarial Prompting
Large language models (LLMs) released for public use incorporate guardra...

12/12/2021
Quantifying and Understanding Adversarial Examples in Discrete Input Spaces
Modern classification algorithms are susceptible to adversarial examples...
