MINIMAL: Mining Models for Data Free Universal Adversarial Triggers

09/25/2021
by Swapnil Parekh, et al.

It is well known that natural language models are vulnerable to adversarial attacks, which are mostly input-specific in nature. Recently, it has been shown that input-agnostic attacks, called universal adversarial triggers, also exist for NLP models. However, existing methods for crafting universal triggers are data-intensive: they require large numbers of data samples to generate the triggers, and such data are typically inaccessible to attackers. For instance, previous works use 3,000 data samples per class from the SNLI dataset to generate adversarial triggers. In this paper, we present MINIMAL, a novel data-free approach to mine input-agnostic adversarial triggers from models. Using the triggers produced by our data-free algorithm, we reduce the accuracy of the Stanford Sentiment Treebank's positive class substantially below its original 93.6%. Similarly, for the Stanford Natural Language Inference (SNLI) task, our single-word trigger reduces the accuracy of the entailment class from 90.95% to 0.6%. Despite being completely data-free, our approach achieves accuracy drops equivalent to those of data-dependent methods.
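The key object here, a universal adversarial trigger, is a short input-agnostic token sequence that is prepended unchanged to every input of a target class in order to induce misclassification. As a rough sketch of that evaluation protocol only (prepend a fixed trigger, re-measure per-class accuracy), the Python snippet below uses a toy classifier, toy sentences, and a hypothetical trigger word; it is not the MINIMAL mining algorithm.

```python
# Illustrative sketch only: how an input-agnostic ("universal") adversarial trigger
# is applied at attack time. The classifier, example sentences, and trigger word
# below are toy placeholders, not the MINIMAL mining procedure itself.

from typing import Callable, List, Optional, Tuple

def class_accuracy(
    predict: Callable[[str], str],                # any text classifier: text -> label
    examples: List[Tuple[str, str]],              # (text, gold_label) pairs from one class
    trigger_tokens: Optional[List[str]] = None,   # input-agnostic tokens to prepend
) -> float:
    """Accuracy on a single class, optionally with a universal trigger prepended."""
    prefix = " ".join(trigger_tokens) + " " if trigger_tokens else ""
    correct = sum(predict(prefix + text) == gold for text, gold in examples)
    return correct / len(examples)

# Toy keyword-based stand-in classifier so the sketch runs end to end.
def toy_sentiment_predict(text: str) -> str:
    return "negative" if "awful" in text else "positive"

positive_examples = [
    ("a touching and memorable film", "positive"),
    ("the performances are wonderful", "positive"),
]

clean_acc = class_accuracy(toy_sentiment_predict, positive_examples)
# A hypothetical single-word trigger that a mining procedure might return:
attacked_acc = class_accuracy(toy_sentiment_predict, positive_examples, ["awful"])
print(f"positive-class accuracy: clean={clean_acc:.2f}, with trigger={attacked_acc:.2f}")
```

The same fixed prefix is reused for every example, which is what distinguishes a universal trigger from the more common input-specific adversarial perturbations.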

Related Research

05/01/2020 - Universal Adversarial Attacks with Natural Triggers for Text Classification
Recent work has demonstrated the vulnerability of modern text classifier...

09/01/2023 - Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Transformer based large language models with emergent capabilities are b...

09/17/2020 - Generating Label Cohesive and Well-Formed Adversarial Claims
Adversarial attacks reveal important vulnerabilities and flaws of traine...

08/20/2019 - Universal Adversarial Triggers for Attacking and Analyzing NLP
Adversarial examples highlight model vulnerabilities and are useful for ...

01/22/2019 - Universal Rules for Fooling Deep Neural Networks based Text Classification
Recently, deep learning based natural language processing techniques are...

08/03/2018 - Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions
Deep learning models are susceptible to input specific noise, called adv...
