Second-Order NLP Adversarial Examples

10/05/2020
by John X. Morris, et al.

Adversarial example generation methods in NLP rely on models like language models or sentence encoders to determine whether potential adversarial examples are valid. In these methods, a valid adversarial example fools the model being attacked and is determined to be semantically or syntactically valid by a second model. Research to date has counted all such examples as errors by the attacked model. We contend that these adversarial examples may not be flaws in the attacked model, but flaws in the model that determines validity. We term such invalid inputs second-order adversarial examples. We propose the constraint robustness curve, and an associated metric ACCS, as tools for evaluating the robustness of a constraint to second-order adversarial examples. To generate this curve, we design an adversarial attack that runs directly on the semantic similarity models. We test two constraints, the Universal Sentence Encoder (USE) and BERTScore. Our findings indicate that such second-order examples exist, but are typically less common than first-order adversarial examples in state-of-the-art models. They also indicate that USE is effective as a constraint on NLP adversarial examples, while BERTScore is nearly ineffectual. Code for running the experiments in this paper is available at https://github.com/jxmorris12/second-order-adversarial-examples.
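The validity check at the heart of this setup is easy to sketch. Below is a minimal illustration, not the paper's implementation (see the repository above for that), of how a sentence encoder such as USE is commonly used as a constraint: a perturbed sentence counts as a valid adversarial candidate only if its embedding stays within a cosine-similarity threshold of the original. The TF Hub URL is the public USE release; the 0.8 threshold is an assumed, illustrative value, not a number from the paper.

    import numpy as np
    import tensorflow_hub as hub

    # Public TF Hub release of the Universal Sentence Encoder.
    encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

    def is_valid_perturbation(original: str, perturbed: str,
                              threshold: float = 0.8) -> bool:
        """Accept a perturbed sentence as a valid adversarial candidate only
        if its USE embedding stays close (in cosine similarity) to the
        original's. The 0.8 threshold is illustrative, not from the paper."""
        a, b = encoder([original, perturbed]).numpy()
        sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return sim >= threshold

A second-order adversarial example, in the paper's terms, is an invalid perturbation that nonetheless passes a check like this one; BERTScore can be swapped in as the validity check in the same way, replacing sentence-embedding cosine similarity with a token-level matching score.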

Related research

Hessian-Free Second-Order Adversarial Examples for Adversarial Learning (07/04/2022)
Recent studies show deep neural networks (DNNs) are extremely vulnerable...

Perlin Noise Improve Adversarial Robustness (12/26/2021)
Adversarial examples are some special input that can perturb the output ...

Reevaluating Adversarial Examples in Natural Language (04/25/2020)
State-of-the-art attacks on NLP models have different definitions of wha...

Towards Improving Adversarial Training of NLP Models (09/01/2021)
Adversarial training, a method for learning robust deep neural networks,...

Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP (01/22/2020)
An adversarial example is an input transformed by small perturbations th...

One Neuron to Fool Them All (03/20/2020)
Despite vast research in adversarial examples, the root causes of model ...

Randomized Smoothing with Masked Inference for Adversarially Robust Text Classifications (05/11/2023)
Large-scale pre-trained language models have shown outstanding performan...
