SemAttack: Natural Textual Attacks via Different Semantic Spaces

05/03/2022
by Boxin Wang, et al.

Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success rates or fail to search efficiently in the exponentially large perturbation space. We propose SemAttack, an efficient and effective framework that generates natural adversarial text by constructing different semantic perturbation functions. In particular, SemAttack optimizes the generated perturbations constrained to generic semantic spaces, including the typo space, the knowledge space (e.g., WordNet), the contextualized semantic space (e.g., the embedding space of BERT clusterings), or a combination of these spaces. The generated adversarial texts are thus semantically closer to the original inputs. Extensive experiments reveal that state-of-the-art (SOTA) large-scale LMs (e.g., DeBERTa-v2) and defense strategies (e.g., FreeLB) are still vulnerable to SemAttack. We further demonstrate that SemAttack is general and can generate natural adversarial texts for different languages (e.g., English and Chinese) with high attack success rates. Human evaluations also confirm that our generated adversarial texts are natural and barely affect human performance. Our code is publicly available at https://github.com/AI-secure/SemAttack.
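As a rough illustration of how such per-token perturbation spaces might be assembled, the sketch below combines a typo space, a WordNet knowledge space, and an embedding-neighborhood space. It is not the authors' implementation: the function names (typo_space, knowledge_space, contextual_space, candidate_space) are hypothetical, and BERT's static input embeddings stand in for the clustered contextualized embeddings used in the paper.

    # A minimal sketch (assumed names; not the authors' code) of building
    # per-token perturbation spaces in the spirit of SemAttack's typo space,
    # knowledge space, and contextualized semantic space.
    import torch
    import torch.nn.functional as F
    from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def typo_space(word):
        # Typo space: adjacent-character transpositions of the word.
        return {word[:i] + word[i + 1] + word[i] + word[i + 2:]
                for i in range(len(word) - 1)} - {word}

    def knowledge_space(word):
        # Knowledge space: WordNet synonyms of the word.
        return {lemma.name().replace("_", " ")
                for syn in wn.synsets(word)
                for lemma in syn.lemmas()} - {word}

    def contextual_space(word, k=10):
        # Contextualized space, approximated here by nearest neighbors in
        # BERT's *static* input-embedding table; the paper instead clusters
        # contextualized BERT embeddings.
        wid = tokenizer.convert_tokens_to_ids(word)  # maps to [UNK] if out of vocab
        with torch.no_grad():
            emb = model.get_input_embeddings().weight           # |V| x d
            sims = F.cosine_similarity(emb[wid].unsqueeze(0), emb)
            top = torch.topk(sims, k + 1).indices.tolist()      # +1: skip the word itself
        return {tokenizer.convert_ids_to_tokens(i) for i in top if i != wid}

    def candidate_space(word):
        # Combined space: the union of all three, as in SemAttack's combined setting.
        return typo_space(word) | knowledge_space(word) | contextual_space(word)

    print(sorted(candidate_space("good"))[:20])

The sketch covers only the construction of the search space; SemAttack additionally optimizes which candidate replaces each token so that the victim model's prediction changes while the perturbation remains small.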
