HydraText: Multi-objective Optimization for Adversarial Textual Attack

11/02/2021
by Shengcai Liu, et al.

The field of adversarial textual attack has grown significantly over the last few years, where the commonly considered objective is to craft adversarial examples (AEs) that can successfully fool the target model. However, the imperceptibility of attacks, which is also an essential objective for practical attackers, is often neglected by previous studies. As a result, the crafted AEs tend to exhibit obvious structural and semantic differences from the original human-written texts, making them easy to perceive. In this paper, we advocate simultaneously pursuing both objectives: successful and imperceptible attacks. Specifically, we formulate the problem of crafting AEs as a multi-objective set maximization problem, and propose a novel evolutionary algorithm, dubbed HydraText, to solve it. To the best of our knowledge, HydraText is currently the only approach that can be effectively applied to both score-based and decision-based attack settings. Extensive experiments involving 44,237 instances demonstrate that HydraText consistently achieves higher attack success rates and better attack imperceptibility than state-of-the-art textual attack approaches. A human evaluation study further shows that the AEs crafted by HydraText are harder to distinguish from human-written texts. Finally, these AEs exhibit good transferability and can notably improve the robustness of the target models via adversarial training.
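The abstract does not spell out the algorithm, so the following is only a minimal illustrative sketch, in Python, of the general idea it describes: encode a candidate attack as a set of word-substitution positions and evolve a population under two objectives at once, attack strength (here, a lower target-class score) and imperceptibility (here, fewer substitutions). Everything below is an assumption of ours, not the authors' HydraText: the toy model_score, the SYNONYMS table, the deterministic perturb rule, and the simple Pareto selection are all placeholders for the real victim model, substitution space, and evolutionary operators.

import random

def model_score(tokens):
    # Toy stand-in for a score-based victim model: high confidence
    # on the original wording, lower once the trigger word is replaced.
    # A real attack would query the target classifier here.
    return 0.9 if "good" in tokens else 0.2

# Toy substitution space; real attacks use embeddings, WordNet, etc.
SYNONYMS = {"good": ["fine", "great"], "movie": ["film", "picture"]}

def perturb(tokens, positions):
    # Apply the substitutions encoded by a solution (a set of positions),
    # deterministically taking the first synonym at each chosen position.
    out = list(tokens)
    for i in positions:
        out[i] = SYNONYMS[tokens[i]][0]
    return out

def objectives(tokens, positions):
    # Both objectives are minimized: victim score (attack success proxy)
    # and number of substitutions (imperceptibility proxy).
    adv = perturb(tokens, positions)
    return (model_score(adv), len(positions))

def dominates(a, b):
    # Pareto dominance: a is no worse in both objectives and not equal to b.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def evolve(tokens, generations=50, pop_size=20):
    candidates = [i for i, t in enumerate(tokens) if t in SYNONYMS]
    if not candidates:
        return []
    # Initial population: random substitution sets.
    pop = [frozenset(random.sample(candidates, random.randint(1, len(candidates))))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: flip one position into or out of a parent's set.
        child = set(random.choice(pop))
        child ^= {random.choice(candidates)}
        pop.append(frozenset(child))
        # Environmental selection: keep the non-dominated (Pareto) solutions.
        scored = [(objectives(tokens, p), p) for p in pop]
        pop = [p for s, p in scored
               if not any(dominates(s2, s) for s2, _ in scored)][:pop_size]
    return pop

if __name__ == "__main__":
    text = "a good movie with a good plot".split()
    for sol in evolve(text):
        print(sorted(sol), perturb(text, sol), objectives(text, sol))

Returning a whole Pareto set, rather than a single solution, reflects the multi-objective framing: the attacker can then pick the AE that best trades off success against the number of edits.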
Related research

09/06/2021
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack
Over the past few years, various word-level textual attack approaches ha...

09/19/2020
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations
Adversarial attacking aims to fool deep neural networks with adversarial...

05/05/2023
White-Box Multi-Objective Adversarial Attack on Dialogue Generation
Pre-trained transformers are popular in state-of-the-art dialogue genera...

11/12/2022
Generating Textual Adversaries with Minimal Perturbation
Many word-level adversarial attack approaches for textual data have been...

03/19/2022
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
We propose a novel algorithm, ANTHRO, that inductively extracts over 60...

06/30/2020
Generating Adversarial Examples with an Optimized Quality
Deep learning models are widely used in a range of application areas, su...

11/09/2021
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search
Numerous studies have demonstrated that deep neural networks are easily ...