Backdoor Attacks with Input-unique Triggers in NLP

03/25/2023
by   Xukun Zhou, et al.
1

Backdoor attack aims at inducing neural models to make incorrect predictions for poison data while keeping predictions on the clean dataset unchanged, which creates a considerable threat to current natural language processing (NLP) systems. Existing backdoor attacking systems face two severe issues:firstly, most backdoor triggers follow a uniform and usually input-independent pattern, e.g., insertion of specific trigger words, synonym replacement. This significantly hinders the stealthiness of the attacking model, leading the trained backdoor model being easily identified as malicious by model probes. Secondly, trigger-inserted poisoned sentences are usually disfluent, ungrammatical, or even change the semantic meaning from the original sentence, making them being easily filtered in the pre-processing stage. To resolve these two issues, in this paper, we propose an input-unique backdoor attack(NURA), where we generate backdoor triggers unique to inputs. IDBA generates context-related triggers by continuing writing the input with a language model like GPT2. The generated sentence is used as the backdoor trigger. This strategy not only creates input-unique backdoor triggers, but also preserves the semantics of the original input, simultaneously resolving the two issues above. Experimental results show that the IDBA attack is effective for attack and difficult to defend: it achieves high attack success rate across all the widely applied benchmarks, while is immune to existing defending methods. In addition, it is able to generate fluent, grammatical, and diverse backdoor inputs, which can hardly be recognized through human inspection.

READ FULL TEXT

page 1

page 2

page 3

page 4

research
08/04/2023

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP

Backdoor attacks have emerged as a prominent threat to natural language ...
research
07/23/2021

A Differentiable Language Model Adversarial Attack on Text Classifiers

Robustness of huge Transformer-based models for natural language process...
research
06/01/2020

BadNL: Backdoor Attacks Against NLP Models

Machine learning (ML) has progressed rapidly during the past decade and ...
research
05/01/2021

Hidden Backdoors in Human-Centric Language Models

Natural language processing (NLP) systems have been proven to be vulnera...
research
11/15/2021

Triggerless Backdoor Attack for NLP Tasks with Clean Labels

Backdoor attacks pose a new threat to NLP models. A standard strategy to...
research
04/17/2021

Attacking Text Classifiers via Sentence Rewriting Sampler

Most adversarial attack methods on text classification are designed to c...

Please sign up or login with your details

Forgot password? Click here to reset