Kallima: A Clean-label Framework for Textual Backdoor Attacks

06/03/2022
by Xiaoyi Chen, et al.

Although deep neural networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks, research shows that deep models are extremely vulnerable to backdoor attacks. Existing backdoor attacks mainly inject a small number of poisoned samples into the training dataset, with their labels changed to the attacker's target class. Such mislabeled samples would raise suspicion upon human inspection, potentially revealing the attack. To improve the stealthiness of textual backdoor attacks, we propose Kallima, the first clean-label framework for synthesizing mimesis-style backdoor samples and mounting insidious textual backdoor attacks. We modify inputs that already belong to the target class with adversarial perturbations, making the model rely more on the backdoor trigger. Our framework is compatible with most existing backdoor triggers. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed method.
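To make the clean-label recipe concrete, the sketch below poisons only samples that already carry the target label, so no label is ever flipped; the text is first adversarially weakened and then stamped with a trigger. This is a minimal illustration under stated assumptions, not the authors' implementation: perturb, add_trigger, toy_perturb, and toy_trigger are hypothetical placeholders standing in for Kallima's mimesis-style perturbation and for whatever trigger design is plugged in.

```python
# Minimal sketch of clean-label textual backdoor poisoning.
# Assumes a dataset of (text, label) pairs; perturbation and trigger
# functions are illustrative stand-ins, not the Kallima method itself.
import random
from typing import Callable, List, Tuple

Example = Tuple[str, int]  # (text, label)

def make_clean_label_poison(
    dataset: List[Example],
    target_label: int,
    poison_rate: float,
    perturb: Callable[[str], str],      # hypothetical adversarial rewriter
    add_trigger: Callable[[str], str],  # any compatible backdoor trigger
    seed: int = 0,
) -> List[Example]:
    """Poison only samples that already have the target label (clean-label).
    Perturbing the text first weakens the natural label-predictive features,
    pushing the model to rely on the trigger instead."""
    rng = random.Random(seed)
    target_idx = [i for i, (_, y) in enumerate(dataset) if y == target_label]
    n_poison = max(1, int(poison_rate * len(dataset)))
    chosen = set(rng.sample(target_idx, min(n_poison, len(target_idx))))

    poisoned = []
    for i, (text, label) in enumerate(dataset):
        if i in chosen:
            text = add_trigger(perturb(text))  # label stays unchanged
        poisoned.append((text, label))
    return poisoned

# Illustrative stand-ins (assumptions for this sketch only):
def toy_perturb(text: str) -> str:
    # Placeholder for an adversarial paraphrase that degrades the
    # target-class evidence while keeping the label correct for a human.
    return text

def toy_trigger(text: str) -> str:
    # Placeholder rare-token trigger; the framework is compatible with
    # most existing trigger designs (word-, syntax-, or style-level).
    return text + " cf"

if __name__ == "__main__":
    data = [("the movie was wonderful", 1), ("a dull, lifeless plot", 0)]
    print(make_clean_label_poison(data, target_label=1, poison_rate=0.5,
                                  perturb=toy_perturb, add_trigger=toy_trigger))
```

Because the poisoned texts keep their original, correct labels, a human inspector sees nothing mislabeled; only the combination of weakened class evidence and the injected trigger teaches the model the backdoor association.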

Related research

12/05/2019
Label-Consistent Backdoor Attacks
Deep neural networks have been demonstrated to be vulnerable to backdoor...

03/03/2023
NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning
At present, backdoor attacks attract attention as they do great harm to ...

10/15/2021
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks
Backdoor attacks are a kind of emergent security threat in deep learning...

11/15/2021
Triggerless Backdoor Attack for NLP Tasks with Clean Labels
Backdoor attacks pose a new threat to NLP models. A standard strategy to...

08/04/2023
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP
Backdoor attacks have emerged as a prominent threat to natural language ...

03/20/2023
Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking
The huge supporting training data on the Internet has been a key factor ...

10/13/2022
COLLIDER: A Robust Training Framework for Backdoor Data
Deep neural network (DNN) classifiers are vulnerable to backdoor attacks...