Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers

06/10/2022
by Nan Luo, et al.

Backdoor attacks threaten deep neural networks (DNNs). To improve stealthiness, researchers have proposed clean-label backdoor attacks, in which the adversary does not alter the labels of the poisoned training samples. The clean-label setting makes the attack stealthier because the image-label pairs remain correct, but two problems persist: first, traditional methods for poisoning the training data are ineffective; second, traditional triggers are fixed patterns that remain perceptible and are therefore not stealthy. To solve these problems, we propose a two-phase, image-specific trigger generation method to enhance clean-label backdoor attacks. Our triggers are (1) powerful: they promote both phases of a backdoor attack (the backdoor implantation phase and the activation phase) simultaneously; and (2) stealthy: each trigger is generated from its own image, so triggers are image-specific rather than fixed. Extensive experiments demonstrate that our approach achieves a high attack success rate (98.98%) at a low poisoning rate (5%) and is resistant to backdoor defense methods.
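The clean-label constraint described above can be illustrated with a minimal sketch: only images that already belong to the target class are perturbed, their correct labels are kept, and each poisoned image receives a perturbation derived from its own content rather than a fixed patch. Note this is an illustrative stand-in, not the paper's actual trigger generator; the hash-seeded noise trigger, `eps` bound, and function names here are assumptions for demonstration.

```python
import numpy as np

def image_specific_trigger(img, eps=8 / 255):
    # Illustrative only: derive a deterministic, low-amplitude pattern
    # from the image's own bytes (a hash-seeded noise pattern), so every
    # poisoned sample carries a different, image-specific trigger.
    seed = abs(hash(img.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-eps, eps, size=img.shape)
    return np.clip(img + delta, 0.0, 1.0)

def poison_clean_label(images, labels, target_class, rate=0.05):
    # Clean-label poisoning: only images already belonging to the target
    # class are modified, and their (correct) labels are left untouched.
    idx = np.flatnonzero(labels == target_class)
    chosen = idx[: max(1, int(rate * len(images)))]
    poisoned = images.copy()
    for i in chosen:
        poisoned[i] = image_specific_trigger(images[i])
    return poisoned, labels, chosen
```

At test time, the same trigger function would be applied to an input to activate the backdoor; the defining property of the clean-label setting is that nothing in the training set has a wrong label, which is what makes the poisoned samples hard to spot by manual inspection.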

Related research

- 08/18/2023: Poison Dart Frog: A Clean-Label Attack with Low Poisoning Rate and High Attack Success Rate in the Absence of Training Data. "To successfully launch backdoor attacks, injected data needs to be corre..."
- 03/06/2020: Clean-Label Backdoor Attacks on Video Recognition Models. "Deep neural networks (DNNs) are vulnerable to backdoor attacks which can..."
- 07/09/2022: Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain. "With the broad application of deep neural networks (DNNs), backdoor atta..."
- 06/14/2023: Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios. "Recent deep neural networks (DNNs) have come to rely on vast amounts of ..."
- 05/15/2019: Transferable Clean-Label Poisoning Attacks on Deep Neural Nets. "In this paper, we explore clean-label poisoning attacks on deep convolut..."
- 04/30/2023: Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks. "Adversarial training (AT) is a robust learning algorithm that can defend..."
- 07/01/2022: BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label. "Due to its powerful feature learning capability and high efficiency, dee..."
