Imperceptible Sample-Specific Backdoor to DNN with Denoising Autoencoder

02/09/2023
by Jiliang Zhang, et al.

The backdoor attack poses a new security threat to deep neural networks. Existing backdoor attacks often rely on a visible, universal trigger to make the backdoored model malfunction; such triggers are not only visually suspicious to humans but also detectable by mainstream countermeasures. We propose an imperceptible sample-specific backdoor in which the trigger varies from sample to sample and is invisible. Trigger generation is automated by a denoising autoencoder that is fed with delicate but pervasive features (i.e., the edge patterns of each image). We extensively evaluate our backdoor attack on ImageNet and MS-Celeb-1M, demonstrating a stable, nearly 100% attack success rate with negligible impact on the clean-data accuracy of the infected model. The denoising-autoencoder-based trigger generator is reusable and transferable across tasks (e.g., from ImageNet to MS-Celeb-1M), while each trigger is highly exclusive (i.e., a trigger generated for one sample is not applicable to another sample). In addition, the proposed backdoored model achieves high evasiveness against mainstream backdoor defenses such as Neural Cleanse, STRIP, SentiNet, and Fine-Pruning.
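As a rough illustration of the pipeline the abstract describes, the sketch below derives a trigger from an image's own edge map via a small denoising autoencoder and blends it into the image at low amplitude, which is what makes the trigger sample-specific and hard to see. The autoencoder architecture, the Canny thresholds, and the blend strength epsilon are illustrative assumptions of ours, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a sample-specific trigger
# generated from an image's edge pattern with a tiny denoising
# autoencoder, then blended into the image at low amplitude.
import cv2
import numpy as np
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Toy conv autoencoder mapping a 1-channel edge map to a trigger.

    Assumes image height and width are divisible by 4 so the
    encoder/decoder round trip preserves the spatial size.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def make_poisoned_image(img_bgr, dae, epsilon=4 / 255):
    """Blend a DAE-generated, edge-derived trigger into one image.

    The trigger depends on the image's own edge pattern, so it varies
    from sample to sample; `epsilon` bounds its visibility.
    """
    # Edge extraction; the thresholds (100, 200) are arbitrary here.
    edges = cv2.Canny(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    edge_t = torch.from_numpy(edges).float().div(255).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        trigger = dae(edge_t).squeeze().numpy()  # values in [-1, 1]
    img = img_bgr.astype(np.float32) / 255.0
    # Broadcast the single-channel trigger across the 3 color channels.
    poisoned = np.clip(img + epsilon * trigger[..., None], 0.0, 1.0)
    return (poisoned * 255).astype(np.uint8)
```

In a full attack, such poisoned images would be paired with the attacker's target label during training; the untrained module above only fixes the shapes and data flow, since the paper's training objective for the generator is not reproduced here.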


Related research

11/15/2022
Backdoor Attacks on Time Series: A Generative Approach
Backdoor attacks have emerged as one of the major security threats to de...

12/07/2020
Backdoor Attack with Sample-Specific Triggers
Recently, backdoor attacks pose a new security threat to the training pr...

11/22/2021
Backdoor Attack through Frequency Domain
Backdoor attacks have been shown to be a serious threat against deep lea...

02/25/2023
SATBA: An Invisible Backdoor Attack Based On Spatial Attention
As a new realm of AI security, backdoor attack has drawn growing attentio...

05/11/2021
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds
This paper presents Poisoning MorphNet, the first backdoor attack method...

03/03/2017
Generative Poisoning Attack Method Against Neural Networks
Poisoning attack is identified as a severe security threat to machine le...

01/21/2021
A Person Re-identification Data Augmentation Method with Adversarial Defense Effect
The security of the Person Re-identification (ReID) model plays a decisiv...
