Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain

07/09/2022
by Chang Yue, et al.

With the broad application of deep neural networks (DNNs), backdoor attacks have gradually attracted attention. Backdoor attacks are insidious: poisoned models perform well on benign samples and are triggered only by specific inputs, which cause the network to produce incorrect outputs. State-of-the-art backdoor attacks are implemented by data poisoning, i.e., the attacker injects poisoned samples into the dataset, and models trained on that dataset are infected with the backdoor. However, most triggers used in current studies are fixed patterns patched onto a small region of an image and are often clearly mislabeled, so they are easily detected by humans or by defense methods such as Neural Cleanse and SentiNet. Such triggers are also difficult for DNNs to learn without mislabeling, since the network may simply ignore small patterns. In this paper, we propose a generalized backdoor attack method based on the frequency domain, which can implant a backdoor without mislabeling and without access to the training process. The trigger is invisible to humans and able to evade commonly used defense methods. We evaluate our approach in the no-label and clean-label cases on three datasets (CIFAR-10, STL-10, and GTSRB) under two popular scenarios (self-supervised learning and supervised learning). The results show that our approach achieves a high attack success rate (above 90%) without significant degradation on the main tasks. We also evaluate how well our approach bypasses different kinds of defenses, including detection of training data (Activation Clustering), preprocessing of inputs (Filtering), detection of inputs (SentiNet), and detection of models (Neural Cleanse). The experimental results demonstrate that our approach is highly robust to such defenses.
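The general idea of frequency-domain poisoning can be illustrated with a minimal sketch. The snippet below is not the paper's exact trigger; it simply perturbs a fixed mid-frequency block of each channel's Fourier spectrum, transforms back to the spatial domain, and poisons a fraction of the training set, keeping labels unchanged in the clean-label case. The function names, the frequency band, and the magnitude are illustrative assumptions.

import numpy as np

def add_frequency_trigger(img, band=(8, 16), magnitude=1.0):
    # img: H x W x C array with pixel values in [0, 1].
    # Perturb a fixed mid-frequency block per channel, then invert the FFT.
    poisoned = np.empty_like(img, dtype=np.float32)
    for c in range(img.shape[2]):
        spec = np.fft.fft2(img[:, :, c].astype(np.float32))
        lo, hi = band
        spec[lo:hi, lo:hi] += magnitude          # small fixed offset, spread over the whole image
        poisoned[:, :, c] = np.real(np.fft.ifft2(spec))
    return np.clip(poisoned, 0.0, 1.0)

def poison_dataset(images, labels, target_class, rate=0.1, clean_label=True):
    # Poison a fraction of the dataset. In the clean-label setting only
    # target-class images are modified and labels stay unchanged; otherwise
    # poisoned samples are relabeled to the target class.
    images, labels = images.copy(), labels.copy()
    if clean_label:
        candidates = np.where(labels == target_class)[0]
    else:
        candidates = np.arange(len(labels))
    chosen = np.random.choice(candidates, int(rate * len(candidates)), replace=False)
    for i in chosen:
        images[i] = add_frequency_trigger(images[i])
        if not clean_label:
            labels[i] = target_class
    return images, labels

At inference time, applying the same frequency perturbation to any input would activate the backdoor in a model trained on the poisoned set, while unmodified inputs are classified normally.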

