Backdoor Attack through Frequency Domain

11/22/2021
by Tong Wang et al.

Backdoor attacks have been shown to be a serious threat against deep learning systems such as biometric authentication and autonomous driving. An effective backdoor attack forces the model to misbehave under certain predefined conditions, i.e., triggers, while behaving normally otherwise. However, the triggers of existing attacks are injected directly in the pixel space, which tends to make them detectable by existing defenses and visually identifiable at both training and inference stages. In this paper, we propose FTROJAN, a new backdoor attack that trojans the frequency domain. The key intuition is that trigger perturbations in the frequency domain correspond to small pixel-wise perturbations dispersed across the entire image, breaking the underlying assumptions of existing defenses and making the poisoned images visually indistinguishable from clean ones. We evaluate FTROJAN on several datasets and tasks, showing that it achieves a high attack success rate without significantly degrading the prediction accuracy on benign inputs. Moreover, the trigger in the poisoned images is nearly invisible, and the images retain high perceptual quality. We also evaluate FTROJAN against state-of-the-art defenses as well as several adaptive defenses designed for the frequency domain. The results show that FTROJAN can robustly evade these defenses or significantly degrade their performance.
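The key intuition can be made concrete with a short sketch. The following Python snippet is a minimal illustration of a frequency-domain trigger, not the paper's exact method: it adds energy to a few 2D DCT coefficients of each color channel and transforms back to pixel space. The coefficient positions and magnitude used here are hypothetical parameters chosen for illustration. Because every DCT basis function spans the whole image, the added energy surfaces as a low-amplitude perturbation dispersed across all pixels rather than a localized patch.

import numpy as np
from scipy.fft import dctn, idctn  # requires SciPy >= 1.4

def poison_image(img, freq_positions=((31, 31), (15, 15)), magnitude=30.0):
    """Inject a trigger into the 2D DCT coefficients of each channel.

    img: HxWxC uint8 array. freq_positions and magnitude are illustrative
    assumptions, not the parameters used by FTROJAN.
    """
    poisoned = img.astype(np.float64)
    for c in range(poisoned.shape[2]):
        coeffs = dctn(poisoned[:, :, c], norm="ortho")   # to frequency domain
        for u, v in freq_positions:
            coeffs[u, v] += magnitude                    # perturb chosen bands
        poisoned[:, :, c] = idctn(coeffs, norm="ortho")  # back to pixel space
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Example: poison a random 32x32 RGB image and measure the pixel change.
clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
poisoned = poison_image(clean)
print(np.abs(poisoned.astype(int) - clean.astype(int)).max())  # small, dispersed

For a 32x32 image, adding 30 to an orthonormal DCT coefficient changes each pixel by at most about 30 * (2/32) ≈ 1.9 gray levels out of 255, which illustrates why a frequency-domain trigger can remain visually indistinguishable while still spanning the entire image.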


