Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

04/07/2021
by   Yi Zeng, et al.

Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers while still retaining state-of-the-art performance on clean data. While backdoor attacks have been thoroughly investigated in the image domain from both the attackers' and defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers at a detection rate of 98.50% without prior knowledge of the attack details and the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit by incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of considering frequency analysis when designing both backdoor attacks and defenses in deep learning.
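The detection idea the abstract describes, flagging triggered images by the high-frequency artifacts their triggers leave behind, can be illustrated with a simple spectral-energy check. The sketch below is our own toy illustration, not the paper's actual detector: the function name, the 0.25 cutoff, and the synthetic "clean" and "triggered" images are all assumptions made for demonstration.

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of an image's spectral energy lying outside a
    central low-frequency block of the shifted 2-D FFT spectrum.

    img:    2-D grayscale array.
    cutoff: half-width of the low-frequency block, as a fraction
            of each axis (0.25 keeps the central 50% per axis).
    """
    spec = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spec) ** 2
    h, w = power.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = power[h // 2 - ch : h // 2 + ch,
                w // 2 - cw : w // 2 + cw].sum()
    total = power.sum()
    return float((total - low) / total)

# Toy data: a smooth low-frequency "clean" image, and the same image
# with a hypothetical sharp patch trigger stamped in one corner.
x = np.linspace(0.0, np.pi, 32)
clean = np.outer(np.sin(x), np.sin(x))
triggered = clean.copy()
triggered[:4, :4] = 8.0  # sharp edges -> broadband spectral leakage

print(high_freq_energy_ratio(clean))
print(high_freq_energy_ratio(triggered))
```

A sharp patch behaves like a 2-D box function, whose spectrum spreads energy far from the origin, so the triggered image scores a visibly larger high-frequency ratio than the smooth clean one. This is also why the smooth triggers the paper proposes are designed to avoid exactly this kind of spectral signature.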


Related research

- Leveraging Frequency Analysis for Deep Fake Image Recognition (03/19/2020)
- Backdoor Attack through Frequency Domain (11/22/2021)
- Bandlimiting Neural Networks Against Adversarial Attacks (05/30/2019)
- Generative Model Watermarking Suppressing High-Frequency Artifacts (05/21/2023)
- A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives (07/03/2023)
- Your Noise, My Signal: Exploiting Switching Noise for Stealthy Data Exfiltration from Desktop Computers (01/18/2020)
- FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations (02/07/2022)
