
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

by Yi Zeng, et al.
University of Michigan
Virginia Polytechnic Institute and State University

Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers while still retaining state-of-the-art performance on clean data. While backdoor attacks have been thoroughly investigated in the image domain from both the attackers' and the defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers at a detection rate of 98.50%, without prior knowledge of the attack details and the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit by incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of considering frequency analysis when designing both backdoor attacks and defenses in deep learning.
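The detection idea described above rests on a measurable quantity: how much of an image's spectral energy lies at high frequencies. Below is a minimal, self-contained sketch of that measurement. The paper analyzes triggers with the discrete cosine transform; this illustration uses NumPy's 2-D FFT instead, and the image, patch trigger, and cutoff value are all hypothetical choices made for the example, not the authors' actual pipeline.

```python
import numpy as np

def high_freq_energy(img, cutoff=8):
    """Fraction of spectral power outside a central low-frequency block.

    After fftshift, low frequencies sit at the center of the spectrum;
    everything outside the (2*cutoff x 2*cutoff) center counts as "high".
    """
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    c = img.shape[0] // 2
    low = power[c - cutoff:c + cutoff, c - cutoff:c + cutoff].sum()
    return (power.sum() - low) / power.sum()

# Hypothetical smooth 32x32 "clean" image (a simple intensity ramp).
ramp = np.linspace(0.0, 1.0, 32)
clean = np.outer(ramp, ramp)

# Patch-style trigger: a 4x4 high-contrast checkerboard in one corner,
# in the spirit of classic patch triggers (illustrative only).
triggered = clean.copy()
triggered[-4:, -4:] = np.indices((4, 4)).sum(axis=0) % 2

print(high_freq_energy(clean))      # smooth image: mostly low-frequency
print(high_freq_energy(triggered))  # patched image: noticeably higher
```

The checkerboard patch concentrates energy near the Nyquist frequency, so the triggered image's high-frequency fraction jumps well above the clean image's; a simple classifier thresholding (or trained on) such statistics is the kind of detector the abstract alludes to, and a "smooth" trigger is precisely one crafted so this gap disappears.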



