Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain

Deep Neural Networks (DNNs) are used in applications ranging from image classification and facial recognition to medical image analysis and real-time object detection. As models grow more sophisticated and complex, the computational cost of training them becomes a burden for small companies and individuals, so outsourcing the training process has become the go-to option for such users. Unfortunately, outsourcing comes at the cost of vulnerability to backdoor attacks. These attacks aim to establish a hidden backdoor in the DNN such that the model performs well on benign samples but outputs a particular target label whenever a trigger is applied to the input. Current backdoor attacks generate their triggers in the image/pixel domain; however, as we show in this paper, this is not the only domain an adversary can exploit, and one should always "check the other doors". In this work, we propose a complete pipeline for generating a dynamic, efficient, and invisible backdoor attack in the frequency domain. Through extensive experiments on various datasets and network architectures, we demonstrate the advantages of the frequency domain for establishing undetectable and powerful backdoor attacks. The backdoored models are shown to break various state-of-the-art defences. We also present two defences that succeed against frequency-based backdoor attacks, along with possible ways for the attacker to bypass them. We conclude with remarks on a network's learning capacity and its capability to embed a backdoor.
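To make the idea concrete, below is a minimal sketch, not the authors' pipeline, of how a trigger can be planted in the frequency domain: the image is transformed with a 2D DCT, a small perturbation is added to a few chosen coefficients, and the result is transformed back to pixels. The coefficient positions, the perturbation magnitude, and the function name apply_frequency_trigger are hypothetical choices for illustration only.

```python
# Illustrative sketch (assumed, not the paper's exact method): plant a
# backdoor trigger in the frequency domain by perturbing a few 2D-DCT
# coefficients of each colour channel, then transforming back to pixels.
import numpy as np
from scipy.fft import dctn, idctn

def apply_frequency_trigger(image: np.ndarray,
                            positions=((15, 15), (31, 31)),  # hypothetical indices
                            magnitude: float = 30.0) -> np.ndarray:  # hypothetical size
    """Add a fixed perturbation to selected DCT coefficients of an H x W x C image."""
    poisoned = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        # Forward 2D DCT of one channel.
        coeffs = dctn(image[:, :, c].astype(np.float64), norm="ortho")
        # Plant the trigger in the chosen frequency bins.
        for (r, k) in positions:
            coeffs[r, k] += magnitude
        # Back to the pixel domain.
        poisoned[:, :, c] = idctn(coeffs, norm="ortho")
    return np.clip(poisoned, 0, 255).astype(image.dtype)

# Usage: poison a fraction of the training set and relabel those samples
# to the attacker's target class before (outsourced) training.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in image
poisoned = apply_frequency_trigger(clean)
```

Because the perturbation is localized in frequency rather than in space, the inverse transform spreads it thinly across every pixel, which is one reason such triggers tend to be visually imperceptible.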


Related research

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition (01/03/2023)
Deep neural networks (DNNs) are vulnerable to a class of attacks called ...

Backdoor Attack with Sample-Specific Triggers (12/07/2020)
Recently, backdoor attacks pose a new security threat to the training pr...

Can You Hear It? Backdoor Attacks via Ultrasonic Triggers (07/30/2021)
Deep neural networks represent a powerful option for many real-world app...

Towards Invisible Backdoor Attacks in the Frequency Domain against Deep Neural Networks (05/10/2023)
Deep neural networks (DNNs) have made tremendous progress in the past te...

Sharpness-Aware Data Poisoning Attack (05/24/2023)
Recent research has highlighted the vulnerability of Deep Neural Network...

Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models (04/05/2021)
Advances in deep neural networks (DNNs) have shown tremendous promise in...

Chromatic and spatial analysis of one-pixel attacks against an image classifier (05/28/2021)
One-pixel attack is a curious way of deceiving neural network classifier...
