Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

10/07/2020
by Ahmed Salem, et al.

Backdoor attacks against deep neural networks are being intensively investigated due to their severe security consequences. Current state-of-the-art backdoor attacks require the adversary to modify the input, usually by adding a trigger to it, for the target model to activate the backdoor. This added trigger not only increases the difficulty of launching the backdoor attack in the physical world, but can also be easily detected by multiple defense mechanisms. In this paper, we present the first triggerless backdoor attack against deep neural networks, where the adversary does not need to modify the input to trigger the backdoor. Our attack is based on the dropout technique. Concretely, we associate a set of target neurons that are dropped out during model training with the target label. In the prediction phase, the model will output the target label whenever the target neurons are dropped again, i.e., the backdoor attack is launched. This triggerless property makes our attack practical in the physical world. Extensive experiments show that our triggerless backdoor attack achieves a perfect attack success rate with negligible damage to the model's utility.
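The mechanism described above can be illustrated with a minimal, hypothetical sketch (not the authors' code): the adversary fixes a set of target neuron indices, and the backdoor condition is met whenever a random inference-time dropout mask happens to drop all of them. The `TARGET_NEURONS` indices, dropout rate, and layer width below are illustrative assumptions.

```python
import random

# Hypothetical indices of the neurons the adversary associates with the target label.
TARGET_NEURONS = {2, 5}

def dropout_mask(n_neurons, p, rng):
    """Standard inverted-dropout mask: each neuron is dropped with
    probability p (mask value 0.0), otherwise kept and scaled by 1/(1-p)."""
    return [0.0 if rng.random() < p else 1.0 / (1.0 - p) for _ in range(n_neurons)]

def backdoor_active(mask):
    """The triggerless backdoor fires when every target neuron is dropped."""
    return all(mask[i] == 0.0 for i in TARGET_NEURONS)

# Estimate how often a random dropout draw activates the backdoor.
rng = random.Random(0)
trials = 10_000
hits = sum(backdoor_active(dropout_mask(8, 0.5, rng)) for _ in range(trials))
print(f"backdoor activated in {hits / trials:.1%} of random dropout draws")
```

With two independent target neurons and a dropout rate of 0.5, the backdoor fires on roughly a quarter of inference passes; this is why activation in the paper's setting is probabilistic rather than guaranteed on any single query.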


Related research

01/31/2022
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks
Recent researches demonstrate that Deep Neural Networks (DNN) models are...

03/07/2020
Dynamic Backdoor Attacks Against Machine Learning Models
Machine learning (ML) has made tremendous progress during the past decad...

03/25/2022
Trojan Horse Training for Breaking Defenses against Backdoor Attacks in Deep Learning
Machine learning (ML) models that use deep neural networks are vulnerabl...

11/16/2022
PBSM: Backdoor Attack against Keyword Spotting Based on Pitch Boosting and Sound Masking
Keyword spotting (KWS) has been widely used in various speech control sc...

09/14/2023
Physical Invisible Backdoor Based on Camera Imaging
Backdoor attack aims to compromise a model, which returns an adversary-w...

09/18/2020
The Hidden Vulnerability of Watermarking for Deep Neural Networks
Watermarking has shown its effectiveness in protecting the intellectual ...

08/02/2019
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
A security threat to deep neural networks (DNN) is backdoor contaminatio...
