Dynamic Backdoor Attacks Against Machine Learning Models

03/07/2020
by Ahmed Salem, et al.

Machine learning (ML) has made tremendous progress during the past decade and is being adopted in various critical real-world applications. However, recent research has shown that ML models are vulnerable to multiple security and privacy attacks. In particular, backdoor attacks against ML models have recently raised a lot of awareness. A successful backdoor attack can cause severe consequences, such as allowing an adversary to bypass critical authentication systems. Current backdooring techniques rely on adding static triggers (with fixed patterns and locations) to ML model inputs. In this paper, we propose the first class of dynamic backdooring techniques: Random Backdoor, Backdoor Generating Network (BaN), and conditional Backdoor Generating Network (c-BaN). Triggers generated by our techniques can have random patterns and locations, which reduces the efficacy of current backdoor detection mechanisms. In particular, BaN and c-BaN, which rely on a novel generative network, are the first two schemes that algorithmically generate triggers. Moreover, c-BaN is the first conditional backdooring technique: given a target label, it can generate a target-specific trigger. Both BaN and c-BaN are essentially general frameworks that give the adversary the flexibility to further customize backdoor attacks. We extensively evaluate our techniques on three benchmark datasets: MNIST, CelebA, and CIFAR-10. Our techniques achieve almost perfect attack performance on backdoored data with a negligible utility loss. We further show that our techniques can bypass current state-of-the-art defense mechanisms against backdoor attacks, including Neural Cleanse, ABS, and STRIP.
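To make the contrast with static triggers concrete, the sketch below illustrates the core idea behind a dynamic trigger in the style of Random Backdoor: the patch's pixel pattern and its placement are both sampled at stamping time, rather than being fixed. This is a minimal illustrative example, not the paper's implementation; the patch size, value range, and the `random_backdoor` helper are assumptions for demonstration.

```python
import numpy as np

def random_backdoor(image, trigger_size=5, rng=None):
    """Stamp a randomly patterned trigger at a random location.

    Illustrative sketch only: unlike a static backdoor, neither the
    trigger's pattern nor its position is fixed across inputs.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Random pattern: pixel values drawn uniformly from the input range.
    trigger = rng.uniform(0.0, 1.0,
                          size=(trigger_size, trigger_size) + image.shape[2:])
    # Random location: top-left corner anywhere the patch fully fits.
    y = rng.integers(0, h - trigger_size + 1)
    x = rng.integers(0, w - trigger_size + 1)
    backdoored = image.copy()
    backdoored[y:y + trigger_size, x:x + trigger_size] = trigger
    return backdoored

# Example: poison a 28x28 grayscale input (MNIST-sized).
clean = np.zeros((28, 28))
poisoned = random_backdoor(clean, trigger_size=5,
                           rng=np.random.default_rng(0))
```

Because each poisoned input carries a different trigger in a different position, defenses that search for a single fixed trigger pattern (the setting assumed by detectors such as Neural Cleanse) have a harder target to reverse-engineer.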

