Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

10/17/2022
by Khoa D. Doan, et al.

In recent years, machine learning models have been shown to be vulnerable to backdoor attacks. In such attacks, an adversary embeds a stealthy backdoor into the trained model so that the compromised model behaves normally on clean inputs but misclassifies, under the adversary's control, maliciously constructed inputs that contain a trigger. While these existing attacks are very effective, the adversary's capability is limited: given an input, the attack can only cause the model to misclassify toward a single pre-defined target class. In contrast, this paper presents a novel backdoor attack with a much more powerful payload, denoted Marksman, in which the adversary can arbitrarily choose the target class the model will misclassify toward, for any input, during inference. To achieve this goal, we propose to represent the trigger function as a class-conditional generative model and to inject the backdoor within a constrained optimization framework, where the trigger function learns to generate an optimal trigger pattern that attacks any target class at will, while this generative backdoor is simultaneously embedded into the trained model. Given the learned trigger-generation function, during inference the adversary can specify an arbitrary target class, and an appropriate trigger that causes the model to classify toward this target class is generated accordingly. We show empirically that the proposed framework achieves high attack performance while preserving clean-data performance on several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImageNet. The proposed Marksman backdoor attack can also easily bypass existing backdoor defenses that were originally designed against attacks with a single target class. Our work takes another significant step toward understanding the extensive risks of backdoor attacks in practice.
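To make the described mechanism concrete, below is a minimal sketch of a class-conditional trigger generator trained jointly with the classifier under a norm constraint, in the spirit of the constrained optimization the abstract outlines. It assumes a PyTorch-style setup; the module names (e.g., TriggerGenerator), the L-infinity budget, and the alternating update scheme are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: class-conditional trigger generation with joint training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriggerGenerator(nn.Module):
    """Maps (input image, target class) to a bounded additive trigger pattern."""
    def __init__(self, num_classes, img_channels=3, hidden=64, eps=0.05):
        super().__init__()
        self.eps = eps  # assumed L_inf budget constraining trigger magnitude
        self.embed = nn.Embedding(num_classes, hidden)
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        # Broadcast the class embedding spatially and concatenate it with the image.
        emb = self.embed(target)[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        delta = self.net(torch.cat([x, emb], dim=1))
        return torch.clamp(x + self.eps * delta, 0.0, 1.0)  # constrained triggered input

def joint_training_step(classifier, generator, opt_f, opt_g, x, y, num_classes):
    """One alternating update on a clean batch (x, y) with a randomly drawn target class."""
    target = torch.randint(0, num_classes, y.shape, device=x.device)

    # (1) Classifier update: stay accurate on clean inputs while predicting the
    #     chosen target class on triggered inputs.
    x_trig = generator(x, target).detach()
    loss_f = F.cross_entropy(classifier(x), y) + F.cross_entropy(classifier(x_trig), target)
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()

    # (2) Generator update: learn triggers that steer the current classifier
    #     toward any requested target class.
    loss_g = F.cross_entropy(classifier(generator(x, target)), target)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_f.item(), loss_g.item()
```

At inference time, the attacker would simply call the trained generator with any desired target label to obtain a triggered input for that class; the exact architecture and training schedule in the paper may differ from this sketch.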


