Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

12/15/2017
by Xinyun Chen, et al.

Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access security-sensitive applications like payment apps. Such uses of deep learning systems give adversaries strong incentives to attack these systems for their own purposes. In this work, we consider a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that the system can easily be circumvented by leveraging the backdoor. Specifically, the adversary aims at creating backdoor instances, so that the victim learning system is misled into classifying the backdoor instances as a target label specified by the adversary. In particular, we study backdoor poisoning attacks, which mount backdoor attacks through data poisoning. Different from all existing work, our studied poisoning strategies apply under a very weak threat model: (1) the adversary has no knowledge of the model or the training set used by the victim system; (2) the attacker is allowed to inject only a small number of poisoning samples; (3) the backdoor key is hard to notice even for human beings, to achieve stealthiness. Our evaluation demonstrates that a backdoor adversary can inject only around 50 poisoning samples while achieving an attack success rate of above 90%. We are also the first work to show that a data poisoning attack can create physically implementable backdoors without touching the training process. Our work demonstrates that backdoor poisoning attacks pose real threats to learning systems, and thus highlights the importance of further investigating them and developing defense strategies against them.
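To make the poisoning mechanism concrete, one common way such a stealthy backdoor key can be injected is by blending a faint attacker-chosen pattern into a handful of clean training images and relabeling them with the target label. The sketch below assumes this blending approach; the function name, blend ratio, random stand-in data, and target label are illustrative assumptions, not the paper's exact procedure or parameters.

```python
import numpy as np

def make_poisoning_sample(image, key_pattern, target_label, alpha=0.1):
    """Blend a backdoor key pattern into a clean image and relabel it.

    image:        clean training image, float array in [0, 1], shape (H, W, C)
    key_pattern:  attacker-chosen pattern of the same shape (the backdoor key)
    target_label: label the attacker wants backdoored inputs classified as
    alpha:        blend ratio; a small value keeps the key hard to notice
    """
    poisoned = (1.0 - alpha) * image + alpha * key_pattern
    return np.clip(poisoned, 0.0, 1.0), target_label

# Illustrative usage: inject ~50 poisoned samples into an otherwise clean set,
# matching the small poisoning budget discussed in the abstract.
rng = np.random.default_rng(0)
clean_images = rng.random((1000, 32, 32, 3))   # stand-in for real training images
key = rng.random((32, 32, 3))                  # attacker's backdoor key pattern
poisoned_set = [make_poisoning_sample(img, key, target_label=7)
                for img in clean_images[:50]]
```

At test time, the attacker would blend the same key into an arbitrary input to trigger classification as the target label, without needing any access to the model or the rest of the training data.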


