Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

12/15/2017
by Xinyun Chen, et al.

Deep learning models have achieved high performance on many tasks and have therefore been deployed in many security-critical scenarios. For example, deep learning-based face recognition systems are used to authenticate users of security-sensitive applications such as payment apps. Such deployments give adversaries strong incentives to attack these systems. In this work, we consider a new type of attack, called a backdoor attack, in which the attacker's goal is to plant a backdoor in a learning-based authentication system so that the system can later be easily circumvented by exploiting the backdoor. Specifically, the adversary crafts backdoor instances that the victim learning system is misled into classifying as a target label chosen by the adversary. In particular, we study backdoor poisoning attacks, which carry out backdoor attacks through data poisoning. Unlike all existing work, the poisoning strategies we study apply under a very weak threat model: (1) the adversary has no knowledge of the model or the training set used by the victim system; (2) the attacker is allowed to inject only a small number of poisoning samples; (3) to achieve stealthiness, the backdoor key is hard to notice even for human beings. Our evaluation demonstrates that a backdoor adversary can inject only around 50 poisoning samples while achieving an attack success rate above 90%, and ours is the first work to show that a data poisoning attack can create physically implementable backdoors without touching the training process. Our work demonstrates that backdoor poisoning attacks pose real threats to learning systems, and thus highlights the importance of further investigating them and proposing defense strategies against them.
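The core idea described above, injecting a handful of training samples that blend a subtle key pattern into clean images and carry the attacker's target label, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed parameters (the blend ratio, number of poisoning samples, key pattern, and function names are hypothetical choices for exposition), not the authors' exact implementation.

import numpy as np

def make_poison_samples(clean_images, key_pattern, target_label,
                        num_poison=50, blend_alpha=0.2, seed=0):
    """Blend a backdoor key pattern into a few clean images and relabel
    them with the attacker's target label.

    Minimal sketch of a blended-injection poisoning strategy; parameters
    are illustrative assumptions, not values taken from the paper.
    """
    rng = np.random.default_rng(seed)
    # Pick a small subset of clean images to poison.
    idx = rng.choice(len(clean_images), size=num_poison, replace=False)
    # Low-alpha blending keeps the key hard to notice for a human reviewer.
    poisoned = (1.0 - blend_alpha) * clean_images[idx] + blend_alpha * key_pattern
    labels = np.full(num_poison, target_label, dtype=np.int64)
    return poisoned.astype(clean_images.dtype), labels

def apply_backdoor(image, key_pattern, blend_alpha=0.2):
    """At test time, blend the same key into an input so a model trained on
    the poisoned data is steered toward the target label."""
    return (1.0 - blend_alpha) * image + blend_alpha * key_pattern

The attacker only appends the returned (poisoned, labels) pairs to whatever training data the victim collects; no access to the model or the training procedure is assumed, which matches the weak threat model described in the abstract.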



Related Research

09/15/2020
Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Deep neural networks (DNN) have shown great success in many computer vis...

07/16/2015
Deep Learning and Music Adversaries
An adversary is essentially an algorithm intent on making a classificati...

08/30/2018
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
Deep learning models have consistently outperformed traditional machine ...

10/23/2020
Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries
Password security hinges on an accurate understanding of the techniques ...

09/12/2020
Revisiting the Threat Space for Vision-based Keystroke Inference Attacks
A vision-based keystroke inference attack is a side-channel attack in wh...

02/12/2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning
Backdoor attacks against CNNs represent a new threat against deep learni...

09/04/2020
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Data Poisoning attacks involve an attacker modifying training data to ma...