Blind Backdoors in Deep Learning Models

05/08/2020
by Eugene Bagdasaryan et al.

We investigate a new method for injecting backdoors into machine learning models, based on poisoning the loss computation in the model-training code. Our attack is blind: the attacker can neither modify the training data, nor observe the execution of their code, nor access the resulting model. We develop a new technique for blind backdoor training using multi-objective optimization to achieve high accuracy on both the main and backdoor tasks while evading all known defenses. We then demonstrate the efficacy of the blind attack with new classes of backdoors strictly more powerful than those in the prior literature: single-pixel backdoors in ImageNet models, backdoors that switch the model to a different, complex task, and backdoors that do not require inference-time input modifications. Finally, we discuss defenses.
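The core mechanism is that the attacker's code replaces only the loss computation: at each training step it evaluates the model on the clean batch and on a trigger-stamped copy of that batch, then blends the two losses. Below is a minimal PyTorch sketch of this idea, not the authors' code: the single-pixel trigger, the target label, and the fixed blending weight `alpha` are illustrative assumptions, and the paper balances the two objectives with multiple-gradient descent (MGDA) per batch rather than a fixed weight.

```python
import torch
import torch.nn.functional as F

TARGET_LABEL = 8  # attacker-chosen output class (illustrative assumption)

def add_trigger(images):
    """Stamp a single-pixel trigger into a batch of NCHW images (illustrative)."""
    triggered = images.clone()
    triggered[:, :, 0, 0] = 1.0  # force the top-left pixel to maximum intensity
    return triggered

def poisoned_loss(model, images, labels, alpha=0.5):
    # Main task: ordinary cross-entropy on the clean batch, so the model
    # keeps its normal accuracy and the attack stays invisible in metrics.
    loss_main = F.cross_entropy(model(images), labels)

    # Backdoor task: the attacker's code synthesizes triggered inputs on
    # the fly (no access to the training data is needed) and relabels them
    # all with the attacker-chosen target class.
    backdoored = add_trigger(images)
    target = torch.full_like(labels, TARGET_LABEL)
    loss_backdoor = F.cross_entropy(model(backdoored), target)

    # Scalarized blend of the two objectives; the blind attack replaces this
    # fixed alpha with MGDA-derived per-batch coefficients so that neither
    # task's accuracy visibly degrades.
    return alpha * loss_main + (1 - alpha) * loss_backdoor
```

Because the backdoor examples are synthesized inside the loss function itself, the attack needs no access to the training data, the training run, or the final model, which is what makes it blind.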


Related research

- 02/05/2018 · Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples
- 02/16/2019 · A Little Is Enough: Circumventing Defenses For Distributed Learning
- 01/15/2019 · The Limitations of Adversarial Training and the Blind-Spot Attack
- 04/21/2021 · Dataset Inference: Ownership Resolution in Machine Learning
- 09/15/2022 · CLIPping Privacy: Identity Inference Attacks on Multi-Modal Machine Learning Models
- 10/23/2020 · Customizing Triggers with Concealed Data Poisoning
- 06/12/2021 · Disrupting Model Training with Adversarial Shortcuts
