Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

09/04/2020
by Jonas Geiping, et al.

Data poisoning attacks involve an attacker modifying training data to maliciously control a model trained on this data. Previous poisoning attacks against deep neural networks have been limited in scope and success, working only in simplified settings or being prohibitively expensive for large datasets. In this work, we focus on a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data. The central mechanism of this attack is matching the gradient direction of malicious examples. We analyze why this works, supplement it with practical considerations, and show its threat to real-world practitioners, finding that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset. Finally, we demonstrate the limitations of existing defensive strategies against such an attack, concluding that data poisoning is a credible threat, even for large-scale deep learning systems.
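For intuition, below is a minimal PyTorch sketch of the gradient-alignment idea the abstract describes: an eps-bounded perturbation of the poison images (labels untouched, hence "clean label") is optimized so that the training gradient induced by the poison batch points in the same direction as the gradient that would misclassify the target. Function names, hyperparameters, and the simplified crafting loop are illustrative assumptions, not the authors' reference implementation; the full method (with restarts and differentiable data augmentation) is in the poisoning-gradient-matching repository listed further down.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_images, poison_labels,
                           target_image, adv_label):
    """Cosine dissimilarity between the adversarial gradient (the update
    the attacker wants the victim's training to apply to the target) and
    the gradient produced by the perturbed, clean-labeled poison batch."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient that would push the target toward the adversarial label.
    adv_loss = F.cross_entropy(model(target_image), adv_label)
    adv_grad = [g.detach() for g in torch.autograd.grad(adv_loss, params)]

    # Gradient the poison batch induces under its *clean* labels.
    # create_graph=True lets us differentiate through this gradient
    # with respect to the poison perturbation.
    poison_loss = F.cross_entropy(model(poison_images), poison_labels)
    poison_grad = torch.autograd.grad(poison_loss, params, create_graph=True)

    dot = sum((a * p).sum() for a, p in zip(adv_grad, poison_grad))
    norm_a = torch.sqrt(sum((a * a).sum() for a in adv_grad))
    norm_p = torch.sqrt(sum((p * p).sum() for p in poison_grad))
    return 1 - dot / (norm_a * norm_p)  # minimized when gradients align

def craft_poisons(model, clean_images, poison_labels, target_image,
                  adv_label, eps=16 / 255, steps=250, lr=0.1):
    """Optimize an L-infinity-bounded perturbation of the poison images so
    their training gradient matches the adversarial gradient (illustrative
    simplification; labels are never modified)."""
    delta = torch.zeros_like(clean_images, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = gradient_matching_loss(model, clean_images + delta,
                                      poison_labels, target_image, adv_label)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep perturbation imperceptible
            # keep poisoned pixels in the valid [0, 1] image range
            delta.copy_((clean_images + delta).clamp(0, 1) - clean_images)
    return (clean_images + delta).detach()
```

Because only the inputs are perturbed within a small budget while the labels stay clean, a victim who later trains from scratch on the poisoned dataset follows gradients resembling the adversarial one, producing the targeted misclassification.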



Related Research

06/16/2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
As the curation of data for machine learning becomes increasingly automa...

09/12/2020
Revisiting the Threat Space for Vision-based Keystroke Inference Attacks
A vision-based keystroke inference attack is a side-channel attack in wh...

04/01/2020
MetaPoison: Practical General-purpose Clean-label Data Poisoning
Data poisoning–the process by which an attacker takes control of a model...

04/15/2021
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Deep neural networks (DNN) have been widely deployed in various practica...

05/27/2019
Label Universal Targeted Attack
We introduce Label Universal Targeted Attack (LUTA) that makes a deep mo...

04/12/2021
Practical Defences Against Model Inversion Attacks for Split Neural Networks
We describe a threat model under which a split network-based federated l...

05/31/2021
Gradient-based Data Subversion Attack Against Binary Classifiers
Machine learning based data-driven technologies have shown impressive pe...

Code Repositories

poisoning-gradient-matching
