Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

by Ali Shafahi et al.

Data poisoning is a type of adversarial attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores a broad class of poisoning attacks on neural nets. The proposed attacks use "clean labels": they don't require the attacker to have any control over the labeling of training data. They are also targeted: they control the behavior of the classifier on a specific test instance without noticeably degrading classifier performance on other instances. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by putting them online and waiting for them to be scraped by a data collection bot. We present an optimization-based method for crafting poisons, and show that a single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple (~50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
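The optimization-based crafting step can be sketched as a feature-collision objective: find an image that looks like a clean base image in input space but lands near the target in feature space. The sketch below is a minimal toy, assuming a fixed random linear map as a stand-in for the feature extractor (the actual attack uses a pretrained deep network's penultimate layer and back-propagates through it); all names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen, pretrained feature extractor.
# (Illustrative: the paper uses a deep network's penultimate layer.)
W = rng.standard_normal((16, 64))

def features(x):
    return W @ x

target = rng.standard_normal(64)  # test instance the attacker wants to control
base = rng.standard_normal(64)    # clean, correctly-labeled image from the poison class

def craft_poison(base, target, beta=0.1, lr=1e-3, steps=1000):
    """Gradient descent on the feature-collision objective:
        minimize ||f(x) - f(t)||^2 + beta * ||x - b||^2
    The first term pulls the poison toward the target in feature
    space; the second keeps it visually close to the base image."""
    x = base.copy()
    for _ in range(steps):
        grad = 2 * W.T @ (features(x) - features(target)) + 2 * beta * (x - base)
        x -= lr * grad
    return x

poison = craft_poison(base, target)

# "Watermarking" variant for end-to-end training: blend a low-opacity
# copy of the target into each poison base (opacity value illustrative).
watermarked = 0.3 * target + 0.7 * base
```

After optimization, the poison's features sit much closer to the target's than the base's did, while the `beta` term keeps the poison near the base in input space, so it plausibly retains the base's clean label.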




