What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors

02/26/2021
by Jonas Geiping, et al.

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time. A variety of defenses against this threat model have been proposed, but each suffers from at least one of the following flaws: they are easily overcome by adaptive attacks, they severely reduce testing performance, or they cannot generalize to diverse data poisoning threat models. Adversarial training and its variants are currently considered the only empirically strong defense against (inference-time) adversarial attacks. In this work, we extend the adversarial training framework to instead defend against (training-time) poisoning and backdoor attacks. Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches. We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
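The training loop this describes is easy to sketch. Below is a minimal, hypothetical PyTorch rendition of the idea, not the paper's exact algorithm: each step crafts perturbations for a slice of the batch (here a generic PGD loss-ascent attack stands in for the paper's own poison-crafting procedure, and `eps`, `poison_frac`, and the other hyperparameters are illustrative assumptions) and then trains on the mixed batch.

```python
# Hypothetical sketch of adversarial training against poisons.
# The crafting objective (untargeted PGD loss ascent) and all
# hyperparameters are illustrative stand-ins, not the paper's method.
import torch
import torch.nn.functional as F

def craft_poisons(model, x, y, eps=8/255, step=2/255, iters=5):
    """Perturb inputs within an l-inf ball of radius eps so they are
    maximally harmful under the current model (loss ascent)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def train_step(model, optimizer, x, y, poison_frac=0.25):
    """One training step: craft poisons for part of the batch and
    inject them, with their original labels, alongside clean samples."""
    k = max(1, int(poison_frac * x.size(0)))
    x_mix = torch.cat([craft_poisons(model, x[:k], y[:k]), x[k:]])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_mix), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the crafted points keep their correct labels, the network is trained to produce the right output despite the perturbation, which is the sense in which it is "desensitized" to poisoning.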


Related research

Can Adversarial Training Be Manipulated By Non-Robust Features? (01/31/2022)

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff (11/18/2020)

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses (12/12/2021)

On the Effectiveness of Adversarial Training against Backdoor Attacks (02/22/2022)

A Tutorial on Adversarial Learning Attacks and Countermeasures (02/21/2022)

Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings (08/30/2021)

Mitigating Gradient-based Adversarial Attacks via Denoising and Compression (04/03/2021)