Adversarial Unlearning of Backdoors via Implicit Hypergradient

10/07/2021
by   Yi Zeng, et al.

We propose a minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data. This formulation encompasses much of prior work on backdoor removal. We propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm to solve the minimax problem. Unlike previous work, which breaks down the minimax into separate inner and outer problems, our algorithm utilizes the implicit hypergradient to account for the interdependence between inner and outer optimization. We theoretically analyze its convergence and the generalizability of the robustness gained by solving the minimax problem on clean data to unseen test data. In our evaluation, we compare I-BAU with six state-of-the-art backdoor defenses on seven backdoor attacks over two datasets and various attack settings, including the common setting where the attacker targets one class as well as important but underexplored settings where multiple classes are targeted. I-BAU's performance is comparable to, and most often significantly better than, the best baseline. In particular, its performance is more robust to variation in triggers, attack settings, poison ratio, and clean data size. Moreover, I-BAU requires less computation to take effect; in particular, it is more than 13× faster than the most efficient baseline in the single-target attack setting. Furthermore, it remains effective in the extreme case where the defender can only access 100 clean samples, a setting where all the baselines fail to produce acceptable results.
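The key mechanical idea, computing an outer (defender) gradient that accounts for the inner (trigger) player's response via the implicit function theorem, can be sketched on a toy quadratic minimax problem. The loss, variable names, and derivatives below are illustrative assumptions for exposition, not the paper's actual objective: the outer variable `theta` stands in for model weights and the inner variable `delta` for the trigger perturbation.

```python
# Toy sketch of an implicit hypergradient for a minimax problem
#     min_theta max_delta L(theta, delta),
# using the illustrative quadratic (not the paper's loss):
#     L(theta, delta) = 0.5*(theta - 3)**2 + theta*delta - 0.5*delta**2.
# The inner maximizer is delta*(theta) = theta, so the true outer
# objective is g(theta) = L(theta, delta*(theta)) with g'(theta) = 2*theta - 3.

def dL_dtheta(theta, delta):
    # Partial derivative of L w.r.t. the outer variable.
    return (theta - 3.0) + delta

def dL_ddelta(theta, delta):
    # Partial derivative of L w.r.t. the inner variable.
    return theta - delta

def implicit_hypergradient(theta, delta):
    """Hypergradient via the implicit function theorem:
        dg/dtheta = dL/dtheta
                    - d2L/(dtheta ddelta) * (d2L/ddelta2)^-1 * dL/ddelta.
    For this quadratic, the second derivatives are constants."""
    d2_cross = 1.0   # d^2 L / (d delta d theta)
    d2_inner = -1.0  # d^2 L / d delta^2 (inner problem is concave)
    correction = -d2_cross * (1.0 / d2_inner) * dL_ddelta(theta, delta)
    return dL_dtheta(theta, delta) + correction
```

Because the toy loss is quadratic, the corrected hypergradient equals the true gradient `2*theta - 3` at *any* inner iterate `delta`, whereas the naive partial derivative `dL_dtheta` is correct only at the exact inner optimum `delta = theta`. This illustrates why coupling the inner and outer problems through the implicit term can help when the inner maximization is only solved approximately.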


Related research

11/03/2022
M-to-N Backdoor Paradigm: A Stealthy and Fuzzy Attack to Deep Learning Models
Recent studies show that deep neural networks (DNNs) are vulnerable to b...

02/14/2020
Adversarial Distributional Training for Robust Deep Learning
Adversarial training (AT) is among the most effective techniques to impr...

10/20/2020
L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set
Backdoor attacks (BAs) are an emerging form of adversarial attack typica...

02/28/2023
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases
Trojan attack on deep neural networks, also known as backdoor attack, is...

09/06/2021
Robustness and Generalization via Generative Adversarial Training
While deep neural networks have achieved remarkable success in various c...

02/22/2023
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Backdoor data detection is traditionally studied in an end-to-end superv...

11/12/2021
Adversarially Robust Learning for Security-Constrained Optimal Power Flow
In recent years, the ML community has seen surges of interest in both ad...
