One-shot Neural Backdoor Erasing via Adversarial Weight Masking

07/10/2022
by Shuwen Chai, et al.

Recent studies show that despite achieving high accuracy on a number of real-world applications, deep neural networks (DNNs) can be backdoored: by injecting triggered data samples into the training dataset, the adversary can mislead the trained model into classifying any test input to the target class whenever the trigger pattern is present. To nullify such backdoor threats, various methods have been proposed. In particular, one line of research aims to purify the potentially compromised model. A major limitation of this line of work, however, is its requirement for sufficient original training data: purification performance degrades substantially when the available training data is limited. In this work, we propose Adversarial Weight Masking (AWM), a novel method capable of erasing neural backdoors even in the one-shot setting. The key idea behind our method is to formulate purification as a min-max optimization problem: first adversarially recover the trigger patterns, then (soft) mask the network weights that are sensitive to the recovered patterns. Comprehensive evaluations on several benchmark datasets suggest that AWM largely improves the purification effect over other state-of-the-art methods across various training dataset sizes.
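The min-max structure described above can be illustrated on a toy linear classifier: the inner loop recovers a universal additive trigger that drives inputs toward a target class, and the outer loop learns a soft weight mask in [0, 1] that preserves clean accuracy while suppressing the weights the trigger exploits. This is a minimal sketch, not the paper's implementation; the loss weighting `lam`, learning rates, trigger bound, and the assumption of a single known target class are all illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "compromised" linear model: logits = x @ W.T, with k classes, d features.
d, k, n = 20, 3, 64
W = rng.normal(size=(k, d))           # frozen (possibly backdoored) weights
X = rng.normal(size=(n, d))           # the small set of available clean data
labels = rng.integers(0, k, size=n)
Y = np.eye(k)[labels]                 # one-hot true labels
target = 0                            # hypothetical backdoor target class
Yt = np.eye(k)[np.full(n, target)]    # one-hot target labels

m = np.ones((k, d))                   # soft weight mask, kept in [0, 1]
delta = np.zeros(d)                   # recovered (universal) trigger pattern
lam, lr_d, lr_m = 1.0, 0.5, 0.1

for step in range(50):
    Wm = m * W
    # Inner max: recover a trigger that pushes inputs toward the target class
    # (gradient *descent* on the cross-entropy to the target makes it stronger).
    for _ in range(5):
        p = softmax((X + delta) @ Wm.T)
        g = ((p - Yt) @ Wm).mean(axis=0)      # dCE(target)/d(delta), batch mean
        delta -= lr_d * g
        delta = np.clip(delta, -0.5, 0.5)     # bounded trigger (assumption)
    # Outer min: update the mask so the triggered inputs are classified to their
    # TRUE labels again, while keeping the clean-data loss low.
    p_clean = softmax(X @ (m * W).T)
    p_trig = softmax((X + delta) @ (m * W).T)
    g_clean = ((p_clean - Y).T @ X / n) * W           # dL/dm = dL/dWm * W
    g_trig = ((p_trig - Y).T @ (X + delta) / n) * W
    m -= lr_m * (g_clean + lam * g_trig)
    m = np.clip(m, 0.0, 1.0)                  # soft mask stays in [0, 1]
```

The soft (continuous) mask, rather than a hard 0/1 prune, is what makes the outer problem differentiable and hence optimizable with ordinary gradient steps even from a handful of clean samples.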

