Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models

12/09/2022
by   Rui Zhu, et al.

In this paper, we present a simple yet surprisingly effective technique to induce "selective amnesia" in a backdoored model. Our approach, called SEAM, is inspired by the problem of catastrophic forgetting (CF), a long-standing issue in continual learning. Our idea is to retrain a given DNN model on randomly labeled clean data, inducing CF in the model and causing sudden forgetting of both the primary and backdoor tasks; we then recover the primary task by retraining the randomized model on correctly labeled clean data. We analyzed SEAM by modeling the unlearning process as continual learning and further approximating the DNN with a Neural Tangent Kernel to measure CF. Our analysis shows that our random-labeling approach actually maximizes CF on an unknown backdoor in the absence of triggered inputs, while also preserving some feature extraction in the network to enable a fast revival of the primary task. We further evaluated SEAM on both image processing and Natural Language Processing tasks, under both data-contamination and training-manipulation attacks, over thousands of models either trained on popular image datasets or provided by the TrojAI competition. Our experiments show that SEAM vastly outperforms the state-of-the-art unlearning techniques, achieving a high Fidelity (measuring the gap between the accuracy of the primary task and that of the backdoor) within a few minutes (about 30 times faster than training a model from scratch on the MNIST dataset), with only a small amount of clean data (0.1% of the training data).
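The forget-then-recover recipe described above can be sketched on a toy linear softmax classifier. This is a hedged illustration only: the paper applies SEAM to full DNNs, and every name, dataset, and hyperparameter below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained (possibly trojaned) model: a linear
# softmax classifier on synthetic linearly-separable data.
n, d, k = 600, 20, 4
X = rng.normal(size=(n, d))
W_gen = rng.normal(size=(d, k))
y = (X @ W_gen).argmax(axis=1)          # ground-truth labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_epochs(W, X, y, epochs, lr=0.5):
    """Full-batch gradient descent on cross-entropy loss."""
    W = W.copy()
    for _ in range(epochs):
        p = softmax(X @ W)
        p[np.arange(len(y)), y] -= 1.0  # dL/dz = softmax(z) - onehot(y)
        W -= lr * (X.T @ p) / len(y)
    return W

def accuracy(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

# "Pretrained" weights (in SEAM's setting these could hide a backdoor).
W = sgd_epochs(np.zeros((d, k)), X, y, epochs=50)

# SEAM step 1 ("forget"): fine-tune on a SMALL clean subset with RANDOM
# labels, inducing catastrophic forgetting of primary and backdoor tasks.
idx = rng.choice(n, size=60, replace=False)
y_rand = rng.integers(0, k, size=len(idx))
W = sgd_epochs(W, X[idx], y_rand, epochs=200)
acc_after_forget = accuracy(W, X, y)

# SEAM step 2 ("recover"): fine-tune the randomized model on the same
# small clean subset with TRUE labels to revive the primary task; the
# backdoor, never retrained, stays forgotten.
W = sgd_epochs(W, X[idx], y[idx], epochs=200)
acc_after_recover = accuracy(W, X, y)
```

The key property the sketch mirrors is that the recovery step only needs a small amount of correctly labeled clean data (60 of 600 points here), since the forgetting step degrades the classifier without erasing all of its usable structure.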
