Backdoor Defense via Decoupling the Training Process

02/05/2022
by Kunzhe Huang, et al.

Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its predictions are maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm. Inspired by this observation, we propose a novel backdoor defense that decouples the original end-to-end training process into three stages. Specifically, we first learn the backbone of a DNN model via self-supervised learning on the training samples without their labels. The learned backbone maps samples with the same ground-truth label to similar locations in the feature space. Then, we freeze the parameters of the learned backbone and train the remaining fully connected layers via standard training with all (labeled) training samples. Lastly, to further alleviate the side effects of poisoned samples in the second stage, we remove the labels of some 'low-credible' samples determined by the learned model and conduct semi-supervised fine-tuning of the whole model. Extensive experiments on multiple benchmark datasets and DNN models verify that the proposed defense is effective in reducing backdoor threats while preserving high accuracy on benign samples. Our code is available at <https://github.com/SCLBD/DBD>.
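
The three-stage procedure described above can be sketched roughly as follows in PyTorch. The toy data, the `augment` and `nt_xent` helpers, the 50% loss-quantile credibility cutoff, and the 0.9 confidence threshold for pseudo-labels are illustrative assumptions, not the authors' implementation; the simplified contrastive and pseudo-label losses stand in for the full self-supervised (e.g. SimCLR-style) pre-training and semi-supervised (e.g. MixMatch-style) fine-tuning an actual defense would use.

```python
# Minimal sketch of decoupled training: (1) self-supervised backbone,
# (2) frozen backbone + supervised classifier, (3) semi-supervised fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

torch.manual_seed(0)

# Toy stand-in for a (possibly poisoned) training set: 64 images, 10 classes.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

def augment(x):
    # Toy augmentation (additive noise only); real self-supervised training
    # would use random crops, flips, colour jitter, etc.
    return x + 0.1 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    # Simplified SimCLR-style contrastive loss over two augmented views.
    z = F.normalize(torch.cat([z1, z2]), dim=1)                      # (2N, d)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(len(z), dtype=torch.bool), float('-inf'))
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])   # positive pairs
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pre-training of the backbone; labels are discarded.
backbone = resnet18(num_classes=128)             # final fc acts as projection head
opt = torch.optim.SGD(backbone.parameters(), lr=0.1, momentum=0.9)
for _ in range(2):                               # toy number of epochs
    loss = nt_xent(backbone(augment(images)), backbone(augment(images)))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the learned backbone and train only the fully connected
# classifier on all (labeled, possibly poisoned) samples.
feature = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
feature.eval()                                   # also freezes batch-norm statistics
for p in feature.parameters():
    p.requires_grad_(False)
classifier = nn.Linear(512, 10)
opt = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)
for _ in range(5):
    loss = F.cross_entropy(classifier(feature(images)), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 3: treat high-loss samples as 'low-credible', drop their labels, and
# fine-tune the whole model semi-supervised.  A confidence-thresholded
# pseudo-label loss stands in for a full MixMatch-style procedure.
model = nn.Sequential(feature, classifier)
with torch.no_grad():
    per_sample = F.cross_entropy(model(images), labels, reduction='none')
credible = per_sample <= per_sample.quantile(0.5)    # illustrative 50% cutoff

model.train()
for p in model.parameters():
    p.requires_grad_(True)                           # unfreeze the backbone
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
for _ in range(5):
    logits = model(images)
    sup = F.cross_entropy(logits[credible], labels[credible])
    conf, pseudo = F.softmax(logits[~credible].detach(), dim=1).max(dim=1)
    keep = conf > 0.9                                # only confident pseudo-labels
    unsup = logits.new_zeros(())
    if keep.any():
        unsup = F.cross_entropy(logits[~credible][keep], pseudo[keep])
    loss = sup + unsup
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice the poisoned samples, having lost the shortcut provided by their incorrect labels during stages 1 and 2, tend to incur high loss and are therefore stripped of their labels before the final fine-tuning.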
