Test-Time Adaptation for Backdoor Defense

08/11/2023
by Jiyang Guan, et al.

Deep neural networks play a crucial role in many critical domains, such as autonomous driving, face recognition, and medical diagnosis. However, they face security threats from backdoor attacks and can be manipulated into attacker-decided behaviors by the backdoor attacker. To defend against backdoors, prior research has focused on using clean data to remove backdoor attacks before model deployment. In this paper, we investigate the possibility of defending against backdoor attacks at test time, using partially poisoned data to remove the backdoor from the model. To address this problem, we propose a two-stage method, Test-Time Backdoor Defense (TTBD). In the first stage, we use two backdoor sample detection methods, namely DDP and TeCo, to identify poisoned samples within a batch of mixed, partially poisoned samples. Once poisoned samples are detected, we employ Shapley estimation to calculate each neuron's contribution to the network's output, locate the poisoned neurons, and prune them to remove the backdoor from the model. Our experiments demonstrate that TTBD successfully removes backdoors with only a batch of partially poisoned data, across different model architectures and datasets and against different types of backdoor attacks.
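The abstract describes a detect-then-prune pipeline: flag poisoned samples in the incoming batch, then use Shapley estimation to locate and prune the neurons carrying the backdoor. The sketch below illustrates one way the two stages could look in PyTorch. The detector is a simplified reading of the corruption-robustness-consistency idea behind TeCo (the paper's DDP detector is not sketched), and every function name, threshold, and hyperparameter here is an illustrative assumption, not the authors' reference implementation.

```python
# Minimal two-stage sketch in the spirit of TTBD. All names (teco_detect,
# shapley_channel_scores, prune_poisoned_channels) and all hyperparameters
# (thresh, rounds, k) are assumptions made for illustration.
import torch


def teco_detect(model, batch, corruptions, severities, thresh=1.0):
    """Flag suspected poisoned samples in a mixed batch.

    For each corruption type, record the first severity level at which a
    sample's prediction flips away from its clean-input label. Triggered
    samples tend to flip at inconsistent severities across corruption
    types, so a high deviation of flip points flags them.
    """
    model.eval()
    with torch.no_grad():
        clean_pred = model(batch).argmax(dim=1)
        flip_points = []
        for corrupt in corruptions:            # e.g. noise, blur, contrast
            never = float(len(severities))     # sentinel: never flipped
            flip_at = torch.full(clean_pred.shape, never)
            for level, severity in enumerate(severities):
                pred = model(corrupt(batch, severity)).argmax(dim=1)
                first = (pred != clean_pred) & (flip_at == never)
                flip_at[first] = float(level)
            flip_points.append(flip_at)
        deviation = torch.stack(flip_points).std(dim=0)
    return deviation > thresh                  # True = suspected poisoned


def shapley_channel_scores(model, layer, x_poison, y_target, rounds=10):
    """Monte-Carlo Shapley sketch over the output channels of one conv
    layer: a channel's score is its average marginal effect on the
    model's confidence in the attack target class (the label the
    detected poisoned samples are misclassified into), estimated over
    random channel orderings. High-scoring channels are pruning
    candidates."""
    n = layer.out_channels
    scores = torch.zeros(n)
    mask = torch.ones(n)

    def zero_absent_channels(_module, _inputs, output):
        # Channels outside the current coalition are zeroed out.
        return output * mask.view(1, -1, 1, 1).to(output.device)

    handle = layer.register_forward_hook(zero_absent_channels)
    model.eval()
    with torch.no_grad():
        for _ in range(rounds):
            mask.zero_()                       # start from an empty coalition
            prev = model(x_poison).softmax(dim=1)[:, y_target].mean()
            for idx in torch.randperm(n):      # add channels in random order
                mask[idx] = 1.0
                cur = model(x_poison).softmax(dim=1)[:, y_target].mean()
                scores[idx] += (cur - prev).item()  # marginal contribution
                prev = cur
    handle.remove()
    return scores / rounds


def prune_poisoned_channels(layer, scores, k=8):
    """Zero the k channels whose presence most increases target-class
    confidence on the detected poisoned samples."""
    top = scores.topk(k).indices
    with torch.no_grad():
        layer.weight[top] = 0.0
        if layer.bias is not None:
            layer.bias[top] = 0.0
```

In a full pipeline, the detector would first split the incoming batch into clean and suspected-poisoned subsets, and the Shapley scores would be computed per layer before pruning; the permutation-sampling estimator above trades accuracy for simplicity, since exact Shapley values over all channels are intractable.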


Related research:

- Few-shot Backdoor Defense Using Shapley Estimation (12/30/2021)
- Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency (03/27/2023)
- An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences (11/16/2021)
- TamperNN: Efficient Tampering Detection of Deployed Neural Nets (03/01/2019)
- DAD++: Improved Data-free Test Time Adversarial Defense (09/10/2023)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks (01/31/2022)
- DAD: Data-free Adversarial Defense at Test Time (04/04/2022)
