DAD++: Improved Data-free Test Time Adversarial Defense

09/10/2023
by Gaurav Kumar Nayak, et al.

With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios. A plethora of adversarial training and regularization-based techniques have been proposed to make deep networks robust against adversarial attacks. However, these methods require either retraining models or training them from scratch, making them infeasible for defending pre-trained models when access to the training data is restricted. To address this problem, we propose a test-time Data-free Adversarial Defense (DAD) comprising detection and correction frameworks. To further improve the efficacy of the correction framework when the detector is under-confident, we propose a soft-detection scheme (dubbed "DAD++"). We conduct a wide range of experiments and ablations on several datasets and network architectures to show the efficacy of our approach. Furthermore, we demonstrate its applicability for imparting adversarial defense at test time in data-free (or data-efficient) setups such as Data-free Knowledge Distillation, Source-free Unsupervised Domain Adaptation, and Semi-supervised classification. Across all experiments and applications, DAD++ performs impressively against various adversarial attacks with a minimal drop in clean accuracy. The source code is available at: https://github.com/vcl-iisc/Improved-Data-free-Test-Time-Adversarial-Defense
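The abstract describes a test-time pipeline with a detector and a corrector, where the soft-detection scheme blends the corrected prediction with the raw one according to the detector's confidence instead of a hard accept/reject decision. The following is a minimal illustrative sketch of that idea; the `classifier`, `corrector`, and `detector` functions here are toy stand-ins invented for this example, not the paper's actual modules.

```python
import numpy as np

def classifier(x):
    """Toy frozen pre-trained model: returns two-class logits."""
    return np.array([x.sum(), -x.sum()])

def corrector(x):
    """Toy input-correction step (here, clipping back to a valid range)."""
    return np.clip(x, 0.0, 1.0)

def detector(x):
    """Toy detector: soft score in [0, 1] that x is adversarial."""
    return float(np.clip(np.abs(x).max() - 1.0, 0.0, 1.0))

def soft_defended_predict(x):
    """Soft detection: weight clean vs. corrected predictions by the
    detector's confidence, so an under-confident detector still lets
    the correction branch contribute partially."""
    p_adv = detector(x)
    logits_raw = classifier(x)               # prediction on the raw input
    logits_corr = classifier(corrector(x))   # prediction after correction
    return (1.0 - p_adv) * logits_raw + p_adv * logits_corr

# A low detector score routes the raw prediction; a high score routes
# the corrected one; intermediate scores interpolate between the two.
x_clean = np.array([0.2, 0.5])
x_suspicious = np.array([2.5, -1.0])
```

With `x_clean` the toy detector scores 0, so the raw logits are returned unchanged; with `x_suspicious` it scores 1, so only the corrected-input prediction is used.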


