DAD: Data-free Adversarial Defense at Test Time

04/04/2022
by Gaurav Kumar Nayak, et al.

Deep models are highly susceptible to adversarial attacks: carefully crafted, imperceptible perturbations that can fool a network and cause severe consequences once it is deployed. Countering them typically requires access to the training data, either for adversarial training or for explicit regularization-based techniques. However, privacy has become an important concern, often restricting access to only the trained models and not the training data (e.g. biometric data). Moreover, data curation is expensive, and companies may hold proprietary rights over it. To handle such situations, we propose the completely novel problem of 'test-time adversarial defense in the absence of training data and even their statistics'. We solve it in two stages: (a) detection and (b) correction of adversarial samples. Our adversarial sample detection framework is initially trained on arbitrary data and is subsequently adapted to the unlabelled test data through unsupervised domain adaptation. We further correct the predictions on detected adversarial samples by transforming them into the Fourier domain and obtaining their low-frequency component at our proposed suitable radius for model prediction. We demonstrate the efficacy of the proposed technique via extensive experiments against several adversarial attacks, across different model architectures and datasets. For a non-robust ResNet-18 model pre-trained on CIFAR-10, our detection method correctly identifies 91.42% of the adversaries. We also significantly improve the adversarial accuracy from 0% with only a minimal drop of 0.02% in clean accuracy, without having to retrain the model.
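The correction stage described above can be illustrated with a minimal sketch: transform the detected adversarial image into the Fourier domain, keep only the frequencies within a circular mask of a given radius, and invert the transform. This sketch assumes a single-channel NumPy image; the paper's procedure for choosing the suitable radius is not reproduced here, so `radius` is simply a hypothetical parameter.

```python
import numpy as np

def low_frequency_component(image, radius):
    """Keep only the frequencies within `radius` of the spectrum centre.

    Illustrative sketch of Fourier-domain low-pass correction; `radius`
    stands in for the paper's proposed suitable radius, whose selection
    procedure is not shown here.
    """
    # 2-D FFT, shifted so low frequencies sit at the centre of the spectrum
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Circular low-pass mask of the given radius around the centre
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2

    # Zero out the high frequencies and invert the transform
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```

The filtered image, rather than the original input, would then be passed to the pre-trained model for prediction; a small radius removes more of the high-frequency adversarial perturbation but also discards more image detail.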


Related research

- 09/10/2023: DAD++: Improved Data-free Test Time Adversarial Defense
  With the increasing deployment of deep neural networks in safety-critica...
- 03/22/2023: Test-time Defense against Adversarial Attacks: Detection and Reconstruction of Adversarial Examples via Masked Autoencoder
  Existing defense methods against adversarial attacks can be categorized ...
- 02/20/2020: Towards Certifiable Adversarial Sample Detection
  Convolutional Neural Networks (CNNs) are deployed in more and more class...
- 08/29/2023: Advancing Adversarial Robustness Through Adversarial Logit Update
  Deep Neural Networks are susceptible to adversarial perturbations. Adver...
- 11/10/2022: Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses
  Keyless entry systems in cars are adopting neural networks for localizin...
- 08/11/2023: Test-Time Adaptation for Backdoor Defense
  Deep neural networks have played a crucial part in many critical domains...
- 04/06/2023: Reliable Learning for Test-time Attacks and Distribution Shift
  Machine learning algorithms are often used in environments which are not...
