Visual Prompting for Adversarial Robustness

10/12/2022
by Aochuan Chen, et al.

In this work, we leverage visual prompting (VP) to improve the adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows us to design universal (i.e., data-agnostic) input prompting templates, which have plug-and-play capabilities at test time to achieve the desired model performance without introducing much computational overhead. Although VP has been successfully applied to improving model generalization, it remains elusive whether and how it can be used to defend against adversarial attacks. We investigate this problem and show that the vanilla VP approach is not effective for adversarial defense, since a universal input prompt lacks the capacity for robust learning against sample-specific adversarial perturbations. To circumvent this, we propose a new VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), which generates class-wise visual prompts so as to not only leverage the strengths of ensemble prompts but also optimize their interrelations to improve model robustness. Our experiments show that C-AVP outperforms the conventional VP method, with a 2.1X standard accuracy gain and a 2X robust accuracy gain. Compared to classical test-time defenses, C-AVP also yields a 42X inference-time speedup.
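The core mechanism described above, one additive prompt per class rather than a single universal one, with predictions taken over the ensemble of prompted inputs, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the `model` here is a random linear stand-in for the fixed, pre-trained classifier, and the prompts are random placeholders rather than the adversarially optimized prompts of C-AVP.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, H, W = 3, 8, 8

# Hypothetical stand-in for the fixed, pre-trained model:
# a random linear map from flattened images to class logits.
weights = rng.normal(size=(H * W, num_classes))

def model(x):
    """Return class logits for a batch of images of shape (B, H, W)."""
    return x.reshape(len(x), -1) @ weights

# One additive visual prompt per class. Vanilla VP would learn a single
# universal prompt; C-AVP would optimize these jointly on adversarial
# examples. Here they are untrained placeholders for illustration only.
prompts = rng.normal(scale=0.1, size=(num_classes, H, W))

def cavp_predict(x):
    """Ensemble prediction over class-wise prompts: add class c's prompt
    to the input, read off class c's logit, and predict the class whose
    own prompt produces the highest score."""
    scores = np.stack(
        [model(x + prompts[c])[:, c] for c in range(num_classes)],
        axis=1,
    )
    return scores.argmax(axis=1)

images = rng.normal(size=(4, H, W))
preds = cavp_predict(images)
print(preds.shape)
```

The plug-and-play property noted in the abstract is visible here: the prompts modify only the input, so the pre-trained `model` weights are never touched at test time.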

Related research

- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks (05/18/2021)
- Fast Adaptive Test-Time Defense with Robust Features (07/21/2023)
- Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses (04/01/2019)
- Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks (04/19/2022)
- Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses (11/10/2022)
- Evaluating the Adversarial Robustness of Adaptive Test-time Defenses (02/28/2022)
- Enhancing Adversarial Robustness via Score-Based Optimization (07/10/2023)
