Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks

09/12/2023
by   Jindong Gu, et al.

Deep neural networks can be easily fooled by small, imperceptible perturbations. A query-based black-box attack (QBBA) can craft such perturbations using only the model's output probabilities on image queries, requiring no access to the underlying model, so QBBA poses a realistic threat to real-world applications. Recently, various types of robustness have been explored to defend against QBBA. In this work, we first taxonomize the stochastic defense strategies against QBBA. Following our taxonomy, we propose exploring non-additive randomness in models to defend against QBBA. Specifically, we focus on the underexplored Vision Transformers, whose flexible architecture lends itself to such randomness. Extensive experiments show that the proposed approach defends effectively without sacrificing much performance.
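To make the threat model concrete, here is a minimal sketch (not the paper's method) of a score-based query attack in the style of SimBA: the attacker perturbs one input coordinate at a time and keeps the change only if the observed probability of the true class drops. The toy linear "black box", the `eps` step size, and the non-additive defense that randomly drops input features (loosely analogous to dropping ViT patch tokens) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black-box" classifier: softmax over a random linear model.
# The attacker can only call predict() and observe the probabilities.
W = rng.normal(size=(16, 3))

def predict(x):
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def simba_attack(x, label, eps=0.5, n_queries=200):
    """SimBA-style attack: try +/-eps on one random coordinate per step
    and keep the perturbation if it lowers the probability of `label`."""
    x = x.copy()
    p_best = predict(x)[label]
    for _ in range(n_queries):
        i = rng.integers(len(x))
        for step in (+eps, -eps):
            x_try = x.copy()
            x_try[i] += step
            p = predict(x_try)[label]
            if p < p_best:
                x, p_best = x_try, p
                break
    return x, p_best

def defended_predict(x):
    """Non-additive randomness sketch (illustrative, not the paper's design):
    randomly drop half of the input features on every call, so successive
    query scores fluctuate and the attacker's greedy comparisons become
    unreliable."""
    mask = rng.random(len(x)) < 0.5
    return predict(x * mask)

x0 = rng.normal(size=16)
y = int(np.argmax(predict(x0)))
x_adv, p_adv = simba_attack(x0, y)
print(predict(x0)[y], p_adv)  # the true-class probability never increases
```

Against `predict`, each accepted step strictly lowers the true-class score; against a stochastic model like `defended_predict`, the same accept/reject signal is noisy, which is the intuition behind randomness-based defenses.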


Related research

01/13/2021  Small Input Noise is Enough to Defend Against Query-based Black-box Attacks
            While deep neural networks show unprecedented performance in various tas...

04/23/2021  Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks
            The query-based black-box attacks, which don't require any knowledge abo...

04/13/2023  Certified Zeroth-order Black-Box Defense with Robust UNet Denoiser
            Certified defense methods against adversarial perturbations have been re...

05/24/2022  Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks
            The score-based query attacks (SQAs) pose practical threats to deep neur...

08/12/2022  Unifying Gradients to Improve Real-world Robustness for Deep Networks
            The wide application of deep neural networks (DNNs) demands an increasin...

09/04/2023  Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks
            Model stealing attacks have become a serious concern for deep learning m...

01/01/2022  Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness
            It is well-known that deep neural networks (DNNs) have shown remarkable ...
