Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks

05/24/2022
by Sizhe Chen, et al.

Score-based query attacks (SQAs) pose a practical threat to deep neural networks: using only the model's output scores, they craft adversarial perturbations within dozens of queries. Nonetheless, we note that if the loss trend of the outputs is slightly perturbed, SQAs can be easily misled and thereby become much less effective. Following this idea, we propose a novel defense, namely Adversarial Attack on Attackers (AAA), which confounds SQAs into incorrect attack directions by slightly modifying the output logits. In this way, (1) SQAs are prevented regardless of the model's worst-case robustness; (2) the original model predictions are hardly changed, i.e., there is no degradation in clean accuracy; (3) the calibration of confidence scores can be improved simultaneously. Extensive experiments verify these advantages. For example, under an ℓ_∞ bound of 8/255 on CIFAR-10, our proposed AAA helps WideResNet-28 secure 80.59% accuracy under the Square attack (2500 queries), while the best prior defense (i.e., adversarial training) attains only 67.44%. Since AAA attacks the general greedy strategy of SQAs, its advantages over 8 defenses can be consistently observed on 8 CIFAR-10/ImageNet models under 6 SQAs, using different attack targets and bounds. Moreover, AAA calibrates better without hurting accuracy. Our code will be released.
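The core idea is a post-processing step on the logits: the argmax (and hence the clean prediction) is preserved, but the top-1 margin is remapped so that its trend is locally reversed, steering a greedy score-based attacker in the wrong direction. The sketch below is a simplified, hypothetical illustration of such a defense (the function name, the sawtooth remapping, and the parameters `tau` and `period` are our assumptions for illustration, not the paper's exact formulation).

```python
import numpy as np

def aaa_postprocess(logits, tau=0.1, period=2.0):
    """Hypothetical sketch of a logit post-processor in the spirit of AAA.

    The true top-1 margin (top-1 logit minus runner-up) is remapped onto a
    decreasing sawtooth: within each `period`-wide band, a larger true margin
    yields a *smaller* output margin, so an attacker that greedily follows the
    score trend is pushed away from the decision boundary. The output margin
    stays at least `tau` > 0, so the predicted class never changes.
    """
    logits = np.asarray(logits, dtype=float)
    order = np.argsort(logits)
    top, second = order[-1], order[-2]
    margin = logits[top] - logits[second]
    # Reverse the within-band trend: offset inside the band is flipped.
    target = (np.floor(margin / period) + 1.0) * period - (margin % period)
    target = max(target, tau)  # keep a positive margin -> same argmax
    out = logits.copy()
    out[top] = logits[second] + target
    return out
```

For instance, with `period=2.0`, raising the true margin from 2.0 to 2.5 lowers the output margin from 4.0 to 3.5, so a query attacker observing the scores infers the opposite of the true attack direction, while `np.argmax` of the output matches the original prediction.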


