Enhancing the Transferability of Adversarial Attacks through Variance Tuning

03/29/2021
by   Xiaosen Wang, et al.

Deep neural networks are vulnerable to adversarial examples that mislead the models with imperceptible perturbations. Though adversarial attacks have achieved incredible success rates in the white-box setting, most existing adversaries often exhibit weak transferability in the black-box setting, especially under the scenario of attacking models with defense mechanisms. In this work, we propose a new method called variance tuning to enhance the class of iterative gradient-based attack methods and improve their attack transferability. Specifically, at each iteration of the gradient calculation, instead of directly using the current gradient for the momentum accumulation, we further consider the gradient variance of the previous iteration to tune the current gradient so as to stabilize the update direction and escape from poor local optima. Empirical results on the standard ImageNet dataset demonstrate that our method could significantly improve the transferability of gradient-based adversarial attacks. Besides, our method could be used to attack ensemble models or be integrated with various input transformations. Incorporating variance tuning with input transformations on iterative gradient-based attacks in the multi-model setting, the integrated method could achieve an average success rate of 90.1% against nine advanced defense methods, improving the current best attack performance significantly by 85.1%. Code is available at https://github.com/JHL-HUST/VT.
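The update rule the abstract describes can be sketched as follows. This is a minimal NumPy illustration of variance-tuned momentum iterative FGSM, not the authors' implementation: `grad_fn`, the hyperparameter defaults, and the sampling count are illustrative assumptions, with the gradient variance estimated by averaging gradients at uniformly sampled neighbors of the current point.

```python
import numpy as np

def vt_mi_fgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0, n_samples=20, beta=1.5, seed=0):
    """Sketch of variance-tuned momentum iterative FGSM (VT-MI-FGSM).

    grad_fn(x) returns the gradient of the attack loss w.r.t. x.
    Names and defaults here are illustrative, not the paper's exact setup.
    """
    rng = np.random.default_rng(seed)
    alpha = eps / steps                 # per-step size
    x_adv = x.copy()
    momentum = np.zeros_like(x)
    variance = np.zeros_like(x)         # gradient variance from the previous step
    for _ in range(steps):
        g = grad_fn(x_adv)
        # Tune the current gradient with last step's variance before
        # accumulating momentum (the core of variance tuning).
        tuned = g + variance
        momentum = mu * momentum + tuned / (np.abs(tuned).sum() + 1e-12)
        # Estimate the new variance: average gradient over random
        # neighbors of x_adv, minus the current gradient.
        neigh = np.zeros_like(x)
        for _ in range(n_samples):
            r = rng.uniform(-beta * eps, beta * eps, size=x.shape)
            neigh += grad_fn(x_adv + r)
        variance = neigh / n_samples - g
        # Ascend along the momentum sign, staying inside the eps-ball.
        x_adv = np.clip(x_adv + alpha * np.sign(momentum), x - eps, x + eps)
    return x_adv
```

With a toy differentiable loss, e.g. `vt_mi_fgsm(np.zeros(4), lambda z: 2 * z + 1.0)`, the returned point stays within the eps-ball of the input while following the stabilized momentum direction.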


Related research

03/19/2021
Boosting Adversarial Transferability through Enhanced Momentum
Deep learning models are known to be vulnerable to adversarial examples ...

01/31/2021
Admix: Enhancing the Transferability of Adversarial Attacks
Although adversarial attacks have achieved incredible attack success rat...

08/27/2022
SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing
Deep neural networks are vulnerable to adversarial examples that mislead...

11/21/2021
Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
The black-box adversarial attack has attracted impressive attention for ...

03/27/2023
Improving the Transferability of Adversarial Examples via Direction Tuning
In the transfer-based adversarial attacks, adversarial examples are only...

07/21/2023
Improving Transferability of Adversarial Examples via Bayesian Attacks
This paper presents a substantial extension of our work published at ICL...

01/30/2023
Improving Adversarial Transferability with Scheduled Step Size and Dual Example
Deep neural networks are widely known to be vulnerable to adversarial ex...
