Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization

11/21/2022
by Jiafeng Wang, et al.

Deep neural networks are vulnerable to adversarial examples, which add human-imperceptible perturbations to benign inputs. At the same time, adversarial examples transfer across different models, which makes practical black-box attacks feasible. However, existing methods still fall short of the desired transfer attack performance. In this work, from the perspective of gradient optimization and consistency, we analyze and identify the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these issues, we propose Global Momentum Initialization (GI) to suppress gradient elimination and help search for the global optimum. Specifically, we perform gradient pre-convergence before the attack and carry out a global search during the pre-convergence stage. Our method can be easily combined with almost all existing transfer methods, and it improves the success rate of transfer attacks significantly, by an average of 6.4% under advanced defense mechanisms, compared to state-of-the-art methods. Eventually, we achieve an attack success rate of 95.4% against existing defense mechanisms.
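The core idea described above — accumulating momentum in a pre-convergence stage with an enlarged search step, then using that momentum to initialize a standard momentum-based iterative attack — can be sketched as follows. This is a minimal illustrative NumPy implementation built on the well-known MI-FGSM update; the function name, hyperparameter names (`pre_steps`, `search_factor`), and the toy loss in the usage example are assumptions for illustration, not the authors' reference code.

```python
import numpy as np

def gi_mifgsm_attack(loss_grad, x, eps=0.3, steps=10, mu=1.0,
                     pre_steps=5, search_factor=5.0):
    """Sketch of MI-FGSM with Global Momentum Initialization (GI).

    loss_grad: callable returning dLoss/dx at a point.
    Pre-convergence stage: accumulate momentum for `pre_steps`
    iterations with an enlarged step (search_factor * alpha) to
    perform a global search; the resulting perturbation is discarded,
    and only the accumulated momentum `g` initializes the real attack.
    All hyperparameter names/values here are illustrative assumptions.
    """
    alpha = eps / steps          # per-step size of the main attack
    g = np.zeros_like(x)         # momentum accumulator

    # --- pre-convergence: global search to warm up the momentum ---
    x_pre = x.copy()
    for _ in range(pre_steps):
        grad = loss_grad(x_pre)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized
        x_pre = x_pre + search_factor * alpha * np.sign(g)

    # --- main attack: standard MI-FGSM, momentum pre-initialized ---
    x_adv = x.copy()
    for _ in range(steps):
        grad = loss_grad(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        # gradient ascent on the loss, projected into the eps-ball
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

As a toy usage example, attacking the loss L(x) = ||x - t||^2 (whose gradient is 2(x - t)) pushes the input away from the target t while staying inside the L-infinity eps-ball around the original input.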

Related research

05/11/2021 · Improving Adversarial Transferability with Gradient Refining
Deep neural networks are vulnerable to adversarial examples, which are c...

10/17/2017 · Boosting Adversarial Attacks with Momentum
Deep neural networks are vulnerable to adversarial examples, which poses...

05/10/2021 · Adversarial examples attack based on random warm restart mechanism and improved Nesterov momentum
The deep learning algorithm has achieved great success in the field of c...

10/22/2020 · Defense-guided Transferable Adversarial Attacks
Though deep neural networks perform challenging tasks excellently, they ...

05/27/2020 · Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques
Deep Neural Networks (DNNs) are well-known to be vulnerable to Adversari...

08/15/2023 · Backpropagation Path Search On Adversarial Transferability
Deep neural networks are vulnerable to adversarial examples, dictating t...

04/22/2022 · Enhancing the Transferability via Feature-Momentum Adversarial Attack
Transferable adversarial attack has drawn increasing attention due to th...
