Going Far Boosts Attack Transferability, but Do Not Do It

02/20/2021
by   Sizhe Chen, et al.

Deep Neural Networks (DNNs) can be easily fooled by Adversarial Examples (AEs) that differ imperceptibly from the original samples to human eyes. Moreover, AEs crafted by attacking one surrogate DNN tend to fool other black-box DNNs as well, a property known as attack transferability. Existing works reveal that adopting certain optimization algorithms in an attack improves transferability, but the underlying reasons have not been thoroughly studied. In this paper, we investigate the impact of optimization on attack transferability through comprehensive experiments covering 7 optimization algorithms, 4 surrogates, and 9 black-box models. Through thorough empirical analysis from three perspectives, we surprisingly find that the varied transferability of AEs produced by different optimization algorithms is strongly related to the corresponding Root Mean Square Error (RMSE) from their original samples. On this basis, one could simply approach high transferability by attacking until the RMSE grows large, which motivates us to propose a LArge RMSE Attack (LARA). Although LARA significantly improves transferability by 20%, it does not reflect the real vulnerability of DNNs, leading to a natural urge that the strength of all attacks should be measured by both the widely used ℓ_∞ bound and the RMSE addressed in this paper, so that tricky enhancement of transferability would be avoided.
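The two perturbation metrics named in the abstract, the ℓ_∞ bound and the RMSE from the original sample, can be computed directly from the perturbation. The sketch below is illustrative only: it assumes images as NumPy arrays in [0, 1], and the function name `perturbation_metrics` is our own, not from the paper.

```python
import numpy as np

def perturbation_metrics(x, x_adv):
    # Measure an adversarial perturbation by both metrics discussed above:
    # the l_inf bound (largest absolute per-pixel change) and the RMSE
    # of the adversarial example from its original sample.
    delta = x_adv.astype(np.float64) - x.astype(np.float64)
    linf = np.abs(delta).max()
    rmse = np.sqrt(np.mean(delta ** 2))
    return linf, rmse

# Toy example: a random image-shaped array and a copy perturbed within
# a typical budget of eps = 8/255, clipped back to the valid range.
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32))
eps = 8 / 255
x_adv = np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0)

linf, rmse = perturbation_metrics(x, x_adv)
print(f"l_inf = {linf:.4f}, RMSE = {rmse:.4f}")
```

Note that a perturbation can stay inside a small ℓ_∞ ball while its RMSE varies considerably, which is why the abstract argues both quantities should be reported.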


Related research

09/19/2022 · On the Adversarial Transferability of ConvMixer Models
Deep neural networks (DNNs) are well known to be vulnerable to adversari...

04/23/2023 · StyLess: Boosting the Transferability of Adversarial Examples
Adversarial attacks can mislead deep neural networks (DNNs) by adding im...

06/14/2023 · Reliable Evaluation of Adversarial Transferability
Adversarial examples (AEs) with small adversarial perturbations can misl...

12/07/2020 · Backpropagating Linearly Improves Transferability of Adversarial Examples
The vulnerability of deep neural networks (DNNs) to adversarial examples...

08/17/2023 · Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks
Deep Neural Networks (DNNs) have been used to solve different day-to-day...

09/19/2020 · Adversarial Exposure Attack on Diabetic Retinopathy Imagery
Diabetic retinopathy (DR) is a leading cause of vision loss in the world...

02/14/2020 · Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
Skip connections are an essential component of current state-of-the-art ...
