Improving Adversarial Transferability with Scheduled Step Size and Dual Example

01/30/2023
by Zeliang Zhang, et al.

Deep neural networks are widely known to be vulnerable to adversarial examples, performing especially poorly on examples generated under the white-box setting. However, most white-box attack methods rely heavily on the target model and quickly become trapped in local optima, resulting in poor adversarial transferability. Momentum-based methods and their variants have been proposed to escape these local optima and improve transferability. In this work, we observe that the transferability of adversarial examples generated by the iterative fast gradient sign method (I-FGSM) decreases as the number of iterations increases. Motivated by this finding, we argue that the information carried by adversarial perturbations near the benign sample, especially their direction, contributes more to transferability. We therefore propose a novel strategy that uses a Scheduled step size and a Dual example (SD) to fully exploit the adversarial information near the benign sample. The proposed strategy can be easily integrated with existing adversarial attack methods for better adversarial transferability. Empirical evaluations on the standard ImageNet dataset demonstrate that our method significantly enhances the transferability of existing adversarial attacks.
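The abstract builds on I-FGSM, which repeatedly takes signed gradient steps and projects back into an epsilon-ball around the benign input. Below is a minimal NumPy sketch of I-FGSM on a toy logistic model, with a hypothetical decaying step-size schedule standing in for the paper's "scheduled step size" idea (the exact schedule and the dual-example mechanism are the paper's contribution and are not reproduced here; the linear decay is an assumption for illustration).

```python
import numpy as np

def ifgsm_scheduled(x, y, w, b, eps=0.1, steps=10):
    """I-FGSM with a decaying step-size schedule (illustrative sketch).

    Toy binary logistic model: p = sigmoid(w @ x + b), label y in {0, 1}.
    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
    """
    # Hypothetical schedule: larger steps early, near the benign sample,
    # decaying linearly so later iterations move less (an assumption here,
    # not the paper's exact schedule).
    schedule = np.linspace(2.0, 0.5, steps) * (eps / steps)
    x_adv = x.copy()
    for alpha in schedule:
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))   # model prediction
        grad = (p - y) * w                           # dL/dx for cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project into the eps-ball
    return x_adv
```

Because each step uses only the gradient sign, the schedule controls how much of the total perturbation budget is spent near the benign sample versus in later iterations, which is the knob the paper's finding motivates tuning.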


Related Research

- 08/17/2019: Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples
- 03/19/2021: Boosting Adversarial Transferability through Enhanced Momentum
- 03/29/2021: Enhancing the Transferability of Adversarial Attacks through Variance Tuning
- 01/01/2022: Adversarial Attack via Dual-Stage Network Erosion
- 12/31/2020: Patch-wise++ Perturbation for Adversarial Targeted Attacks
- 03/25/2022: Improving Adversarial Transferability with Spatial Momentum
- 06/08/2023: Boosting Adversarial Transferability by Achieving Flat Local Maxima
