Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks

04/06/2022
by   Xu Han, et al.

Deep neural networks have been shown to be highly vulnerable to adversarial examples crafted by adding human-imperceptible perturbations to benign inputs. With impressive attack success rates already achieved in the white-box setting, attention has increasingly shifted to black-box attacks. In either case, common gradient-based approaches generally apply the sign function to generate perturbations at the end of the process. However, few works have examined the limitations of the sign function. The deviation between the original gradient and the generated noise may lead to inaccurate gradient-update estimates and suboptimal solutions for adversarial transferability, which is crucial for black-box attacks. To address this issue, we propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the transferability of crafted adversarial examples. Specifically, we use data rescaling to replace the inefficient sign function in gradient-based attacks without extra computational cost. We also propose a Depth First Sampling method to eliminate the fluctuation introduced by rescaling and stabilize the gradient update. Our method can be used with any gradient-based optimization and can be integrated with various input-transformation or ensemble methods to further improve adversarial transferability. Extensive experiments on the standard ImageNet dataset show that our S-FGRM significantly boosts the transferability of gradient-based attacks and outperforms state-of-the-art baselines.
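The exact rescaling and Depth First Sampling procedures are defined in the full paper; the sketch below is only an illustrative PyTorch contrast between the standard sign-based update used by FGSM-style attacks and a hypothetical rescaled-gradient update that preserves relative per-pixel magnitudes. The normalization used here (dividing by the per-example maximum absolute gradient) is an assumption for illustration, not the paper's formula.

```python
import torch

def sign_step(x_adv: torch.Tensor, grad: torch.Tensor, alpha: float) -> torch.Tensor:
    # Standard iterative FGSM update: only the sign of the gradient is used,
    # so all per-pixel magnitude information is discarded.
    return x_adv + alpha * grad.sign()

def rescaled_step(x_adv: torch.Tensor, grad: torch.Tensor, alpha: float) -> torch.Tensor:
    # Hypothetical rescaling update (NOT the paper's exact S-FGRM formula):
    # normalize each example's gradient so its largest absolute entry is 1,
    # keeping the relative magnitudes that the sign function throws away.
    # Assumes a 4-D input batch of shape (N, C, H, W).
    scale = grad.abs().amax(dim=(1, 2, 3), keepdim=True) + 1e-12
    return x_adv + alpha * grad / scale
```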


