Feature Importance-aware Transferable Adversarial Attacks

07/29/2021
by Zhibo Wang, et al.

Transferability of adversarial examples is of central importance for attacking an unknown model, as it enables adversarial attacks in more practical scenarios, e.g., black-box attacks. Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features to degrade prediction accuracy in a source model, without being aware of the intrinsic features of the objects in the images. We argue that such brute-force degradation introduces model-specific local optima into adversarial examples, thus limiting transferability. By contrast, we propose the Feature Importance-aware Attack (FIA), which disrupts the important object-aware features that consistently dominate model decisions. More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model computed on a batch of random transforms of the original clean image. These gradients are highly correlated with the objects of interest, and this correlation is invariant across different models. Moreover, the random transforms preserve the intrinsic features of the objects while suppressing model-specific information. Finally, the feature importance guides the search for adversarial examples toward disrupting critical features, achieving stronger transferability. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed FIA, i.e., improving the success rate by 8.4% against normally trained models and 11.7% against defense models compared with state-of-the-art transferable attacks. Code is available at: https://github.com/hcguoO0/FIA
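For readers who want a concrete picture of the two steps described above (estimating feature importance via the aggregate gradient, then attacking the weighted features), the following is a minimal sketch in PyTorch. This is an assumption for illustration, not the authors' method: the reference implementation at the repository linked above may use a different framework, and the function and parameter names (aggregate_gradient, fia_attack, n_ens, drop_prob, eps, alpha, steps), the pixel-dropping transform, the use of the source model's predicted class, and the sign and normalization conventions are all illustrative choices not specified in the abstract.

    import torch

    def aggregate_gradient(model, feature_layer, x, n_ens=30, drop_prob=0.7):
        """Sketch: average the gradients of the source model's class score
        w.r.t. an intermediate feature map over randomly masked copies of the
        clean image x (shape [B, C, H, W], values in [0, 1])."""
        feats = {}
        handle = feature_layer.register_forward_hook(
            lambda module, inp, out: feats.update({"map": out}))

        # Use the source model's own prediction as the class of interest
        # (an assumption; the ground-truth label could be used instead).
        with torch.no_grad():
            label = model(x).argmax(dim=1, keepdim=True)

        agg = 0.0
        for _ in range(n_ens):
            # Random transform: randomly drop pixels of the clean image.
            mask = torch.bernoulli(torch.full_like(x, 1.0 - drop_prob))
            logits = model(x * mask)
            score = logits.gather(1, label).sum()
            agg = agg + torch.autograd.grad(score, feats["map"])[0]

        handle.remove()
        # Normalize so the averaged gradient acts as a per-feature weight map.
        agg = agg / (agg.flatten(start_dim=1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        return agg.detach()

    def fia_attack(model, feature_layer, x, eps=16/255, alpha=1.6/255, steps=10):
        """Sketch: craft adversarial examples that suppress the features the
        aggregate gradient marks as important, under an L_inf budget eps."""
        weights = aggregate_gradient(model, feature_layer, x)

        feats = {}
        handle = feature_layer.register_forward_hook(
            lambda module, inp, out: feats.update({"map": out}))

        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            model(x_adv)
            # Feature-disruption objective: large positive weights mark features
            # that support the decision, so we drive this loss downward.
            loss = (weights * feats["map"]).sum()
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = (x_adv - alpha * grad.sign()).detach()
            # Project back into the L_inf ball around the clean image.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

        handle.remove()
        return x_adv

In a typical use, one would pass a pretrained source model, a handle to one of its intermediate layers (e.g., a mid-level convolutional block), and a batch of clean images; the returned adversarial examples are then evaluated against unseen target models to measure transferability.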

research
02/28/2022

Enhance transferability of adversarial examples with model architecture

Transferability of adversarial examples is of critical importance to lau...
research
05/24/2023

Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup

Deep neural networks are widely known to be susceptible to adversarial e...
research
04/05/2023

How to choose your best allies for a transferable attack?

The transferability of adversarial examples is a key issue in the securi...
research
03/17/2022

Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

The transferability of adversarial examples allows the deception on blac...
research
04/26/2022

Boosting Adversarial Transferability of MLP-Mixer

The security of models based on new architectures such as MLP-Mixer and ...
research
06/08/2023

Boosting Adversarial Transferability by Achieving Flat Local Maxima

Transfer-based attack adopts the adversarial examples generated on the s...
research
10/25/2019

Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples

We investigate if the random feature selection approach proposed in [1] ...
