Boosting Adversarial Transferability by Block Shuffle and Rotation

08/20/2023
by Kunyu Wang, et al.

Adversarial examples mislead deep neural networks with imperceptible perturbations and pose a significant threat to deep learning. An important aspect is their transferability, i.e., their ability to deceive other models, which enables attacks in the black-box setting. Although various methods have been proposed to boost transferability, performance still falls short of white-box attacks. In this work, we observe that existing input transformation based attacks, one of the mainstream families of transfer-based attacks, produce different attention heatmaps on different models, which might limit transferability. We also find that breaking the intrinsic relations within an image disrupts the attention heatmap of the original image. Based on this finding, we propose a novel input transformation based attack called block shuffle and rotation (BSR). Specifically, BSR splits the input image into several blocks, then randomly shuffles and rotates these blocks to construct a set of new images for gradient calculation. Empirical evaluations on the ImageNet dataset demonstrate that BSR achieves significantly better transferability than existing input transformation based methods under both single-model and ensemble-model settings. Combining BSR with current input transformation methods further improves transferability, significantly outperforming the state-of-the-art methods.
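The transformation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block count, the number of transformed copies, and the choice of 90-degree rotations (which keep square blocks rectangular) are all assumptions made for this sketch.

```python
import numpy as np

def block_shuffle_rotate(image, n_blocks=2, rng=None):
    """Split a square (H, W, C) image into an n_blocks x n_blocks grid,
    randomly shuffle the blocks, and rotate each block by a random
    multiple of 90 degrees (an assumption of this sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    bh, bw = h // n_blocks, w // n_blocks
    # Collect the blocks in row-major order.
    blocks = [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(n_blocks) for j in range(n_blocks)]
    order = rng.permutation(len(blocks))
    out = np.empty_like(image[:bh * n_blocks, :bw * n_blocks])
    # Place each shuffled block back on the grid, randomly rotated.
    for dst, src in enumerate(order):
        block = np.rot90(blocks[src], k=int(rng.integers(4)))
        i, j = divmod(dst, n_blocks)
        out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = block
    return out

def bsr_copies(image, n_copies=20, n_blocks=2, rng=None):
    """Build a set of transformed copies; gradients averaged over such
    copies would drive the attack update in a BSR-style method."""
    return np.stack([block_shuffle_rotate(image, n_blocks, rng)
                     for _ in range(n_copies)])
```

Since shuffling and 90-degree rotation only rearrange pixels, each transformed copy contains exactly the same pixel values as the original image, just in a different spatial layout.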


Related research

01/31/2021  Admix: Enhancing the Transferability of Adversarial Attacks
Although adversarial attacks have achieved incredible attack success rat...

04/20/2023  Diversifying the High-level Features for better Adversarial Transferability
Given the great threat of adversarial attacks against Deep Neural Networ...

08/21/2023  Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer
Deep neural networks are vulnerable to adversarial examples crafted by a...

11/27/2021  Adaptive Image Transformations for Transfer-based Adversarial Attack
Adversarial attacks provide a good way to study the robustness of deep l...

08/25/2022  A Perturbation Resistant Transformation and Classification System for Deep Neural Networks
Deep convolutional neural networks accurately classify a diverse range o...

08/16/2021  Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy
The transferability and robustness of adversarial examples are two pract...

08/27/2022  SA: Sliding attack for synthetic speech detection with resistance to clipping and self-splicing
Deep neural networks are vulnerable to adversarial examples that mislead...
