Boosting Adversarial Transferability with Learnable Patch-wise Masks

06/28/2023
by   Xingxing Wei, et al.

Adversarial examples have raised widespread attention in security-critical applications because of their transferability across different models. Although many methods have been proposed to boost adversarial transferability, a gap still exists between their performance and the practical demand. In this paper, we argue that model-specific discriminative regions are a key factor causing over-fitting to the source model, which in turn reduces transferability to the target model. We therefore use a patch-wise mask to prune the model-specific regions when calculating adversarial perturbations. To accurately localize these regions, we present a learnable approach that optimizes the mask automatically. Specifically, we simulate the target models within our framework and adjust the patch-wise mask according to the feedback of the simulated models. To improve efficiency, the Differential Evolutionary (DE) algorithm is utilized to search for patch-wise masks for a specific image. During iterative attacks, the learned masks are applied to the image to drop out the patches related to model-specific regions, making the gradients more generic and improving adversarial transferability. The proposed approach is a pre-processing method and can be integrated with existing gradient-based methods to further boost the transfer attack success rate. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method. We incorporate the proposed approach with existing methods in ensemble attacks and achieve an average success rate of 93.01% against advanced defense methods, which effectively enhances state-of-the-art transfer-based attack performance.
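The core mechanism described above can be sketched in a few lines: build a binary patch-wise mask over the image grid, then use it inside an iterative sign-gradient step so that patches flagged as model-specific contribute nothing to the update. This is a minimal NumPy sketch, not the authors' implementation; the patch size, the `keep` vector (which the paper learns via DE), and the placeholder gradient are all illustrative assumptions.

```python
import numpy as np

def patch_mask(h, w, patch, keep):
    """Build an (h, w) binary mask from a flat per-patch keep vector.

    The image is divided into (h // patch) x (w // patch) cells; each entry
    of `keep` (1 = retain, 0 = drop) is expanded to a patch x patch block.
    In the paper this vector would be searched by a DE-style optimizer;
    here it is supplied directly for illustration.
    """
    gh, gw = h // patch, w // patch
    grid = np.asarray(keep, dtype=float).reshape(gh, gw)
    return np.kron(grid, np.ones((patch, patch)))

def masked_sign_step(x, grad, mask, alpha=2 / 255):
    """One iterative-attack step with patch-wise drop-out.

    Gradient contributions from dropped patches are zeroed before the
    sign update, so model-specific regions do not drive the perturbation.
    `grad` stands in for a surrogate model's input gradient.
    """
    return np.clip(x + alpha * np.sign(grad * mask), 0.0, 1.0)
```

As a usage example, a 4x4 image with 2x2 patches and `keep = [1, 0, 0, 1]` retains only the top-left and bottom-right patches, so the sign update touches only those eight pixels.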


