Incorporating Locality of Images to Generate Targeted Transferable Adversarial Examples

09/08/2022
by   Zhipeng Wei, et al.

Although leveraging the transferability of adversarial examples can attain fairly high attack success rates in non-targeted attacks, it does not work well for targeted attacks, since the gradient directions from a source image to a targeted class usually differ across DNNs. To increase the transferability of targeted attacks, recent studies align the features of the generated adversarial example with the feature distribution of the targeted class, learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require substantial time to train networks, which makes them hard to apply in real-world scenarios. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using the classification loss alone, LI introduces a feature similarity loss between the intermediate features of the adversarially perturbed original image and those of randomly cropped images, which makes the features from the adversarial perturbation more dominant than those of the benign image, hence improving targeted transferability. By incorporating the locality of images into the optimization of perturbations, the LI attack emphasizes that targeted perturbations should be universal across diverse input patterns, even local image patches. Extensive experiments demonstrate that LI achieves high success rates for transfer-based targeted attacks. On the ImageNet-compatible dataset, LI yields an improvement of 12% over existing state-of-the-art methods.
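To make the objective concrete, here is a minimal PyTorch sketch of the LI loss as the abstract describes it: a targeted classification loss plus a feature similarity loss between intermediate features of the full adversarial image and a random crop of it. The surrogate model (resnet50), the hooked layer (layer3), the crop scale, the step sizes, and the weight lambda_fs are illustrative assumptions, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Surrogate model; the choice of network and hooked layer is an assumption.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
feats = {}
model.layer3.register_forward_hook(lambda m, i, o: feats.update(mid=o))

def li_loss(x_adv, target, lambda_fs=1.0):
    # Targeted classification loss on the full adversarial image
    # (the hook captures intermediate features as a side effect).
    logits = model(x_adv)
    f_full = feats["mid"].flatten(1)
    cls_loss = F.cross_entropy(logits, target)

    # Feature similarity loss: a random crop (a local patch) of the same
    # adversarial image should yield similar intermediate features, so the
    # perturbation must dominate even local input patterns.
    crop = T.RandomResizedCrop(x_adv.shape[-1], scale=(0.5, 0.9))
    model(crop(x_adv))
    f_crop = feats["mid"].flatten(1)
    fs_loss = 1.0 - F.cosine_similarity(f_full, f_crop).mean()

    return cls_loss + lambda_fs * fs_loss

# One sign-gradient step under an L_inf budget (illustrative values).
eps, alpha = 16 / 255, 1.6 / 255
x = torch.rand(1, 3, 224, 224)      # stand-in for a source image
target = torch.tensor([207])        # hypothetical target class
delta = torch.zeros_like(x, requires_grad=True)

loss = li_loss(torch.clamp(x + delta, 0, 1), target)
loss.backward()
with torch.no_grad():
    delta -= alpha * delta.grad.sign()  # descend toward the target class
    delta.clamp_(-eps, eps)
```

In a full attack this step would be iterated; the cosine-similarity term pushes the optimizer toward perturbations whose effect survives cropping, which is the universality property the paper argues correlates with targeted transferability.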

Related research

05/24/2023
Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup
Deep neural networks are widely known to be susceptible to adversarial e...

12/21/2020
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
There is broad consensus among researchers studying adversarial examples...

12/06/2017
Generative Adversarial Perturbations
In this paper, we propose novel generative models for creating adversari...

07/13/2020
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
A wide variety of works have explored the reason for the existence of ad...

03/26/2021
On Generating Transferable Targeted Perturbations
While the untargeted black-box transferability of adversarial perturbati...

08/11/2022
Diverse Generative Adversarial Perturbations on Attention Space for Transferable Adversarial Attacks
Adversarial attacks with improved transferability - the ability of an ad...

04/29/2020
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
We consider the blackbox transfer-based targeted adversarial attack thre...
