On Success and Simplicity: A Second Look at Transferable Targeted Attacks

12/21/2020
by Zhengyu Zhao, et al.

There is broad consensus among researchers studying adversarial examples that targeted attacks are extremely difficult to make transferable. Existing work therefore pursues transferable targeted attacks by resorting to sophisticated losses and even massive additional training. In this paper, we take a second look at the transferability of targeted attacks and show that its difficulty has been overestimated due to a blind spot in conventional evaluation procedures: existing work has unreasonably restricted attack optimization to only a few iterations. We show that targeted attacks converge slowly to their optimal transferability and improve considerably when given more iterations. We also demonstrate that an attack that simply maximizes the target logit performs surprisingly well, clearly surpassing more complex losses and even achieving performance comparable to the state of the art, which requires massive training with a sophisticated loss. We further validate the logit attack in a realistic ensemble setting and in a real-world attack against the Google Cloud Vision API. Finally, the logit attack produces perturbations that reflect the target semantics, which, as we demonstrate, can be used to create targeted universal adversarial perturbations without additional training images.
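A logit-maximizing targeted attack of the kind described in the abstract admits a very compact implementation. The sketch below is illustrative only, assuming a PyTorch classifier that returns raw (pre-softmax) logits and images scaled to [0, 1]; the function name, step size, perturbation budget, and iteration count are assumptions for the example, not the paper's exact configuration.

```python
import torch

def logit_attack(model, x, target, eps=16 / 255, alpha=2 / 255, n_iter=300):
    """Targeted attack that directly maximizes the target-class logit.

    model  : classifier returning raw (pre-softmax) logits
    x      : clean images in [0, 1], shape (N, C, H, W)
    target : tensor of target class indices, shape (N,)
    eps    : L-infinity perturbation budget
    """
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Logit loss: sum of target-class logits, with no softmax or
        # cross-entropy, so the objective keeps producing useful gradients
        # even after many iterations.
        loss = logits.gather(1, target.unsqueeze(1)).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # ascend the target logit
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)    # project onto the L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                            # keep a valid image
        x_adv = x_adv.detach()
    return x_adv
```

One common motivation for using the raw logit rather than cross-entropy is that the cross-entropy gradient shrinks as the target probability approaches one, whereas the logit objective does not saturate, which fits the paper's point that transferability keeps improving when optimization runs for many iterations.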


Related research

09/08/2022  Incorporating Locality of Images to Generate Targeted Transferable Adversarial Examples
06/03/2022  Evaluating Transfer-based Targeted Adversarial Perturbations against Real-World Computer Vision Systems based on Human Judgments
08/21/2023  Enhancing Adversarial Attacks: The Similar Target Method
09/09/2021  Towards Transferable Adversarial Attacks on Vision Transformers
05/13/2020  Adversarial examples are useful too!
12/16/2019  CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator
08/14/2020  WAN: Watermarking Attack Network
