Curls & Whey: Boosting Black-Box Adversarial Attacks

04/02/2019
by Yucheng Shi, et al.

Image classifiers based on deep neural networks are vulnerable to adversarial examples. Black-box iterative attacks, which generate adversarial examples by incrementally adjusting the noise-adding direction at each step, suffer from two defects. On the one hand, existing iterative attacks add noise monotonically along the direction of gradient ascent, so the generated iterative trajectories lack diversity and adaptability. On the other hand, it is trivial to perform an adversarial attack by adding excessive noise, but there is currently no refinement mechanism to squeeze out redundant noise. In this work, we propose the Curls & Whey black-box attack to fix these two defects. During the Curls iteration, we combine gradient ascent and descent to `curl' up iterative trajectories, integrating more diversity and transferability into the adversarial examples. The Curls iteration also alleviates the diminishing marginal effect observed in existing iterative attacks. The Whey optimization then squeezes out the `whey' of noise by exploiting the robustness of the adversarial perturbation. Extensive experiments on ImageNet and Tiny-ImageNet demonstrate that our approach achieves a marked decrease in noise magnitude under the l2 norm. The Curls & Whey attack also shows promising transferability against ensemble models as well as adversarially trained models. In addition, we extend our attack to targeted misclassification, effectively reducing the difficulty of targeted attacks under the black-box condition.
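
The abstract only describes the two ideas at a high level; the sketch below is an illustrative toy version, not the authors' released implementation. It shows an iterative attack whose trajectory mixes gradient-ascent and gradient-descent steps (a loose analogue of the Curls iteration) followed by a binary-search pass that shrinks the perturbation while keeping the misclassification (a loose analogue of the Whey squeeze). The function name `curls_whey_sketch`, the step sizes, and the batch-level stopping test are assumptions; a true black-box attack would obtain gradients from a substitute model rather than calling autograd on the target classifier.

```python
# Illustrative sketch only: Curls-style iteration plus a Whey-style noise
# squeeze, written against a white-box substitute model for simplicity.
# Names and hyperparameters here are hypothetical, not from the paper.
import torch
import torch.nn.functional as F


def curls_whey_sketch(model, x, y, steps=10, step_size=0.01, squeeze_steps=10):
    """x: clean image batch in [0, 1], y: true labels. Returns an adversarial batch."""
    x_adv = x.clone()

    # "Curls": at each step try both the gradient-ascent and the
    # gradient-descent direction and keep whichever raises the loss more,
    # so the trajectory is not forced to climb monotonically.
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            candidates = [x_adv + step_size * grad.sign(),   # ascent
                          x_adv - step_size * grad.sign()]   # descent ("curl")
            losses = [F.cross_entropy(model(c), y) for c in candidates]
            x_adv = candidates[0] if losses[0] >= losses[1] else candidates[1]
            x_adv = x_adv.clamp(0, 1).detach()

    # "Whey": binary-search back toward the clean image, keeping the
    # smallest interpolated perturbation that still flips the prediction.
    with torch.no_grad():
        lo, hi = 0.0, 1.0          # interpolation weight on the noise
        best = x_adv
        for _ in range(squeeze_steps):
            mid = (lo + hi) / 2.0
            cand = x + mid * (x_adv - x)
            # Crude batch-level check for brevity; a per-sample check is better.
            still_adv = (model(cand).argmax(dim=1) != y).all()
            if still_adv:
                best, hi = cand, mid   # noise can shrink further
            else:
                lo = mid               # need more noise
    return best
```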


Related research

12/01/2020 · Improving the Transferability of Adversarial Examples with the Adam Optimizer
11/24/2020 · Stochastic sparse adversarial attacks
10/17/2017 · Boosting Adversarial Attacks with Momentum
04/06/2022 · Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks
12/31/2020 · Patch-wise++ Perturbation for Adversarial Targeted Attacks
06/21/2019 · Adversarial Examples to Fool Iris Recognition Systems
04/20/2021 · Staircase Sign Method for Boosting Adversarial Attacks
