Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm

06/10/2021
by Mingkang Zhu, et al.

Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels (regularized by the l_0 norm). Recent efforts combine this with an l_infty imperceptibility constraint on the perturbation magnitudes. The resulting sparse and imperceptible attacks are practically relevant and indicate an even higher vulnerability of DNNs than we usually imagined. However, such attacks are more challenging to generate because of the optimization difficulty of coupling the l_0 regularizer and box constraints with a non-convex objective. In this paper, we address this challenge by proposing a homotopy algorithm that jointly tackles the sparsity and the perturbation bound in one unified framework. In each iteration, the main step of our algorithm optimizes an l_0-regularized adversarial loss by leveraging the nonmonotone Accelerated Proximal Gradient Method (nmAPG) for nonconvex programming; it is followed by an l_0 change control step and an optional post-attack step designed to escape bad local minima. We also extend the algorithm to handle a structural sparsity regularizer. We extensively examine the effectiveness of the proposed homotopy attack in both targeted and non-targeted scenarios on the CIFAR-10 and ImageNet datasets. Compared to state-of-the-art methods, our homotopy attack produces significantly fewer perturbations, e.g., reducing perturbed pixels by 42.91% (average case, targeted attack), at similar maximal perturbation magnitudes, while still achieving 100% attack success rates. Our code is available at: https://github.com/VITA-Group/SparseADV_Homotopy.
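As a rough illustration of the pipeline described above, the sketch below (PyTorch, batch size 1 assumed) shows an outer homotopy loop that gradually relaxes the l_0 penalty, with a plain proximal-gradient inner step standing in for the paper's nmAPG solver; the l_0 change control and post-attack steps are omitted. All names here (homotopy_attack, hard_threshold, model, eps, lam0, decay, ...) are illustrative assumptions, not the authors' released interface; refer to the linked repository for the actual implementation.

import torch
import torch.nn.functional as F

def hard_threshold(delta, lam, step):
    # Proximal operator of step * lam * ||delta||_0: zero out entries whose
    # squared magnitude does not exceed 2 * step * lam (hard thresholding).
    keep = delta.pow(2) > 2.0 * step * lam
    return delta * keep

def homotopy_attack(model, x, label, eps=0.03, lam0=1.0, decay=0.9,
                    outer_iters=50, inner_iters=20, step=0.01):
    # Untargeted sparse attack sketch under an l_inf bound eps and the [0, 1]
    # pixel box; illustrative only, not the paper's nmAPG-based algorithm.
    delta = torch.zeros_like(x, requires_grad=True)
    lam = lam0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Adversarial loss: push the prediction away from the true label.
            loss = -F.cross_entropy(model(x + delta), label)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= step * grad                           # gradient step
                delta.copy_(hard_threshold(delta, lam, step))  # l_0 prox
                delta.clamp_(-eps, eps)                        # l_inf bound
                delta.copy_((x + delta).clamp(0.0, 1.0) - x)   # pixel box
        lam *= decay  # homotopy: relax the sparsity penalty between rounds
        with torch.no_grad():
            if model(x + delta).argmax(dim=1) != label:
                break  # misclassification achieved
    return delta.detach()

The hard-thresholding step is what enforces sparsity: it is the proximal operator of the scaled l_0 penalty, and decreasing lam across outer iterations traces a path from a very sparse perturbation toward denser ones until the attack succeeds.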

Related research

05/31/2021  Transferable Sparse Adversarial Attack
Deep neural networks have shown their vulnerability to adversarial attacks...

08/04/2022  A New Kind of Adversarial Example
Almost all adversarial attacks are formulated to add an imperceptible perturbation...

10/26/2020  GreedyFool: Distortion-Aware Sparse Adversarial Attack
Modern deep neural networks (DNNs) are vulnerable to adversarial samples...

08/05/2018  Structured Adversarial Attack: Towards General Implementation and Better Interpretability
When generating adversarial examples to attack deep neural networks (DNNs)...

03/09/2022  Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
In recent years, the adversarial vulnerability of deep neural networks (DNNs)...

03/18/2022  AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack
Deep neural networks (DNNs) have been proven to be vulnerable to adversarial...

11/10/2021  Sparse Adversarial Video Attacks with Spatial Transformations
In recent years, a significant amount of research efforts concentrated on...
