Efficient Project Gradient Descent for Ensemble Adversarial Attack

06/07/2019
by Fanyou Wu, et al.

Recent advances show that deep neural networks are not robust to deliberately crafted adversarial examples, many of which are generated by adding human-imperceptible perturbations to clean inputs. For l_2-norm attacks, Projected Gradient Descent (PGD) and the Carlini and Wagner (C&W) attack are the two main methods: PGD bounds the maximum perturbation of the adversarial example, while the C&W approach treats the perturbation as a regularization term and optimizes it jointly with the loss function. With parameters carefully tuned for each individual input, the two methods behave similarly. In general, with parameters fixed across all inputs, PGD attacks run faster but need larger perturbations to find adversarial examples than C&W. In this report, we propose an efficient modified PGD method for attacking ensemble models that automatically adjusts the ensemble weights and step size per iteration and per input. This method generates adversarial examples with smaller perturbations than the standard PGD method while remaining efficient compared to the C&W method. Our method won first place in the IJCAI-19 Targeted Adversarial Attack competition.
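The abstract does not spell out the algorithm, but the core idea, a targeted ensemble PGD whose ensemble weights and step size adapt per iteration and per input, can be sketched as follows. This is an illustrative assumption, not the authors' exact implementation: the loss-based softmax weighting, the rule that halves the step once an input already fools every model, and names such as `ensemble_pgd_attack` are all hypothetical.

```python
# Minimal sketch of a targeted l_2 ensemble PGD attack with per-input
# adaptive step size and ensemble weights (assumed rules, not the paper's).
import torch
import torch.nn.functional as F


def ensemble_pgd_attack(models, x, target, eps=8.0, steps=20, alpha0=2.0):
    """Drive every model in `models` toward predicting `target` for each input in `x`."""
    x_adv = x.clone().detach()
    alpha = torch.full((x.size(0),), alpha0, device=x.device)  # per-input step size

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = [m(x_adv) for m in models]
        losses = torch.stack(
            [F.cross_entropy(l, target, reduction="none") for l in logits]
        )  # shape: (n_models, batch)

        # Assumed weighting rule: emphasize the models that are currently
        # hardest to fool (largest targeted loss) for each input.
        w = torch.softmax(losses.detach(), dim=0)
        loss = (w * losses).sum()

        grad, = torch.autograd.grad(loss, x_adv)
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
        step = alpha.view(-1, 1, 1, 1) * grad / g_norm.view(-1, 1, 1, 1)
        x_adv = (x_adv - step).detach()  # descend the targeted loss

        # Project back onto the l_2 ball of radius eps around the clean input.
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
        factor = (eps / d_norm).clamp(max=1.0).view(-1, 1, 1, 1)
        x_adv = x + delta * factor

        # Assumed per-input schedule: once an input fools all models, shrink
        # its step so later iterations refine (reduce) the perturbation.
        fooled = torch.stack([l.argmax(1) == target for l in logits]).all(0)
        alpha = torch.where(fooled, alpha * 0.5, alpha)

    return x_adv.detach()
```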
