Fast Gradient Non-sign Methods

10/25/2021
by Yaya Cheng, et al.

Adversarial attacks have proven successful at fooling DNNs, and gradient-based algorithms have become one of the mainstream approaches. Based on the linearity hypothesis <cit.>, under the ℓ_∞ constraint, applying the sign operation to the gradients is a natural choice for generating perturbations. However, this operation has a side effect: it biases the direction of the perturbations away from the real gradients. In other words, current methods contain a gap between the real gradients and the actual noises, which leads to biased and inefficient attacks. Therefore, in this paper, the bias is analyzed theoretically based on the Taylor expansion, and a correction, i.e., the Fast Gradient Non-sign Method (FGNM), is further proposed. Notably, FGNM is a general routine, which can seamlessly replace the conventional sign operation in gradient-based attacks at negligible extra computational cost. Extensive experiments demonstrate the effectiveness of our methods, which outperform sign-based attacks by 27.5% at most and 9.5% on average. Our anonymous code is publicly available: <https://git.io/mm-fgnm>.
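The gap between the sign-quantized perturbation and the real gradient can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of the non-sign idea, not the paper's exact algorithm: here the raw gradient is rescaled so its ℓ_2 magnitude matches that of the sign perturbation, which keeps the true gradient direction instead of quantizing every component to ±1. The function names `fgsm_step` and `fgnm_step` are hypothetical.

```python
import numpy as np

def fgsm_step(grad, eps):
    # Conventional sign-based perturbation (FGSM-style):
    # every component is quantized to +/- eps.
    return eps * np.sign(grad)

def fgnm_step(grad, eps):
    # Non-sign variant (sketch): rescale the raw gradient so its
    # L2 norm matches that of the sign perturbation, preserving
    # the true gradient direction instead of quantizing it.
    zeta = np.linalg.norm(np.sign(grad)) / (np.linalg.norm(grad) + 1e-12)
    return eps * zeta * grad

rng = np.random.default_rng(0)
g = rng.normal(size=100)  # stand-in for an input gradient

d_sign = fgsm_step(g, eps=8 / 255)
d_fgnm = fgnm_step(g, eps=8 / 255)

# Both perturbations have (almost) equal L2 magnitude ...
same_norm = np.allclose(np.linalg.norm(d_sign), np.linalg.norm(d_fgnm))
# ... but only the non-sign step keeps the exact gradient direction.
cos = d_fgnm @ g / (np.linalg.norm(d_fgnm) * np.linalg.norm(g))
print(same_norm, cos)
```

The cosine similarity between the non-sign perturbation and the gradient is 1 by construction, whereas the sign perturbation's direction deviates whenever gradient components differ in magnitude; this direction bias is what the paper analyzes via the Taylor expansion.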


