Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks

01/27/2023
by Wei Tao, et al.

Learning adversarial examples can be formulated as an optimization problem of maximizing the loss function under box constraints. However, the state-of-the-art gradient-based methods for solving this problem, such as FGSM, I-FGSM and MI-FGSM, differ from the original gradient methods, especially in the update direction, which makes them difficult to understand and leaves several theoretical issues open from the viewpoint of optimization. In this paper, we provide a unified theoretical interpretation of these gradient-based adversarial learning methods from the perspective of adapting the step size. We show that each of these algorithms is in fact a specific reformulation of its original gradient method, using a step-size rule that relies only on the current gradient. Motivated by this analysis, we present a broad class of adaptive gradient-based algorithms built on the regular gradient methods, into which a step-size strategy exploiting the accumulated gradients is integrated. Such adaptive step-size strategies directly normalize the scale of the gradients rather than resorting to empirical operations. The key benefit is that convergence of the iterative algorithms is guaranteed, which stabilizes the whole optimization process. Experiments demonstrate that our AdaI-FGM consistently outperforms I-FGSM, and AdaMI-FGM remains competitive with MI-FGSM on black-box attacks.
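To make the contrast concrete, below is a minimal PyTorch sketch comparing the standard I-FGSM update, whose sign operation discards the gradient magnitude, with a hypothetical AdaGrad-style variant in the spirit of AdaI-FGM, where the step is normalized by the root of the accumulated squared gradients. The function names, hyperparameters, and the exact accumulation rule here are illustrative assumptions, not the paper's precise formulation.

```python
import torch

def i_fgsm(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """I-FGSM: iterative sign-gradient ascent on the loss, projected onto an eps-ball."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # The sign operation rescales the gradient empirically, ignoring its magnitude.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the box constraint
        x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv

def ada_i_fgm_sketch(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10, delta=1e-8):
    """Hypothetical adaptive-step-size variant (AdaGrad-style): the gradient scale is
    normalized by accumulated gradient information instead of by sign()."""
    x = x.detach()
    x_adv = x.clone()
    accum = torch.zeros_like(x)                    # accumulated squared gradients
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        accum = accum + grad ** 2
        # Adaptive step size: divide by the root of the accumulated squared gradients.
        x_adv = x_adv.detach() + alpha * grad / (accum.sqrt() + delta)
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the box constraint
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

The sketch keeps both attacks inside the same eps-ball projection; only the per-step scaling of the gradient differs, which is the step-size viewpoint the abstract describes.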


