Accelerating Adversarial Perturbation by 50% Propagation

11/09/2022
by Zhiqi Bu, et al.

Adversarial perturbation plays a significant role in the field of adversarial robustness: it solves a maximization problem over the input data. We show that the backward propagation in this optimization can be accelerated 2× (and thus the overall optimization, including the forward propagation, by 1.5×) without any utility drop, by computing only the output gradient and not the parameter gradient during the backward propagation. Intuitively, a full backward pass computes two gradients of roughly equal cost, the output gradient and the parameter gradient; skipping the latter halves the backward pass, and since the forward pass costs about half of a full backward pass, the total cost drops from three units to two.
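A minimal NumPy sketch of the idea (not the authors' implementation): for an FGSM-style perturbation of a linear softmax classifier, only the gradient with respect to the input x is needed, so the parameter gradient dL/dW, the expensive half of a standard backward pass, is never formed.

```python
import numpy as np

def fgsm_perturb(x, y, W, b, eps):
    """FGSM-style adversarial step for a linear softmax classifier.

    Computes only dL/dx (the output gradient); the parameter
    gradient dL/dW is skipped entirely.
    """
    logits = x @ W + b                                   # forward pass: (n, c)
    z = logits - logits.max(axis=1, keepdims=True)       # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    onehot = np.eye(W.shape[1])[y]
    dlogits = (p - onehot) / len(x)                      # dL/dlogits for cross-entropy
    dx = dlogits @ W.T                                   # output gradient only: dL/dx
    return x + eps * np.sign(dx)                         # step that increases the loss

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                              # 4 inputs, 8 features
W = rng.normal(size=(8, 3))                              # 3 classes
b = np.zeros(3)
y = np.array([0, 1, 2, 0])
x_adv = fgsm_perturb(x, y, W, b, eps=0.1)
```

In a deep network, the same saving comes from freezing the parameters so the backward pass propagates only activation gradients; the function names and sizes above are illustrative only.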


