PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack

06/03/2021
by Alexander Matyasko, et al.

State-of-the-art deep neural networks are sensitive to small input perturbations. Since the discovery of this intriguing vulnerability, many defence methods have been proposed that attempt to improve robustness to adversarial noise. Fast and accurate attacks are required to compare these defence methods. However, evaluating adversarial robustness has proven to be extremely challenging. Existing norm-minimisation adversarial attacks require thousands of iterations (e.g. the Carlini-Wagner attack), are limited to specific norms (e.g. the Fast Adaptive Boundary attack), or produce sub-optimal results (e.g. the Brendel-Bethge attack). The PGD attack, on the other hand, is fast, general and accurate, but it ignores the norm-minimisation penalty and solves a simpler perturbation-constrained problem. In this work, we introduce a fast, general and accurate adversarial attack that optimises the original non-convex constrained minimisation problem. We interpret optimising the Lagrangian of the adversarial attack optimisation problem as a two-player game: the first player minimises the Lagrangian with respect to the adversarial noise; the second player maximises the Lagrangian with respect to the regularisation penalty. Our attack algorithm simultaneously optimises primal and dual variables to find the minimal adversarial perturbation. In addition, for non-smooth l_p-norm minimisation, such as the l_∞-, l_1-, and l_0-norms, we introduce the primal-dual proximal gradient descent attack. Our experiments show that the attack outperforms current state-of-the-art l_∞-, l_2-, l_1- and l_0-attacks on the MNIST, CIFAR-10 and Restricted ImageNet datasets against both unregularised and adversarially trained models.
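
The two-player view described above lends itself to a short sketch. The PyTorch snippet below is a minimal illustration for the l_1 case and is not the authors' exact PDPGD algorithm: the primal player takes a gradient step on a margin loss followed by a proximal (soft-thresholding) step for the non-smooth l_1 penalty, while the dual player adjusts the penalty weight depending on whether the current perturbation is already adversarial. All function names, step sizes and the multiplicative dual update are illustrative assumptions.

    # Minimal sketch of a primal-dual proximal gradient attack for l_1-norm
    # minimisation. Illustrative reconstruction only, not the authors' exact
    # PDPGD algorithm; names and step sizes are hypothetical.
    import torch


    def soft_threshold(x, tau):
        # Proximal operator of tau * ||.||_1 (element-wise soft-thresholding).
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)


    def primal_dual_l1_attack(model, x, y, steps=200, primal_lr=0.01, dual_lr=0.1):
        # x: batch of inputs in [0, 1]; y: true labels.
        delta = torch.zeros_like(x, requires_grad=True)
        lam = 0.1  # dual variable: weight of the l_1 penalty

        for _ in range(steps):
            logits = model(x + delta)
            true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
            other_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).amax(dim=1)
            # Margin loss: positive while the model still predicts the true class.
            loss = (true_logit - other_logit).clamp(min=0.0).sum()

            (grad,) = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                # Primal step: gradient descent on the smooth margin loss,
                # then the proximal step for the non-smooth lam * ||delta||_1 term.
                delta -= primal_lr * grad
                delta.copy_(soft_threshold(delta, primal_lr * lam))
                delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep x + delta in [0, 1]

                # Dual step (heuristic): raise the penalty while the perturbation
                # already fools the model, lower it otherwise.
                adversarial = model(x + delta).argmax(dim=1).ne(y).all().item()
                lam *= (1.0 + dual_lr) if adversarial else 1.0 / (1.0 + dual_lr)

        return delta.detach()

Under these assumptions the sketch would be called as delta = primal_dual_l1_attack(model, images, labels). The paper also treats l_∞- and l_0-norm minimisation, for which the corresponding proximal operators would replace the soft-thresholding step.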

Related research

03/25/2019  The LogBarrier adversarial attack: making effective use of decision boundary information
11/23/2018  Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
11/05/2020  Sampled Nonlocal Gradients for Stronger Adversarial Attacks
06/14/2022  Proximal Splitting Adversarial Attacks for Semantic Segmentation
11/10/2021  Sparse Adversarial Video Attacks with Spatial Transformations
01/31/2023  Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond
07/03/2019  Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
