Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

02/25/2021
by Maura Pintor, et al.

Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally demanding iterations, even when specialized to a given perturbation model. In this work, we overcome these limitations by proposing a fast minimum-norm (FMN) attack that works with different ℓ_p-norm perturbation models (p=0, 1, 2, ∞), is robust to hyperparameter choices, does not require adversarial starting points, and converges within a few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an ℓ_p-norm constraint of size ϵ, while adapting ϵ to minimize the distance of the current sample to the decision boundary. Extensive experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and computation time, while reporting comparable or even smaller perturbation sizes.

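To make the adaptive-ϵ scheme concrete, below is a minimal PyTorch sketch of an FMN-style loop, specialized to the ℓ2 case. It is a sketch under stated assumptions, not the authors' released implementation: the function name fmn_l2, the step-size and ϵ-decay schedules, and all constants are hypothetical, and inputs are assumed to be image batches with pixel values in [0, 1].

```python
# Minimal sketch of an FMN-style minimum-norm attack for the l2 case.
# Illustrative simplification: the epsilon-update rule and all constants
# are assumptions, not the paper's exact Algorithm.
import torch


def fmn_l2(model, x, y, steps=100, alpha=1.0, gamma=0.05):
    """Search for a small l2 perturbation that misclassifies x (labels y)."""
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.full((x.shape[0],), float("inf"), device=x.device)
    best = torch.full_like(x, float("inf"))  # smallest adversarial delta so far

    for t in range(steps):
        logits = model(x + delta)
        # Logit margin: positive while the sample is still classified correctly.
        true_logit = logits.gather(1, y[:, None]).squeeze(1)
        other_best = logits.scatter(1, y[:, None], float("-inf")).amax(1)
        margin = true_logit - other_best
        (grad,) = torch.autograd.grad(margin.sum(), delta)

        with torch.no_grad():
            is_adv = margin < 0
            norms = delta.flatten(1).norm(dim=1)
            # Keep the smallest adversarial perturbation found so far.
            better = is_adv & (norms < best.flatten(1).norm(dim=1))
            best[better] = delta[better]
            # Adapt the norm budget eps: shrink it where the point is already
            # adversarial; otherwise grow it toward a linear estimate of the
            # distance to the decision boundary (margin / ||grad||).
            grad_norms = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            decay = gamma * (1 - t / steps)
            eps = torch.where(is_adv,
                              torch.minimum(eps, norms) * (1 - decay),
                              norms + margin.clamp_min(0) / grad_norms)
            # Gradient step that increases misclassification confidence,
            # with a linearly decaying step size.
            g = grad / grad_norms[:, None, None, None]
            delta -= alpha * (1 - t / steps) * g
            # Project back onto the l2 ball of radius eps and keep
            # x + delta inside the valid pixel range [0, 1].
            d_norms = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta *= (eps / d_norms).clamp(max=1.0)[:, None, None, None]
            delta.clamp_(min=-x, max=1 - x)

    # Entries remain inf for samples where no adversarial point was found.
    return best
```

The key design choice mirrors the abstract: the gradient step only seeks a higher-confidence misclassification inside the current ℓ_p ball, while the ϵ update (shrinking when the point is adversarial, extrapolating toward the boundary when it is not) is what removes the need for careful tuning or adversarial starting points.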