Mind the box: l_1-APGD for sparse adversarial attacks on image classifiers

03/01/2021
by   Francesco Croce, et al.

We show that established l_1-projected gradient descent (PGD) attacks are suboptimal once the image domain [0,1]^d is taken into account: the effective threat model is the intersection of the l_1-ball with [0,1]^d, which these attacks ignore. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. The resulting l_1-APGD is a strong white-box attack, showing that prior works overestimated their l_1-robustness. Using l_1-APGD for adversarial training, we obtain a robust classifier with state-of-the-art l_1-robustness. Finally, we combine l_1-APGD and an adaptation of the Square Attack to l_1 into l_1-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the l_1-ball intersected with [0,1]^d.
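The exact projection the abstract refers to admits a simple construction: projecting a candidate perturbation onto {delta : ||delta||_1 <= eps, 0 <= x + delta <= 1} reduces, per coordinate, to soft-thresholding followed by box clipping, with the threshold found by bisection on the l_1 Lagrange multiplier. The sketch below is an illustrative implementation under these assumptions (the function name and signature are hypothetical, not the paper's code):

```python
import numpy as np

def project_l1_box(y, x, eps, n_iter=50):
    """Project a perturbation y onto {delta : ||delta||_1 <= eps,
    0 <= x + delta <= 1} by bisection on the l1 multiplier lam.
    Hypothetical sketch; not the authors' implementation."""
    lo = -x        # lower box bound on delta, so that x + delta >= 0
    hi = 1.0 - x   # upper box bound on delta, so that x + delta <= 1

    def delta_of(lam):
        # Per-coordinate minimizer of (d - y_i)^2 / 2 + lam * |d|
        # over [lo_i, hi_i]: soft-threshold, then clip to the box.
        st = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
        return np.clip(st, lo, hi)

    d0 = delta_of(0.0)
    if np.abs(d0).sum() <= eps:
        return d0  # box-clipped y already lies inside the l1-ball
    # ||delta(lam)||_1 is nonincreasing in lam: bisect until it hits eps.
    lam_lo, lam_hi = 0.0, np.abs(y).max()
    for _ in range(n_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        if np.abs(delta_of(lam)).sum() > eps:
            lam_lo = lam
        else:
            lam_hi = lam
    return delta_of(lam_hi)  # feasible endpoint of the bracket
```

Clipping the soft-thresholded value is valid here because each coordinate's objective is convex in one dimension, so its box-constrained minimizer is the clipped unconstrained minimizer; the bisection then enforces the l_1 budget exactly.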


Related research

09/11/2019 · Sparse and Imperceivable Adversarial Attacks
Neural networks have been proven to be vulnerable to a variety of advers...

05/29/2019 · Functional Adversarial Attacks
We propose functional adversarial attacks, a novel class of threat model...

07/16/2022 · CARBEN: Composite Adversarial Robustness Benchmark
Prior literature on adversarial attack methods has mainly focused on att...

09/23/2021 · Adversarial Transfer Attacks With Unknown Data and Class Overlap
The ability to transfer adversarial attacks from one model (the surrogat...

06/16/2023 · Wasserstein distributional robustness of neural networks
Deep neural networks are known to be vulnerable to adversarial attacks (...

06/12/2023 · How robust accuracy suffers from certified training with convex relaxations
Adversarial attacks pose significant threats to deploying state-of-the-a...

02/01/2019 · The Efficacy of SHIELD under Different Threat Models
We study the efficacy of SHIELD in the face of alternative threat models...
