Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks

12/15/2022
by Nikolaos Antoniou, et al.

Designing powerful adversarial attacks is of paramount importance for the evaluation of ℓ_p-bounded adversarial defenses. Projected Gradient Descent (PGD) is one of the most effective and conceptually simple algorithms for generating such adversaries. The search space of PGD is dictated by the steepest-ascent directions of an objective function. Despite the plethora of available objectives, no single choice is universally superior, and robustness overestimation may arise from an ill-suited objective selection. Driven by this observation, we postulate that combining different objectives through a simple loss-alternating scheme renders PGD more robust to design choices. We verify this assertion experimentally on a synthetic-data example and by evaluating our proposed method across 25 different ℓ_∞-robust models and 3 datasets. The performance improvement over the single-loss counterparts is consistent. On CIFAR-10, our strongest adversarial attack outperforms all white-box components of the AutoAttack (AA) ensemble, as well as the most powerful attacks in the literature, achieving state-of-the-art results within the computational budget of our study (T=100, no restarts).
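
To make the idea concrete, below is a minimal PyTorch sketch of ℓ_∞ PGD with a per-step loss alternation. The two objectives (cross-entropy and a Carlini-Wagner-style margin loss) and the simple modulo schedule are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def cw_margin_loss(logits, y):
    # Carlini-Wagner-style margin: best wrong-class logit minus true-class logit.
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    true_mask = F.one_hot(y, logits.size(1)).bool()
    best_wrong = logits.masked_fill(true_mask, float("-inf")).max(dim=1).values
    return (best_wrong - true_logit).mean()


def alternating_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=100):
    """L_inf PGD that switches objectives between iterations (illustrative sketch)."""
    objectives = [lambda logits, t: F.cross_entropy(logits, t), cw_margin_loss]
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for step in range(steps):
        x_adv.requires_grad_(True)
        loss = objectives[step % len(objectives)](model(x_adv), y)  # alternate loss
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # steepest-ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

Alternating the loss per iteration means a single run explores the ascent directions of both objectives, which is the intuition behind the robustness to objective choice claimed above.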

Related research

11/30/2020 · Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Advances in the development of adversarial attacks have been fundamental...

12/09/2021 · PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function
The security of Deep Learning classifiers is a critical field of study b...

04/19/2021 · LAFEAT: Piercing Through Adversarial Defenses with Latent Features
Deep convolutional neural networks are susceptible to adversarial attack...

11/15/2022 · MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack
Adversarial attacks can deceive neural networks by adding tiny perturbat...

12/30/2022 · Guidance Through Surrogate: Towards a Generic Diagnostic Attack
Adversarial training is an effective approach to make deep neural networ...

02/23/2021 · Automated Discovery of Adaptive Attacks on Adversarial Defenses
Reliable evaluation of adversarial defenses is a challenging task, curre...

04/19/2022 · Poisons that are learned faster are more effective
Imperceptible poisoning attacks on entire datasets have recently been to...
