Stronger and Faster Wasserstein Adversarial Attacks

08/06/2020
by Kaiwen Wu, et al.

Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to "small, imperceptible" perturbations known as adversarial attacks. While the majority of existing attacks focus on measuring perturbations under the ℓ_p metric, Wasserstein distance, which takes geometry in pixel space into account, has long been known to be a suitable metric for measuring image quality and has recently risen as a compelling alternative to the ℓ_p metric in adversarial attacks. However, constructing an effective attack under the Wasserstein metric is computationally much more challenging and calls for better optimization algorithms. We address this gap in two ways: (a) we develop an exact yet efficient projection operator to enable a stronger projected gradient attack; (b) we show that the Frank-Wolfe method equipped with a suitable linear minimization oracle works extremely fast under Wasserstein constraints. Our algorithms not only converge faster but also generate much stronger attacks. For instance, we decrease the accuracy of a residual network on CIFAR-10 to 3.4% within a Wasserstein perturbation ball of radius 0.005, in contrast to 65.6% using the previous Wasserstein attack based on an approximate projection operator. Furthermore, employing our stronger attacks in adversarial training significantly improves the robustness of adversarially trained models.
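
To make the second algorithmic idea concrete, the sketch below shows a generic Frank-Wolfe attack loop of the kind the abstract refers to: at each step, a linear minimization oracle (LMO) is solved over the feasible set and the iterate moves toward its solution by a convex combination, so feasibility is preserved without any projection. The function name, the grad_loss and lmo callables, the step-size schedule, and the toy l_infinity oracle in the usage example are illustrative assumptions; the paper's actual LMO exploits the structure of the Wasserstein ball and is not reproduced here.

    import numpy as np

    def frank_wolfe_attack(x_clean, grad_loss, lmo, num_steps=50):
        # Generic Frank-Wolfe loop for generating a constrained adversarial
        # example. grad_loss(x) returns the gradient of the attack loss
        # w.r.t. the image x; lmo(g) solves max over s in the feasible set
        # of <s, g>. Both callables are placeholders, not the paper's code.
        x = x_clean.astype(float).copy()
        for t in range(num_steps):
            g = grad_loss(x)                   # ascent direction on the attack loss
            s = lmo(g)                         # extreme point of the constraint set
            gamma = 2.0 / (t + 2.0)            # classic Frank-Wolfe step-size schedule
            x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
        return x

    # Toy usage with a constant gradient and an l_infinity-box oracle,
    # only to show the call pattern (not a Wasserstein oracle).
    x0 = np.zeros(4)
    w = np.array([1.0, -2.0, 0.5, 3.0])
    adv = frank_wolfe_attack(
        x0,
        grad_loss=lambda x: w,                # pretend loss gradient
        lmo=lambda g: x0 + 0.1 * np.sign(g),  # maximizing vertex of an l_inf box
    )

For the projected-gradient variant mentioned in the abstract, the same loop would instead take a gradient step and then apply an exact projection back onto the Wasserstein ball, which is the operator the paper develops.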

Related research

10/23/2019 · Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
In the last couple of years, several adversarial attack methods based on...

02/21/2019 · Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
A rapidly growing area of work has studied the existence of adversarial ...

04/26/2020 · Improved Image Wasserstein Attacks and Defenses
Robustness against image perturbations bounded by a ℓ_p ball have been w...

03/22/2023 · Wasserstein Adversarial Examples on Univariant Time Series Data
Adversarial examples are crafted by adding indistinguishable perturbatio...

10/13/2021 · A Framework for Verification of Wasserstein Adversarial Robustness
Machine learning image classifiers are susceptible to adversarial and co...

06/16/2023 · Wasserstein distributional robustness of neural networks
Deep neural networks are known to be vulnerable to adversarial attacks (...
