Wasserstein Adversarial Examples via Projected Sinkhorn Iterations

02/21/2019
by Eric Wong, et al.

A rapidly growing area of work has studied the existence of adversarial examples, datapoints which have been perturbed to fool a classifier, but the vast majority of these works have focused primarily on threat models defined by ℓ_p norm-bounded perturbations. In this paper, we propose a new threat model for adversarial attacks based on the Wasserstein distance. In the image classification setting, such distances measure the cost of moving pixel mass, which naturally covers "standard" image manipulations such as scaling, rotation, translation, and distortion (and can potentially be applied to other settings as well). To generate Wasserstein adversarial examples, we develop a procedure for projecting onto the Wasserstein ball, based upon a modified version of the Sinkhorn iteration. The resulting algorithm can successfully attack image classification models, bringing traditional CIFAR10 models down to 3% adversarial accuracy within a Wasserstein ball of radius 0.1 (i.e., moving 10% of the image mass by 1 pixel), and we demonstrate that PGD-based adversarial training can improve this adversarial accuracy to 76%. This work opens up a new direction of study in adversarial robustness, more formally considering convex metrics that accurately capture the invariances that we typically believe should exist in classifiers. Code for all experiments in the paper is available at https://github.com/locuslab/projected_sinkhorn.
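The projection procedure in the paper builds on the classical Sinkhorn iteration for entropy-regularized optimal transport, which alternately rescales the rows and columns of a Gibbs kernel until the transport plan matches both marginals. The sketch below shows that standard iteration on two small histograms; it is a minimal illustration, not the paper's modified projection onto the Wasserstein ball (which adds extra constraints), and the function name, histograms, and cost matrix are all illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Standard Sinkhorn iteration for entropy-regularized optimal
    transport between histograms a and b with cost matrix C.
    (The paper's projected Sinkhorn modifies this scheme to project
    onto a Wasserstein ball; this is only the classical iteration.)"""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns to match b
        u = a / (K @ v)                  # scale rows to match a
    P = u[:, None] * K * v[None, :]      # approximate transport plan
    return P, float(np.sum(P * C))      # plan and its transport cost

# Toy example: move mass between two 3-bin histograms with
# |i - j| ground cost (illustrative data, not from the paper).
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P, cost = sinkhorn(a, b, C)
```

After convergence the plan's row sums reproduce `a` and its column sums reproduce `b`, and `cost` approximates the Wasserstein distance between the histograms (blurred slightly by the entropy term `reg`).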


Related research

- Improved Image Wasserstein Attacks and Defenses (04/26/2020)
- Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks (10/23/2019)
- Stronger and Faster Wasserstein Adversarial Attacks (08/06/2020)
- A Framework for Verification of Wasserstein Adversarial Robustness (10/13/2021)
- Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models (08/05/2018)
- Wasserstein distributional robustness of neural networks (06/16/2023)
- Toward Adversarial Robustness via Semi-supervised Robust Training (03/16/2020)
