Quantifying Perceptual Distortion of Adversarial Examples

by Matt Jordan, et al.

Recent work has shown that additive threat models, which only permit the addition of bounded noise to the pixels of an image, are insufficient for fully capturing the space of imperceptible adversarial examples. For example, small rotations and spatial transformations can fool classifiers and remain imperceptible to humans, yet lie at a large additive distance from the original images. In this work, we leverage quantitative perceptual metrics such as LPIPS and SSIM to define a novel threat model for adversarial attacks. To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework that fuses different attack styles. We first prove that our framework produces images that are unattainable by any attack style in isolation. We then perform adversarial training using attacks generated by our framework to demonstrate that networks are robust only to the classes of adversarial perturbations they were trained against, and that combination attacks are stronger than any of their individual components. Finally, we experimentally demonstrate that our combined attacks retain the same perceptual distortion while inducing far higher misclassification rates than individual attacks.
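To make the idea of a perceptually bounded threat model concrete, the sketch below scores a candidate adversarial example with a simplified *global* SSIM (the paper's threat model uses full windowed SSIM and LPIPS; the function names, the toy images, and the distortion budget here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified global SSIM over whole images (no sliding window)."""
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

def within_threat_model(original, perturbed, budget=0.05):
    """Admit a candidate adversarial example only if its perceptual
    distortion (1 - SSIM) stays under the budget -- a stand-in for an
    SSIM/LPIPS-bounded threat model."""
    return (1.0 - ssim_global(original, perturbed)) <= budget

# Toy grayscale image plus a small additive perturbation.
rng = np.random.default_rng(0)
x = rng.random((64, 64))
x_adv = np.clip(x + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(within_threat_model(x, x_adv))
```

An attack under this threat model would search for a misclassified `x_adv` subject to the `within_threat_model` constraint, rather than an L2 or L-infinity ball, which is what lets spatial transformations with large additive distance remain admissible.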






