Quantifying Perceptual Distortion of Adversarial Examples

02/21/2019
by   Matt Jordan, et al.

Recent work has shown that additive threat models, which only permit the addition of bounded noise to the pixels of an image, are insufficient for fully capturing the space of imperceptible adversarial examples. For example, small rotations and spatial transformations can fool classifiers and remain imperceptible to humans, yet have a large additive distance from the original image. In this work, we leverage quantitative perceptual metrics such as LPIPS and SSIM to define a novel threat model for adversarial attacks. To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework that fuses different attack styles. We first prove that our framework produces images that are unattainable by any attack style in isolation. We then perform adversarial training using attacks generated by our framework to demonstrate that networks are only robust to the classes of adversarial perturbations they were trained against, and that combination attacks are stronger than any of their individual components. Finally, we experimentally demonstrate that our combined attacks induce far higher misclassification rates than individual attacks at the same level of perceptual distortion.
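The abstract does not include code, but the core idea of a perceptually constrained attack can be sketched. The snippet below is an illustrative PGD-style loop in PyTorch, not the authors' framework: it ascends the classification loss while rejecting steps whose perceptual distortion from the original image, measured by a caller-supplied metric such as LPIPS or 1 - SSIM, exceeds a budget. The names `perceptual_pgd`, `perceptual_distance`, and `eps_perc` are hypothetical placeholders.

```python
import torch

def perceptual_pgd(model, x, y, perceptual_distance, eps_perc=0.05,
                   step_size=0.01, n_steps=40):
    """Illustrative PGD-style attack under a perceptual budget (not the paper's code).

    perceptual_distance(a, b) should return a scalar tensor, e.g. LPIPS(a, b)
    or 1 - SSIM(a, b); eps_perc is the allowed perceptual distortion.
    """
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Take an ascent step on the classification loss and keep pixels valid.
        candidate = (x_adv + step_size * grad.sign()).clamp(0.0, 1.0).detach()

        # Accept the step only if it stays inside the perceptual ball
        # around the original image; otherwise stop (simplest policy).
        if perceptual_distance(candidate, x).item() <= eps_perc:
            x_adv = candidate
        else:
            x_adv = x_adv.detach()
            break
    return x_adv
```

In the same spirit as the combination attacks described above, the additive step in this sketch could be composed with other attack styles (for example a small rotation or spatial flow applied before the gradient step), with the perceptual check applied to the composed result.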


