Quantifying Perceptual Distortion of Adversarial Examples

02/21/2019
by Matt Jordan, et al.

Recent work has shown that additive threat models, which only permit the addition of bounded noise to the pixels of an image, are insufficient for fully capturing the space of imperceptible adversarial examples. For example, small rotations and spatial transformations can fool classifiers and remain imperceptible to humans, yet have large additive distance from the original images. In this work, we leverage quantitative perceptual metrics such as LPIPS and SSIM to define a novel threat model for adversarial attacks. To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework that fuses different attack styles. We first prove that our framework yields images that are unattainable by any attack style in isolation. We then perform adversarial training using attacks generated by our framework to demonstrate that networks are robust only to the classes of adversarial perturbations they have been trained against, and that combination attacks are stronger than any of their individual components. Finally, we experimentally demonstrate that our combined attacks retain the same perceptual distortion but induce far higher misclassification rates than individual attacks.

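To make the kind of combined, perceptually constrained attack described above concrete, the sketch below jointly optimizes an additive perturbation and a rotation angle with signed-gradient steps, and only accepts iterates whose LPIPS distance from the clean image stays within a perceptual budget. This is an illustrative sketch of how such a fused attack could look, not the authors' implementation: the `lpips` package as the metric backend, the `rotate` and `combined_attack` helpers, and every hyperparameter (`eps`, `lr_delta`, `lr_angle`, `lpips_budget`) are hypothetical choices.

```python
# Illustrative sketch (not the paper's code): a combined additive + rotation
# attack whose iterates are kept only while their LPIPS distance from the
# clean image stays under a perceptual budget. All names and hyperparameters
# here are hypothetical.
import torch
import torch.nn.functional as F
import lpips  # pip install lpips


def rotate(x, angle):
    """Differentiably rotate a batch of images by `angle` radians."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    zeros = torch.zeros_like(angle)
    theta = torch.stack([
        torch.stack([cos, -sin, zeros], dim=-1),
        torch.stack([sin, cos, zeros], dim=-1),
    ], dim=-2)                                    # (B, 2, 3) affine matrices
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


def combined_attack(model, x, y, steps=40, eps=8 / 255,
                    lr_delta=2 / 255, lr_angle=0.01, lpips_budget=0.05):
    """Jointly optimize an additive delta and a rotation angle."""
    percept = lpips.LPIPS(net='alex').to(x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    angle = torch.zeros(x.size(0), device=x.device, requires_grad=True)
    x_adv = x

    for _ in range(steps):
        candidate = (rotate(x, angle) + delta).clamp(0, 1)
        loss = F.cross_entropy(model(candidate), y)
        g_delta, g_angle = torch.autograd.grad(loss, (delta, angle))

        with torch.no_grad():
            # Signed-gradient ascent on both perturbation parameters.
            delta.add_(lr_delta * g_delta.sign()).clamp_(-eps, eps)
            angle.add_(lr_angle * g_angle.sign())

            # Accept the iterate only if it stays inside the perceptual
            # budget; LPIPS expects inputs scaled to [-1, 1].
            candidate = (rotate(x, angle) + delta).clamp(0, 1)
            dist = percept(2 * candidate - 1, 2 * x - 1).view(-1)
            if (dist <= lpips_budget).all():
                x_adv = candidate

    return x_adv.detach()
```

In an adversarial-training setup like the one the abstract describes, a call such as `x_adv = combined_attack(model, images, labels)` would replace the clean batch before the usual cross-entropy update.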
