On the Suitability of L_p-norms for Creating and Preventing Adversarial Examples

02/27/2018
by Mahmood Sharif, et al.

Much research effort has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against them typically use L_p-norms to measure the perceptual similarity between an adversarial input and its benign original. Prior work has already shown, however, that two images need not be close to each other as measured by an L_p-norm to be perceptually similar. In this work, we show that nearness according to an L_p-norm is not just unnecessary for perceptual similarity, but is also insufficient. Specifically, focusing on datasets (CIFAR10 and MNIST), L_p-norms, and thresholds used in prior work, we show through 299-participant online user studies that "adversarial examples" that are closer to their benign counterparts than required by commonly used L_p-norm thresholds can nevertheless be perceived by humans as different from the corresponding benign examples. Namely, the perceptual distance between two images that are "near" each other according to an L_p-norm can be high enough that participants frequently classify the two images as representing different objects or digits. Combined with prior work, we thus demonstrate that nearness of inputs as measured by L_p-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.
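The L_p measurements at issue are simple to compute. Below is a minimal sketch of the kind of threshold check that attacks and defenses rely on (in Python with NumPy; the function name, the random 28x28 inputs, and the choice of eps are illustrative rather than taken from the paper, though an L_inf budget of 0.3 is among the thresholds commonly used for MNIST in prior work):

    import numpy as np

    def lp_distance(x, x_adv, p):
        # Flatten both images and measure the perturbation
        # delta = x_adv - x under the L_p norm.
        delta = (x_adv - x).ravel()
        if np.isinf(p):
            return np.abs(delta).max()  # L_inf: largest per-pixel change
        return (np.abs(delta) ** p).sum() ** (1.0 / p)

    # Illustrative data: a random 28x28 "MNIST-like" image perturbed
    # within an L_inf ball of radius 0.3.
    rng = np.random.default_rng(0)
    x = rng.random((28, 28))
    x_adv = np.clip(x + rng.uniform(-0.3, 0.3, size=x.shape), 0.0, 1.0)

    eps = 0.3
    print(lp_distance(x, x_adv, np.inf) <= eps)  # True: "near" under L_inf
    print(lp_distance(x, x_adv, 2))              # L_2 distance of the same pair

The paper's point is that passing such a check does not guarantee that humans perceive the two images as similar, and, per the prior work the abstract cites, failing it does not guarantee that they perceive them as different.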


Related research

Perceptually Constrained Adversarial Attacks (02/14/2021)
Motivated by previous observations that the usually applied L_p norms (p...

Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification (06/01/2019)
Deep neural networks (DNNs) have recently achieved state-of-the-art perf...

On the human evaluation of audio adversarial examples (01/23/2020)
Human-machine interaction is increasingly dependent on speech communicat...

Semantic Adversarial Perturbations using Learnt Representations (01/29/2020)
Adversarial examples for image classifiers are typically created by sear...

Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples (09/08/2018)
Adversarial examples are perturbed inputs designed to fool machine learn...

The Human Visual System and Adversarial AI (01/05/2020)
This paper introduces existing research about the Human Visual System in...

Perception-in-the-Loop Adversarial Examples (01/21/2019)
We present a scalable, black box, perception-in-the-loop technique to fi...
