Should Adversarial Attacks Use Pixel p-Norm?

06/06/2019
by Ayon Sen, et al.

Adversarial attacks aim to confound machine learning systems while remaining virtually imperceptible to humans. Attacks on image classification systems are typically gauged in terms of p-norm distortions in the pixel feature space. We perform a behavioral study demonstrating that the pixel p-norm for any 0 < p < ∞, and several alternative measures including earth mover's distance, structural similarity index, and deep net embedding, do not fit human perception. Our result has the potential to improve the understanding of adversarial attack and defense strategies.
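The abstract refers to the pixel-space p-norm distortion that is commonly used to measure how large an adversarial perturbation is. As a minimal illustration only (not code from the paper), the sketch below computes that distortion between an image and a perturbed copy; the helper name pixel_p_norm, the image size, and the [0, 1] value range are assumptions made for the example.

```python
import numpy as np

def pixel_p_norm(x, x_adv, p):
    """Pixel-space p-norm distortion between an image and its adversarial version.

    x, x_adv: arrays of identical shape with values in [0, 1].
    p: any p > 0; p = np.inf returns the maximum absolute pixel change.
    """
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(x_adv, dtype=float)).ravel()
    if np.isinf(p):
        return diff.max()
    return (diff ** p).sum() ** (1.0 / p)

# Toy example: a random "image" and a slightly perturbed copy.
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))
x_adv = np.clip(x + rng.normal(scale=0.01, size=x.shape), 0.0, 1.0)
for p in (1, 2, np.inf):
    print(f"L{p} distortion: {pixel_p_norm(x, x_adv, p):.4f}")
```

The paper's point is that no single choice of p in this measure (nor the listed alternatives) reliably tracks whether humans actually perceive the perturbation.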

Related research

04/10/2023  Generating Adversarial Attacks in the Latent Space
04/29/2022  Adversarial attacks on an optical neural network
02/19/2021  Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks
08/08/2018  Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
04/26/2020  Towards Feature Space Adversarial Attack
12/21/2020  Blurring Fools the Network – Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring
11/21/2020  Spatially Correlated Patterns in Adversarial Images
