Image classifiers cannot be made robust to small perturbations

12/07/2021
by Zheng Dai, et al.

The sensitivity of image classifiers to small perturbations in the input is often viewed as a defect of their construction. We demonstrate that this sensitivity is a fundamental property of classifiers. For any classifier over the set of n-by-n images, we show that for all but one class it is possible to change the classification of all but a tiny fraction of the images in that class with a modification that is tiny compared to the diameter of the image space, measured in any p-norm, including the Hamming distance. We then examine how this phenomenon manifests in human visual perception and discuss its implications for the design of computer vision systems.
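
The "all but one class" claim can be made concrete with a standard concentration-of-measure argument. The following is a minimal sketch, assuming binary n-by-n images under the Hamming distance; it illustrates why a result of this shape holds in that simplified setting, and is not the paper's actual proof.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch under simplifying assumptions: binary images, Hamming metric.
Work on the Hamming cube $\{0,1\}^N$ with $N = n^2$, uniform measure
$\mu$, and Hamming distance $d_H$; the diameter of the space is $N$.
For any $A \subseteq \{0,1\}^N$ with $\mu(A) \ge \tfrac12$, Harper's
isoperimetric theorem combined with Hoeffding's inequality bounds the
expansion $A_t = \{x : d_H(x, A) \le t\}$:
\[
  \mu(A_t) \;\ge\; 1 - e^{-2t^2/N}.
\]
Class measures sum to $1$, so at most one class can have measure greater
than $\tfrac12$. For every other class $C$, the complement satisfies
$\mu(C^{\mathrm{c}}) \ge \tfrac12$, and choosing
$t = \sqrt{\tfrac{N}{2}\ln\tfrac{1}{\delta}}$ leaves at most a
$\delta$-fraction of $C$ farther than $t$ from a differently classified
image. Since $t/N = \sqrt{\ln(1/\delta)/(2N)} \to 0$ as $n$ grows, the
modification needed to flip the label is vanishingly small relative to
the diameter of the image space.
\end{document}

The same qualitative conclusion carries over to any p-norm in this binary setting: flipping t unit-range pixels gives a perturbation of p-norm t^(1/p) against a diameter of N^(1/p), and the ratio (t/N)^(1/p) likewise vanishes as n grows.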


Related research:

08/14/2023 - Robustified ANNs Reveal Wormholes Between Human Category Percepts
The visual object category reports of artificial neural networks (ANNs) ...

10/03/2019 - Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
Deep neural network image classifiers are reported to be susceptible to ...

02/11/2020 - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
Adversarial examples are malicious inputs crafted to induce misclassific...

08/28/2020 - Color and Edge-Aware Adversarial Image Perturbations
Adversarial perturbation of images, in which a source image is deliberat...

08/20/2020 - β-Variational Classifiers Under Attack
Deep neural networks have gained lots of attention in recent years thank...

12/21/2014 - Principal Sensitivity Analysis
We present a novel algorithm (Principal Sensitivity Analysis; PSA) to an...

01/27/2019 - An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
We present a simple hypothesis about a compression property of artificia...
