Robustified ANNs Reveal Wormholes Between Human Category Percepts

08/14/2023
by Guy Gaziv, et al.

The visual object category reports of artificial neural networks (ANNs) are notoriously sensitive to tiny, adversarial image perturbations. Because human category reports (aka human percepts) are thought to be insensitive to those same small-norm perturbations – and locally stable in general – this argues that ANNs are incomplete scientific models of human visual perception. Consistent with this, we show that when small-norm image perturbations are generated by standard ANN models, human object category percepts are indeed highly stable. However, in this very same "human-presumed-stable" regime, we find that robustified ANNs reliably discover low-norm image perturbations that strongly disrupt human percepts. These previously undetectable human perceptual disruptions are massive in amplitude, approaching the same level of sensitivity seen in robustified ANNs. Further, we show that robustified ANNs support precise perceptual state interventions: they guide the construction of low-norm image perturbations that strongly alter human category percepts toward specific prescribed percepts. These observations suggest that for arbitrary starting points in image space, there exists a set of nearby "wormholes", each leading the subject from their current category perceptual state into a semantically very different state. Moreover, contemporary ANN models of biological visual processing are now accurate enough to consistently guide us to those portals.
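The low-norm perturbation search the abstract describes is commonly implemented as projected gradient descent (PGD) under a norm budget: follow the gradient of a target-class loss with respect to the input, then project the perturbation back onto a small L2 ball. The sketch below is a minimal illustrative toy on a linear classifier; the model, budget, and all names here are assumptions, not the paper's actual robustified-ANN procedure.

```python
import numpy as np

def pgd_l2(W, x, target, eps=0.5, steps=50, lr=0.05):
    """Search for a perturbation delta with ||delta||_2 <= eps that pushes
    the toy classifier's prediction for x toward class `target`."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        logits = W @ (x + delta)
        # Softmax probabilities (stabilized by subtracting the max logit).
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # Gradient of cross-entropy loss w.r.t. the logits for `target` ...
        grad_logits = p.copy()
        grad_logits[target] -= 1.0
        # ... chained back through the linear map to the input.
        grad_x = W.T @ grad_logits
        delta -= lr * grad_x
        # Project back onto the L2 ball of radius eps.
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta *= eps / norm
    return delta

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # toy 3-class linear "classifier"
x = rng.normal(size=8)        # toy input
delta = pgd_l2(W, x, target=2)
```

The same loop structure applies when `W @ x` is replaced by a deep network's forward pass and the gradient is obtained by backpropagation; the paper's point is that which perturbations this search finds depends strongly on whether the guiding model has been adversarially robustified.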



Related research

10/28/2019 — PerceptNet: A Human Visual System Inspired Neural Network for Estimating Perceptual Distance
Traditionally, the vision community has devised algorithms to estimate t...

12/07/2021 — Image classifiers can not be made robust to small perturbations
The sensitivity of image classifiers to small perturbations in the input...

11/06/2019 — Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
The success of image perturbations that are designed to fool image class...

01/31/2021 — Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss
Recently, there has been a large amount of work towards fooling deep-lea...

10/16/2022 — Perceptual-Score: A Psychophysical Measure for Assessing the Biological Plausibility of Visual Recognition Models
For the last decade, convolutional neural networks (CNNs) have vastly su...

09/02/2020 — Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation
Adversarial examples have shown that albeit highly accurate, models lear...

12/17/2017 — Microbial community structure predicted by the stable marriage problem
Experimental studies of microbial communities routinely reveal several s...
