Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks

02/02/2022
by   Anne Harrington, et al.

Recent work suggests that the representations learned by adversarially robust networks are more perceptually aligned with humans than those of non-robust networks, as probed via image manipulations. Although robust representations appear closer to human visual perception, it is unclear whether the constraints they encode match the biological constraints found in human vision. Human vision appears to rely on texture-based, summary-statistic representations in the periphery, which have been shown to explain phenomena such as crowding and performance on visual search tasks. To understand how adversarially robust optimization and the representations it produces compare to human vision, we performed a psychophysics experiment using a set of metameric discrimination tasks, evaluating how well human observers could distinguish images synthesized to match adversarially robust representations, compared with images synthesized from non-robust representations and from a texture-synthesis model of peripheral vision (Texforms). We found that the discriminability of robust-representation and texture-model images decreased to near-chance performance as stimuli were presented farther in the periphery. Moreover, performance on robust and texture-model images showed similar trends within participants, while performance on non-robust representations changed minimally across the visual field. Together, these results suggest that (1) adversarially robust representations capture peripheral computation better than non-robust representations, and (2) they do so about as well as the current state-of-the-art texture model of peripheral vision. More broadly, our findings support the idea that localized texture summary-statistic representations may drive human invariance to adversarial perturbations, and that incorporating such representations into DNNs could give rise to useful properties such as adversarial robustness.
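As a rough illustration of how representation-matched stimuli of this kind can be generated, the sketch below performs feature-matching synthesis by gradient descent: a noise image is optimized until its activations in a chosen network layer match those of a reference image. This is a minimal sketch assuming PyTorch; the torchvision ResNet-50, the truncation point, and the optimizer settings are illustrative assumptions rather than the authors' exact pipeline (which additionally contrasts robustly trained weights against standard ones and against the Texform texture model).

```python
# Minimal feature-matching metamer synthesis sketch (assumed setup, not
# the paper's exact pipeline).
import torch
import torchvision.models as models

# Any classifier can stand in here; a robustly trained checkpoint would
# be loaded the same way (loading such a checkpoint is left hypothetical).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimized

# Truncate the network before pooling/classification to expose features.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-2])

def synthesize_metamer(reference, steps=500, lr=0.05):
    """Optimize a noise image so its features match the reference's."""
    target = feature_extractor(reference)
    metamer = torch.rand_like(reference).requires_grad_(True)
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(feature_extractor(metamer), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            metamer.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return metamer.detach()

# Usage: reference is a (1, 3, H, W) tensor scaled to [0, 1].
# metamer = synthesize_metamer(reference)
```

Stimuli synthesized this way from two different models are, by construction, matched to each model's representation of the same reference image, so observer discrimination performance across eccentricities can be compared between models.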


