Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

11/12/2021
by Joel Dapello, et al.

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems. Recent work has proposed adding biologically-inspired components to visual neural networks as a way to improve their adversarial robustness. One surprisingly effective component for reducing adversarial vulnerability is response stochasticity, like that exhibited by biological neurons. Here, using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. Next, we generalize these results to the auditory domain, showing that neural stochasticity also makes auditory models more robust to adversarial perturbations. Geometric analysis of the stochastic networks reveals overlap between representations of clean and adversarially perturbed stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. Our results shed light on the strategies of robust perception utilized by adversarially trained and stochastic networks, and help explain how stochasticity may be beneficial to machine and biological computation.
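To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch, written as an illustration rather than the authors' released code: a stochastic activation with Poisson-like, rate-dependent noise (in the spirit of the biologically-inspired stochasticity the paper studies), and a crude proxy for the overlap between clean and adversarial representation clouds. The names `StochasticReLU` and `representation_overlap`, and all parameter choices, are hypothetical.

```python
import torch
import torch.nn as nn


class StochasticReLU(nn.Module):
    """Illustrative stochastic activation (not the paper's exact layer):
    rectify, then add noise whose variance grows with the firing rate,
    loosely mimicking the trial-to-trial variability of real neurons.
    Noise is applied at inference as well as training."""

    def __init__(self, noise_scale: float = 1.0):
        super().__init__()
        self.noise_scale = noise_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rate = torch.relu(x)
        # Poisson-like noise: standard deviation scales with sqrt(rate).
        noise = torch.randn_like(rate) * torch.sqrt(self.noise_scale * rate + 1e-8)
        return torch.relu(rate + noise)


def representation_overlap(model: nn.Module, clean: torch.Tensor,
                           adv: torch.Tensor, n_samples: int = 32) -> float:
    """Crude proxy for the geometric overlap described in the abstract:
    the distance between the mean clean and mean adversarial
    representations, normalized by the average spread of each
    stochastic response cloud. Small values mean the clouds overlap."""
    with torch.no_grad():
        clean_reps = torch.stack([model(clean) for _ in range(n_samples)])
        adv_reps = torch.stack([model(adv) for _ in range(n_samples)])
    centroid_gap = (clean_reps.mean(0) - adv_reps.mean(0)).norm()
    spread = 0.5 * (clean_reps.std(0).norm() + adv_reps.std(0).norm())
    return (centroid_gap / (spread + 1e-8)).item()
```

With a stochastic model, a normalized gap near or below 1 indicates that the clean and adversarial response clouds overlap substantially relative to their own variability, a rough analogue of the overlap the paper reports for stochastic networks; the paper itself uses manifold-capacity-style geometric analyses rather than this simple centroid statistic.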


Related research

Adversarial and Natural Perturbations for General Robustness (10/03/2020)
In this paper we aim to explore the general robustness of neural network...

Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks (02/02/2022)
Recent work suggests that representations learned by adversarially robus...

Perception Over Time: Temporal Dynamics for Robust Image Understanding (03/11/2022)
While deep learning surpasses human-level performance in narrow and spec...

Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers (02/09/2021)
Neural networks trained on visual data are well-known to be vulnerable t...

Fast Training of Deep Neural Networks Robust to Adversarial Perturbations (07/08/2020)
Deep neural networks are capable of training fast and generalizing well ...

Fixed Inter-Neuron Covariability Induces Adversarial Robustness (08/07/2023)
The vulnerability to adversarial perturbations is a major flaw of Deep N...

Brain-like representational straightening of natural movies in robust feedforward neural networks (08/26/2023)
Representational straightening refers to a decrease in curvature of visu...
