Fixed Inter-Neuron Covariability Induces Adversarial Robustness

08/07/2023
by Muhammad Ahmed Shah, et al.

The vulnerability to adversarial perturbations is a major flaw of Deep Neural Networks (DNNs) that raises questions about their reliability in real-world scenarios. Human perception, which DNNs are supposed to emulate, is by contrast highly robust to such perturbations, suggesting that certain features of human perception confer this robustness but are not represented in the current class of DNNs. One such feature is that the activity of biological neurons is correlated, and the structure of this correlation tends to remain rigid over long spans of time, even when it hampers performance and learning. We hypothesize that imposing such constraints on the activations of a DNN would improve its adversarial robustness. To test this hypothesis, we developed the Self-Consistent Activation (SCA) layer, which comprises neurons whose activations are consistent with one another because they conform to a fixed, but learned, covariability pattern. When evaluated on image and sound recognition tasks, models with an SCA layer achieved high accuracy and exhibited significantly greater robustness than multi-layer perceptron models to the state-of-the-art Auto-PGD adversarial attack, without being trained on adversarially perturbed data.
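
The abstract leaves the exact formulation of the SCA layer to the full paper. As a rough illustration of the underlying idea of a fixed, learned covariability pattern, the sketch below constrains a layer's activations to a learned low-rank subspace, so that the inter-neuron covariance structure is set by a learned basis rather than varying freely per input. The class name SelfConsistentActivation, the basis parameter, and the rank hyperparameter are all assumptions made for illustration; the authors' actual mechanism may differ.

```python
import torch
import torch.nn as nn


class SelfConsistentActivation(nn.Module):
    """Hypothetical sketch of an SCA-style layer.

    Assumption: "consistency" is imposed by projecting the layer's
    activations onto a learned low-rank basis, so the inter-neuron
    covariance pattern is fixed (up to scale) by that basis. This is
    an illustration, not the paper's implementation.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 16):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Learned basis whose span fixes the covariability pattern.
        self.basis = nn.Parameter(torch.randn(out_features, rank) / rank ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.relu(self.linear(x))
        # Orthonormalize the basis so the projection is well-defined.
        q, _ = torch.linalg.qr(self.basis)  # (out_features, rank)
        # Project activations onto the fixed covariability subspace.
        return (a @ q) @ q.T


# Usage: drop the layer into an otherwise ordinary classifier.
model = nn.Sequential(
    nn.Flatten(),
    SelfConsistentActivation(28 * 28, 256, rank=32),
    nn.Linear(256, 10),
)
```

The projection makes every activation vector a linear combination of the same learned directions, which is one simple way to hard-wire how neurons co-vary regardless of the input.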


