Biologically inspired protection of deep networks from adversarial attacks

03/27/2017
by Aran Nayebi, et al.

Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state-of-the-art performance on gradient-based adversarial examples on MNIST, despite never being exposed to adversarially chosen examples during training. Moreover, these networks exhibit unprecedented robustness to targeted, iterative schemes for generating adversarial examples, including second-order methods. We further identify principles governing how these networks achieve their robustness, drawing on methods from information geometry. We find these networks progressively create highly flat and compressed internal representations that are sensitive to very few input dimensions, while still solving the task. Furthermore, they employ highly kurtotic weight distributions, also found in the brain, and we demonstrate how such kurtosis can protect even linear classifiers from adversarial attack.
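The final claim about kurtotic weights has a simple intuition for a linear classifier: under an L-infinity-bounded (FGSM-style) perturbation of size eps, the worst-case change in the score w·x is eps times the L1 norm of w, so at a fixed L2 norm (fixed signal capacity), weight vectors whose mass is concentrated in a few coordinates (an extreme of high kurtosis) are harmed less. A minimal NumPy sketch of this effect, not taken from the paper's code (the vector names and dimensions here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000

# A dense Gaussian weight vector (low kurtosis)...
w_gauss = rng.standard_normal(d)
# ...versus a sparse one: only 10 nonzero entries, an extreme
# case of a highly kurtotic weight distribution.
w_kurt = np.zeros(d)
w_kurt[:10] = rng.standard_normal(10)

# Normalize both to equal L2 norm, so they carry comparable signal.
w_gauss /= np.linalg.norm(w_gauss)
w_kurt /= np.linalg.norm(w_kurt)

# For a linear score w.x, the worst-case score change under an
# L-infinity perturbation of size eps is eps * ||w||_1.
eps = 0.1
damage_gauss = eps * np.abs(w_gauss).sum()
damage_kurt = eps * np.abs(w_kurt).sum()

# The sparse (kurtotic) classifier is far less movable by the attack:
# ||w||_1 <= sqrt(nnz) * ||w||_2, so concentrating weight mass
# shrinks the adversary's leverage at fixed L2 norm.
print(f"dense Gaussian weights: worst-case score change {damage_gauss:.3f}")
print(f"sparse kurtotic weights: worst-case score change {damage_kurt:.3f}")
```

With these dimensions the dense vector's worst-case score change is roughly an order of magnitude larger than the sparse one's, which is the sense in which kurtosis "protects" a linear classifier.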

