Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations

02/26/2022
by Metehan Cekic, et al.

While end-to-end training of Deep Neural Networks (DNNs) yields state-of-the-art performance in an increasing array of applications, it does not provide insight into, or control over, the features being extracted. We report here on a promising neuro-inspired approach to DNNs with sparser and stronger activations. We use standard stochastic gradient training, supplementing the end-to-end discriminative cost function with layer-wise costs that promote Hebbian ("fire together, wire together") updates for highly active neurons and anti-Hebbian updates for the remaining neurons. Instead of batch norm, we use divisive normalization of activations (suppressing weak outputs using strong outputs), along with implicit ℓ_2 normalization of neuronal weights. Experiments on standard CIFAR-10 image classification demonstrate that, relative to baseline end-to-end trained architectures, our proposed architecture (a) leads to sparser activations (with only a slight compromise on accuracy), (b) exhibits more robustness to noise (without being trained on noisy data), and (c) exhibits more robustness to adversarial perturbations (without adversarial training).
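To make the mechanism concrete, here is a minimal PyTorch sketch of the ingredients the abstract describes: divisive normalization of activations, implicit ℓ_2 normalization of weights, and a Hebbian/anti-Hebbian layer-wise cost. All names and design details below (DivisiveNorm, hebbian_anti_hebbian_cost, the choice of k_active, the exact pooling used in the normalizer) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: names and details are assumptions,
# not the authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DivisiveNorm(nn.Module):
    """Divisive normalization: each activation is divided by the pooled
    response at its location, so strong outputs suppress weak ones
    (used here in place of batch norm)."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, channels, H, W); pool the energy across channels
        denom = z.pow(2).mean(dim=1, keepdim=True).sqrt() + self.eps
        return z / denom


def l2_normalize_filters(weight: torch.Tensor) -> torch.Tensor:
    """Implicit l2 normalization: constrain each output filter of a
    conv weight (out, in, kH, kW) to unit Euclidean norm."""
    norms = weight.flatten(1).norm(dim=1).clamp_min(1e-6)
    return weight / norms.view(-1, 1, 1, 1)


def hebbian_anti_hebbian_cost(z: torch.Tensor, k_active: int) -> torch.Tensor:
    """Layer-wise cost on non-negative (post-ReLU) activations:
    minimizing it strengthens the k most active neurons per sample
    (Hebbian) and weakens the rest (anti-Hebbian), promoting a
    sparse, strong code."""
    flat = z.flatten(1)                                      # (batch, units)
    topk_sum = flat.topk(k_active, dim=1).values.sum(dim=1)  # strongest responses
    rest_sum = flat.sum(dim=1) - topk_sum                    # everything else
    return (rest_sum - topk_sum).mean()
```

In a training step, the layer-wise terms would simply be added to the discriminative loss, e.g. `loss = F.cross_entropy(logits, y) + lam * sum(hebbian_anti_hebbian_cost(F.relu(z), k) for z in hidden_activations)`, where `lam` and `k` are hypothetical tuning knobs.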

Related research

- Fixed Inter-Neuron Covariability Induces Adversarial Robustness (08/07/2023)
- Regularizing deep networks using efficient layerwise adversarial training (05/22/2017)
- Stochastic Whitening Batch Normalization (06/03/2021)
- Nested Learning For Multi-Granular Tasks (07/13/2020)
- Generalizable Adversarial Training via Spectral Normalization (11/19/2018)
- On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars (09/19/2021)
- Regularizing activations in neural networks via distribution matching with the Wasserstein metric (02/13/2020)
