Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks

08/14/2022
by   Lei Jiang, et al.

Lateral inhibitory connections have been observed in the cortex of the biological brain and have been extensively studied in terms of their role in cognitive functions. However, in the vanilla version of backpropagation in deep learning, all gradients (which can be understood to comprise both signal and noise gradients) flow through the network during weight updates, which may lead to overfitting. In this work, inspired by biological lateral inhibition, we propose Gradient Mask, which effectively filters out noise gradients during backpropagation. This allows the learned feature information to be stored more intensively in the network while noisy or unimportant features are filtered out. Furthermore, we demonstrate analytically how lateral inhibition in artificial neural networks improves the quality of propagated gradients. A new criterion for gradient quality is proposed, which can be used as a measure during the training of various convolutional neural networks (CNNs). Finally, we conduct several experiments to study, both quantitatively and qualitatively, how Gradient Mask improves network performance. Quantitatively, accuracy in the original CNN architecture, accuracy after pruning, and accuracy after adversarial attacks all show improvements. Qualitatively, a CNN trained with Gradient Mask develops saliency maps that focus primarily on the object of interest, which is useful for data augmentation and network interpretability.
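The abstract describes suppressing noise gradients during backpropagation. The paper's exact masking criterion is not given here, so the following is only a minimal NumPy sketch of one plausible interpretation: a magnitude-based mask that keeps the largest-magnitude gradient entries and zeroes the rest, loosely analogous to lateral inhibition suppressing weak signals. The `keep_ratio` parameter and the top-k selection rule are assumptions for illustration, not the authors' method.

```python
import numpy as np

def gradient_mask(grad, keep_ratio=0.5):
    """Zero out the smallest-magnitude entries of a gradient array.

    Keeps roughly the top `keep_ratio` fraction of entries by absolute
    value. `keep_ratio` is a hypothetical parameter chosen for this
    sketch; the paper's actual filtering criterion may differ.
    """
    flat = np.abs(grad).ravel()
    k = max(1, int(flat.size * keep_ratio))
    # Threshold at the k-th largest magnitude.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(grad) >= threshold
    return grad * mask

# Example: mask a random "gradient" tensor, keeping the top 25%.
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4))
masked = gradient_mask(g, keep_ratio=0.25)
```

In a real training loop such a mask would be applied to each layer's gradient before the optimizer step (e.g. via a backward hook in a deep learning framework), so that only the strongest gradient components update the weights.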

