Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting

11/12/2020
by Zeke Xie, et al.

Deep learning is often criticized for two serious issues that rarely exist in natural nervous systems: overfitting and catastrophic forgetting. A deep network can even memorize randomly labelled data, in which there is little knowledge behind the instance-label pairs. And when a deep network continually learns over time by accommodating new tasks, it usually quickly overwrites the knowledge learned from previous tasks. In neuroscience, it is well known that human brain reactions exhibit substantial variability even in response to the same stimulus, a phenomenon referred to as neural variability. This mechanism balances accuracy and plasticity/flexibility in the motor learning of natural nervous systems, and it motivates us to design a similar mechanism, named artificial neural variability (ANV), which helps artificial neural networks inherit some advantages of "natural" neural networks. We rigorously prove that ANV acts as an implicit regularizer of the mutual information between the training data and the learned model. This result theoretically guarantees ANV strictly improved generalizability, robustness to label noise, and robustness to catastrophic forgetting. We then devise a neural variable risk minimization (NVRM) framework and neural variable optimizers that achieve ANV for conventional network architectures in practice. Empirical studies demonstrate that NVRM effectively relieves overfitting, label-noise memorization, and catastrophic forgetting at negligible cost.
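To make the idea of a neural variable optimizer concrete, here is a minimal PyTorch-style sketch. It assumes the simplest reading of the abstract: at each iteration the weights are perturbed with zero-mean Gaussian noise ("variable weights"), gradients are evaluated at the perturbed weights, and the update is applied to the clean (mean) weights. The class name NeuralVariableSGD and the noise_std parameter are illustrative assumptions, not the paper's actual implementation, which may differ in the noise distribution and in how the perturbation interacts with the base optimizer.

```python
import torch

class NeuralVariableSGD:
    """Hypothetical sketch of a neural variable optimizer (not the
    paper's code): perturb weights with zero-mean noise, compute the
    loss and gradients at the perturbed weights, then update the
    clean weights with plain SGD."""

    def __init__(self, params, lr=0.01, noise_std=0.01):
        self.params = list(params)
        self.lr = lr
        self.noise_std = noise_std
        self._noise = []

    def perturb(self):
        # Inject i.i.d. Gaussian noise into every weight tensor; the
        # forward/backward pass should run while weights are perturbed.
        self._noise = []
        with torch.no_grad():
            for p in self.params:
                eps = torch.randn_like(p) * self.noise_std
                p.add_(eps)
                self._noise.append(eps)

    def restore(self):
        # Remove the sampled noise so the update applies to the mean weights.
        with torch.no_grad():
            for p, eps in zip(self.params, self._noise):
                p.sub_(eps)

    def step(self):
        # Plain SGD step at the (restored) mean weights.
        with torch.no_grad():
            for p in self.params:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-self.lr)

# Per-iteration usage (model, criterion, x, y are placeholders):
#   opt.perturb()                        # sample variable weights
#   loss = criterion(model(x), y)        # evaluate at perturbed weights
#   model.zero_grad(); loss.backward()
#   opt.restore()                        # back to mean weights
#   opt.step()                           # update the mean weights
```

Under this reading, the weight noise keeps the learned model from depending too tightly on any particular training set, which is one intuitive route to the mutual-information regularization effect the abstract describes.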


Related research

03/07/2020
Synaptic Metaplasticity in Binarized Neural Networks
While deep neural networks have surpassed human performance in multiple ...

02/02/2018
Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization
Humans and most animals can learn new tasks without forgetting old ones....

07/30/2023
Pupil Learning Mechanism
Studies on artificial neural networks rarely address both vanishing grad...

12/24/2022
Utilizing Priming to Identify Optimal Class Ordering to Alleviate Catastrophic Forgetting
In order for artificial neural networks to begin accurately mimicking bi...

12/13/2011
Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns
Matching animal-like flexibility in recognition and the ability to quick...

01/03/2023
Detecting Information Relays in Deep Neural Networks
Deep learning of artificial neural networks (ANNs) is creating highly fu...

05/19/2021
Variability of Artificial Neural Networks
What makes an artificial neural network easier to train and more likely ...
