Synaptic Metaplasticity in Binarized Neural Networks

03/07/2020
by Axel Laborieux, et al.

While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: upon training a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest that in the brain, synapses overcome this issue by adjusting their plasticity depending on their past history. However, such "metaplastic" behaviour has never been leveraged to mitigate catastrophic forgetting in deep neural networks. In this work, we highlight a connection between metaplasticity models and the training process of binarized neural networks, a low-precision version of deep neural networks. Building on this idea, we propose and demonstrate experimentally, in multitask and stream-learning settings, a training technique that prevents catastrophic forgetting without requiring previously presented data or formal boundaries between datasets. We support our approach with a theoretical analysis on a tractable task. This work bridges computational neuroscience and deep learning, and offers significant advantages for future embedded and neuromorphic systems.
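
The connection the abstract alludes to can be made concrete. In a binarized neural network, each binary weight used in the forward pass is the sign of a real-valued hidden weight, and the paper's metaplastic rule attenuates any update that pulls a hidden weight back toward zero by a magnitude-dependent factor f_meta(W) = 1 - tanh^2(m * W), so weights that have drifted far from zero become hard to flip and consolidate past learning. The PyTorch snippet below is a minimal sketch of this idea under those assumptions; the function name metaplastic_update and the hyperparameter values are illustrative, not taken from the paper's code.

import torch

def metaplastic_update(hidden_w, grad, lr=0.01, m=1.0):
    # One metaplastic step on the hidden (real-valued) weights of a
    # binarized layer; the forward pass uses torch.sign(hidden_w).
    step = -lr * grad
    # An update "weakens" a weight when it points opposite to the hidden
    # weight, i.e. it pulls the weight toward zero and toward a sign flip.
    weakening = torch.sign(step) != torch.sign(hidden_w)
    # The attenuation decays with |hidden_w|: strongly consolidated
    # weights barely move when the update would weaken them.
    attenuation = 1.0 - torch.tanh(m * hidden_w) ** 2
    factor = torch.where(weakening, attenuation, torch.ones_like(hidden_w))
    return hidden_w + factor * step

# Toy usage: m = 0 recovers standard BNN training (no attenuation),
# while larger m consolidates high-magnitude hidden weights more strongly.
w = torch.randn(5)
g = torch.randn(5)
w = metaplastic_update(w, g, lr=0.1, m=1.3)
binary_w = torch.sign(w)  # weights actually used in the forward pass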

Related research

02/02/2018
Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization
Humans and most animals can learn new tasks without forgetting old ones....

11/12/2020
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting
Deep learning is often criticized by two serious issues which rarely exi...

01/07/2020
Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces
This work presents an architecture that generates curiosity-driven goal-...

07/22/2022
Revisiting Parameter Reuse to Overcome Catastrophic Forgetting in Neural Networks
Neural networks tend to forget previously learned knowledge when continu...

01/08/2018
Boundary Optimizing Network (BON)
Despite all the success that deep neural networks have seen in classifyi...

07/18/2022
The Multiple Subnetwork Hypothesis: Enabling Multidomain Learning by Isolating Task-Specific Subnetworks in Feedforward Neural Networks
Neural networks have seen an explosion of usage and research in the past...

04/27/2020
A general approach to progressive learning
In biological learning, data is used to improve performance on the task ...
