Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting

02/08/2019
by   Jason M. Allred, et al.

Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target subsets of a segmented Spiking Neural Network (SNN) to adapt to novel information while protecting the information in the remainder of the SNN from catastrophic forgetting. In our system, novel information triggers stimulated firing, inspired by biological dopamine signals, to boost STDP in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces accuracy degradation while learning new information. Our preliminary results on the MNIST dataset validate the capability of such a system to learn successfully over time from an unknown, changing environment, achieving up to 93.88% accuracy.
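The core mechanism — a local, pair-based STDP update whose magnitude is boosted for neurons targeted by stimulated firing when an input looks novel — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the trace constants, cosine-similarity novelty test, boost factor, and weight bounds are all assumptions made for the example.

```python
import numpy as np

def stdp_update(w, pre_trace, post_spike, post_trace, pre_spike,
                a_plus=0.01, a_minus=0.012, boost=1.0):
    """Pair-based STDP on a (n_out, n_in) weight matrix.

    Potentiate when a postsynaptic spike follows recent presynaptic
    activity (pre_trace); depress when a presynaptic spike follows
    recent postsynaptic activity (post_trace). `boost` scales the
    update for neurons targeted by stimulated firing.
    """
    dw = boost * (a_plus * np.outer(post_spike, pre_trace)
                  - a_minus * np.outer(post_trace, pre_spike))
    return np.clip(w + dw, 0.0, 1.0)  # keep synapses in a bounded range

def is_novel(x, w, threshold=0.8):
    """Flag an input as an outlier if no neuron's weight vector is
    sufficiently similar to it (cosine similarity below threshold)."""
    sims = w @ x / (np.linalg.norm(w, axis=1) * np.linalg.norm(x) + 1e-9)
    return sims.max() < threshold

rng = np.random.default_rng(0)
n_in, n_out = 16, 4
w = rng.uniform(0.2, 0.4, size=(n_out, n_in))

# A binary input pattern and its decayed presynaptic spike trace.
x = np.zeros(n_in)
x[:8] = 1.0
pre_trace = 0.7 * x

# "Stimulated firing": force the targeted neuron (index 0) to spike,
# so only its synapses receive the boosted STDP update.
post_spike = np.zeros(n_out)
post_spike[0] = 1.0

boost = 3.0 if is_novel(x, w) else 1.0
w2 = stdp_update(w, pre_trace, post_spike,
                 post_trace=np.zeros(n_out), pre_spike=np.zeros(n_in),
                 boost=boost)
```

Because the update is local to the neurons that fire, only the targeted row of `w` changes; the remaining neurons' synapses are untouched, which is the locality property the abstract exploits to protect learned information from forgetting.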

