Continuous Learning in a Single-Incremental-Task Scenario with Spike Features

05/03/2020
by   Ruthvik Vaila, et al.

Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to learn sequentially, that is, when a DNN is trained on a first task and then trained on a second task, it forgets the first. This forgetting of previous tasks is referred to as catastrophic forgetting. The mammalian brain, by contrast, outperforms DNNs in both energy efficiency and the ability to learn sequentially without catastrophic forgetting. Here, we use bio-inspired Spike Timing Dependent Plasticity (STDP) in the feature extraction layers of the network, with instantaneous neurons, to extract meaningful features. In the classification section of the network we use a modified synaptic intelligence, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting in a Single-Incremental-Task (SIT) scenario. In this study, we use the MNIST handwritten digits dataset, divided into five sub-tasks.
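The regularizer described above builds on synaptic intelligence (Zenke et al., 2017), which anchors parameters that were important for earlier tasks with a quadratic penalty. The abstract does not give the authors' exact cost-per-synapse formula, so the following is a minimal NumPy sketch of the standard synaptic-intelligence scheme it modifies; all function names here are illustrative, not from the paper.

```python
import numpy as np

def accumulate_path_integral(w, grad, delta_theta):
    """Running per-parameter credit for loss reduction along training.

    Called after each optimizer step: `grad` is the gradient before the
    step and `delta_theta` the parameter change it produced. The term
    -grad * delta_theta is positive when the step reduced the loss.
    """
    return w - grad * delta_theta

def importance(path_integral, total_drift, xi=1e-3):
    """Per-parameter importance Omega for the finished task.

    Normalizes the accumulated credit by the squared total parameter
    drift over the task; `xi` damps parameters that barely moved.
    """
    return np.maximum(path_integral, 0.0) / (total_drift ** 2 + xi)

def si_penalty(theta, theta_prev, omega, c=0.1):
    """Quadratic surrogate loss added when training the next task.

    Pulls each parameter toward its value `theta_prev` at the end of
    the previous task, weighted by its importance `omega`.
    """
    return c * float(np.sum(omega * (theta - theta_prev) ** 2))
```

During sequential training, the total loss for task t would be the task loss plus `si_penalty(theta, theta_prev, omega)`, so parameters with large importance are kept near their old values while unimportant ones remain free to adapt.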


Related research:

05/20/2019 - Catastrophic forgetting: still a problem for DNNs
We investigate the performance of DNNs when trained on class-incremental...

03/02/2019 - Attention-Based Structural-Plasticity
Catastrophic forgetting/interference is a critical problem for lifelong ...

05/20/2019 - A comprehensive, application-oriented study of catastrophic forgetting in DNNs
We present a large-scale empirical study of catastrophic forgetting (CF)...

02/07/2021 - Online Limited Memory Neural-Linear Bandits with Likelihood Matching
We study neural-linear bandits for solving problems where both explorati...

02/08/2019 - Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting
Stochastic gradient descent requires that training samples be drawn from...

04/13/2022 - Sapinet: A sparse event-based spatiotemporal oscillator for learning in the wild
We introduce Sapinet – a spike timing (event)-based multilayer neural ne...

08/09/2022 - Continual Prune-and-Select: Class-incremental learning with specialized subnetworks
The human brain is capable of learning tasks sequentially mostly without...
