Bio-Inspired, Task-Free Continual Learning through Activity Regularization

12/08/2022
by Francesco Lässig, et al.

The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it remains a major challenge for deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed-forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
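
The winner-take-all sparsity mechanism mentioned above can be made concrete with a small, self-contained Python/NumPy sketch. This is not the authors' implementation: it omits the DFC feedback controller and the lateral recurrent connections, and the layer size, the value of k, and the function name k_winner_take_all are illustrative assumptions.

    # Minimal sketch of a k-winner-take-all (k-WTA) sparsity step:
    # only the k most active units in a layer keep their activity,
    # the rest are silenced.
    import numpy as np

    def k_winner_take_all(pre_activations: np.ndarray, k: int) -> np.ndarray:
        """Zero out all but the k largest entries of a 1-D activation vector."""
        sparse = np.zeros_like(pre_activations)
        winners = np.argsort(pre_activations)[-k:]   # indices of the k most active units
        sparse[winners] = pre_activations[winners]   # keep only the winners' activity
        return sparse

    # Toy usage: a hidden layer of 10 units, of which only 3 stay active.
    rng = np.random.default_rng(0)
    hidden = rng.standard_normal(10)
    print(k_winner_take_all(hidden, k=3))

In the paper's setting, which units win is shaped by both the feed-forward (stimulus-specific) drive and the top-down (context-specific) feedback provided by DFC, so that different tasks tend to recruit largely non-overlapping sets of units.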
