Dynamically Modular and Sparse General Continual Learning

01/02/2023
by Arnav Varma et al.

Real-world applications often require learning continuously from a stream of data under ever-changing conditions. When learning from such non-stationary data, deep neural networks (DNNs) undergo catastrophic forgetting of previously learned information. Among the common approaches to avoiding catastrophic forgetting, rehearsal-based methods have proven effective. However, they remain prone to forgetting due to task interference, since all parameters respond to all tasks. To counter this, we take inspiration from sparse coding in the brain and introduce dynamic modularity and sparsity (Dynamos) for rehearsal-based general continual learning. In this setup, the DNN learns to respond to stimuli by activating only the relevant subsets of neurons. We demonstrate the effectiveness of Dynamos on multiple datasets under challenging continual-learning evaluation protocols. Finally, we show that our method learns representations that are modular and specialized, while maintaining reusability: it activates subsets of neurons whose overlap corresponds to the similarity of the stimuli.
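
To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch of (a) an input-conditioned gate that activates only a small subset of a layer's units per stimulus, and (b) a fixed-size rehearsal buffer replayed alongside the incoming stream. This is not the authors' Dynamos implementation: the GatedLinear and ReservoirBuffer names, the hard top-k gating rule with a straight-through estimator, and the use of reservoir sampling are all illustrative assumptions.

```python
# Sketch only: input-conditioned sparse gating + rehearsal buffer.
# Names, the top-k rule, and the buffer policy are assumptions,
# not the paper's actual method.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedLinear(nn.Module):
    """Linear layer whose units are switched on per sample by a small gater."""

    def __init__(self, in_features: int, out_features: int, active_units: int):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.gater = nn.Linear(in_features, out_features)  # scores each unit
        self.active_units = active_units  # k units kept active per sample

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gater(x)                              # (batch, out_features)
        topk = scores.topk(self.active_units, dim=-1).indices
        hard_mask = torch.zeros_like(scores).scatter_(-1, topk, 1.0)
        # Straight-through estimator: the forward pass uses the hard 0/1 mask,
        # while gradients flow through the soft sigmoid scores.
        soft = torch.sigmoid(scores)
        mask = hard_mask + soft - soft.detach()
        return F.relu(self.fc(x)) * mask


class ReservoirBuffer:
    """Fixed-size rehearsal memory filled by reservoir sampling."""

    def __init__(self, capacity: int):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)  # replace with decaying probability

    def sample(self, n: int):
        assert self.data, "buffer is empty"
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)
```

In this sketch, each training step would concatenate the incoming batch with a batch drawn from the buffer, so the sparse subnetworks selected for past stimuli keep receiving gradient; similar inputs select overlapping masks, which is what permits reuse without full interference.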


Related research

03/12/2022  Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations
Continual/lifelong learning from a non-stationary input data stream is a...

07/15/2021  Algorithmic insights on continual learning from fruit flies
Continual learning in computational systems is challenging due to catast...

12/08/2022  Bio-Inspired, Task-Free Continual Learning through Activity Regularization
The ability to sequentially learn multiple tasks without forgetting is a...

06/16/2019  Conditional Computation for Continual Learning
Catastrophic forgetting of connectionist neural networks is caused by th...

12/10/2019  Reducing Catastrophic Forgetting in Modular Neural Networks by Dynamic Information Balancing
Lifelong learning is a very important step toward realizing robust auton...

01/25/2022  Representation learnt by SGD and Adaptive learning rules – Conditions that Vary Sparsity and Selectivity in Neural Network
From the point of view of the human brain, continual learning can perfor...

07/29/2021  Few-Shot and Continual Learning with Attentive Independent Mechanisms
Deep neural networks (DNNs) are known to perform well when deployed to t...
