NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks

06/18/2022
by   Mustafa Burak Gurbuz, et al.

The goal of continual learning (CL) is to learn different tasks over time. The main desiderata associated with CL are to maintain performance on older tasks, to leverage knowledge from those tasks to improve learning of future tasks, and to introduce minimal overhead in the training process (for instance, no growing model or retraining). We propose the Neuro-Inspired Stability-Plasticity Adaptation (NISPA) architecture, which addresses these desiderata through a sparse neural network with fixed density. NISPA forms stable paths to preserve learned knowledge from older tasks, and it uses connection rewiring to create new plastic paths that reuse existing knowledge on novel tasks. Our extensive evaluation on the EMNIST, FashionMNIST, CIFAR10, and CIFAR100 datasets shows that NISPA significantly outperforms representative state-of-the-art continual learning baselines while using up to ten times fewer learnable parameters. We also make the case that sparsity is an essential ingredient for continual learning. The NISPA code is available at https://github.com/BurakGurbuz97/NISPA.
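The abstract only outlines the mechanism, so here is a minimal sketch of the fixed-density, stable-versus-plastic idea it describes, written in PyTorch. The class name SparseLinear, the density and drop_fraction values, and the magnitude-based drop with random regrow heuristic are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch only: a mask-based sparse layer with fixed density,
# frozen "stable" units, and a drop-and-regrow rewiring step. Not the NISPA
# reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseLinear(nn.Module):
    """Sparse layer with a fixed number of active connections."""

    def __init__(self, in_features, out_features, density=0.2):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        # Fixed-density binary mask over connections.
        n_active = int(density * self.weight.numel())
        flat = torch.zeros(self.weight.numel())
        flat[torch.randperm(self.weight.numel())[:n_active]] = 1.0
        self.register_buffer("mask", flat.view_as(self.weight))
        # Units marked stable keep their incoming weights frozen across tasks.
        self.register_buffer("stable_units",
                             torch.zeros(out_features, dtype=torch.bool))

    def forward(self, x):
        return F.linear(x, self.weight * self.mask)

    def mask_gradients(self):
        # Call after loss.backward(): block updates to inactive connections
        # and to connections entering stable units, preserving old knowledge.
        if self.weight.grad is not None:
            self.weight.grad.mul_(self.mask)
            self.weight.grad[self.stable_units] = 0.0

    @torch.no_grad()
    def rewire(self, drop_fraction=0.1):
        # Drop the weakest plastic connections and regrow the same number at
        # random inactive positions, so overall density stays constant.
        plastic = self.mask.bool() & ~self.stable_units.unsqueeze(1)
        active_idx = plastic.flatten().nonzero(as_tuple=True)[0]
        n_drop = int(drop_fraction * len(active_idx))
        if n_drop == 0:
            return
        magnitudes = self.weight.abs().flatten()[active_idx]
        drop_idx = active_idx[torch.argsort(magnitudes)[:n_drop]]
        flat_mask = self.mask.view(-1)
        flat_mask[drop_idx] = 0.0
        inactive_idx = (flat_mask == 0).nonzero(as_tuple=True)[0]
        grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:n_drop]]
        flat_mask[grow_idx] = 1.0
        # New connections start at zero so they do not disturb current outputs.
        self.weight.view(-1)[grow_idx] = 0.0
```

In this sketch, training on a new task would call mask_gradients() after every backward pass and rewire() periodically; the paper additionally promotes units that become important for the current task to the stable set, a selection step not reproduced here.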


Related research:

- New Insights for the Stability-Plasticity Dilemma in Online Continual Learning (02/17/2023): The aim of continual learning is to learn new tasks continuously (i.e., ...
- TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models (04/29/2022): Language Models (LMs) become outdated as the world changes; they often f...
- Compression-aware Continual Learning using Singular Value Decomposition (09/03/2020): We propose a compression based continual task learning method that can d...
- Exploring Example Influence in Continual Learning (09/25/2022): Continual Learning (CL) sequentially learns new tasks like human beings,...
- Continual Learning for Text Classification with Information Disentanglement Based Regularization (04/12/2021): Continual learning has become increasingly important as it enables NLP m...
- Meta-attention for ViT-backed Continual Learning (03/22/2022): Continual learning is a longstanding research topic due to its crucial r...
- Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning (04/05/2019): Models trained in the context of continual learning (CL) should be able ...
