SpaceNet: Make Free Space For Continual Learning

07/15/2020
by Ghada Sokar, et al.

The continual learning (CL) paradigm aims to enable neural networks to learn tasks continually in a sequential fashion. The fundamental challenge in this paradigm is catastrophic forgetting of previously learned tasks when the model is optimized for a new task, especially when the old tasks' data is no longer accessible. Current architectural-based methods alleviate catastrophic forgetting, but at the expense of expanding the capacity of the model. Regularization-based methods maintain a fixed model capacity; however, previous studies have shown that these methods suffer severe performance degradation when the task identity is not available during inference (e.g., in the class incremental learning scenario). In this work, we propose a novel architectural-based method, referred to as SpaceNet, for the class incremental learning scenario, in which we utilize the available fixed capacity of the model intelligently. SpaceNet trains sparse deep neural networks from scratch in an adaptive way that compresses the sparse connections of each task into a compact number of neurons. This adaptive training of the sparse connections results in sparse representations that reduce the interference between tasks. Experimental results show the robustness of our proposed method against catastrophic forgetting of old tasks and the efficiency of SpaceNet in utilizing the available capacity of the model, leaving space for more tasks to be learned. In particular, when SpaceNet is tested on the well-known benchmarks for CL (split MNIST, split Fashion-MNIST, and CIFAR-10/100), it outperforms regularization-based methods by a large margin. Moreover, it achieves better performance than architectural-based methods without model expansion, and comparable results to rehearsal-based methods while offering a huge memory reduction.
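
The abstract gives only the high-level idea, not the algorithm itself. As a rough illustration, the sketch below shows how task-specific sparse connections might be trained adaptively inside a shared fixed-capacity weight matrix and then reserved so that later tasks cannot overwrite them. Everything here is an assumption for illustration: the helper names (new_sparse_mask, drop_and_grow), the drop-and-grow rewiring heuristic, and all density and fraction values are hypothetical and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def new_sparse_mask(shape, density, free):
    """Randomly activate a `density` fraction of connections among the
    positions that are still free (not reserved by earlier tasks)."""
    mask = np.zeros(shape, dtype=bool)
    free_idx = np.flatnonzero(free)
    k = int(density * free.size)
    chosen = rng.choice(free_idx, size=min(k, free_idx.size), replace=False)
    mask.flat[chosen] = True
    return mask

def drop_and_grow(W, mask, free, drop_frac=0.3):
    """One adaptive rewiring step (assumed, in the spirit of dynamic
    sparse training): drop the smallest-magnitude active weights of the
    current task and grow the same number of connections on free
    positions."""
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)
    # Drop: the smallest |w| among the current task's connections.
    drop = active[np.argsort(np.abs(W.flat[active]))[:n_drop]]
    mask.flat[drop] = False
    W.flat[drop] = 0.0
    # Grow: random free positions not owned by any task.
    candidates = np.flatnonzero(free & ~mask)
    grow = rng.choice(candidates, size=min(n_drop, candidates.size),
                      replace=False)
    mask.flat[grow] = True
    W.flat[grow] = rng.normal(scale=0.01, size=grow.size)

# Per-task loop over one layer (gradient updates elided for brevity).
shape = (784, 256)           # one hypothetical layer, for illustration
W = np.zeros(shape)          # shared fixed-capacity weight matrix
free = np.ones(shape, bool)  # connections not yet reserved by any task

for task in range(3):
    mask = new_sparse_mask(shape, density=0.05, free=free)
    for epoch in range(10):
        # ... compute gradients on the task's data, update only W[mask] ...
        drop_and_grow(W, mask, free)
    free &= ~mask            # freeze this task's connections so later
                             # tasks cannot overwrite them

The design point this sketch tries to capture is that capacity is consumed connection by connection rather than by adding new layers or columns: once a task's connections are frozen, the remaining free positions are the "space" left for future tasks within the same fixed-capacity model.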

