Compacting, Picking and Growing for Unforgetting Continual Learning

10/15/2019
by Steven C. Y. Hung, et al.

Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression with weight pruning, critical weight selection, and progressive network expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and has several favorable characteristics. First, it avoids forgetting (i.e., it learns new tasks while remembering all previous ones). Second, it allows model expansion but maintains model compactness when handling sequential tasks. Moreover, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated from previous tasks helps build a better model for a new task than training that model independently on the task. Experimental results show that our approach can incrementally learn a deep model to tackle multiple tasks without forgetting, while maintaining model compactness and achieving better performance than individual task training.
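Below is a minimal, self-contained PyTorch sketch of the iterative compact/pick/grow cycle the abstract describes, under simplifying assumptions: a single hidden-layer backbone, one-shot magnitude pruning in place of the paper's gradual pruning, and reuse of all frozen weights instead of a learned picking mask. The class name (`CompactPickGrowLayer`), the toy regression stream, and all hyperparameters (keep ratio, growth step, accuracy goal) are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompactPickGrowLayer(nn.Module):
    """Toy shared backbone (one hidden layer) managed by a compact/pick/grow cycle.

    frozen_mask marks weights owned by previously learned tasks; those entries are
    never updated again, which is what prevents forgetting in this sketch.
    """

    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(hidden, in_dim) * 0.1)
        self.register_buffer("frozen_mask", torch.zeros(hidden, in_dim))

    @property
    def hidden(self) -> int:
        return self.weight.shape[0]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Picking" is simplified here: the new task reuses *all* frozen weights.
        # The paper instead learns a binary mask that picks a critical subset.
        return F.relu(F.linear(x, self.weight))

    def mask_gradients(self) -> None:
        """Zero the gradient on frozen weights so previous tasks are never touched."""
        if self.weight.grad is not None:
            self.weight.grad *= (1.0 - self.frozen_mask)

    def compact(self, keep_ratio: float = 0.5) -> None:
        """Magnitude-pruning stand-in: freeze the current task's most important
        released weights and reinitialize the rest for future tasks."""
        with torch.no_grad():
            free = 1.0 - self.frozen_mask
            free_vals = self.weight[free.bool()].abs()
            if free_vals.numel() == 0:
                return  # no released weights left to compact
            k = max(1, int(free_vals.numel() * keep_ratio))
            thresh = free_vals.topk(k).values[-1]
            keep = (self.weight.abs() >= thresh).float() * free
            self.frozen_mask += keep
            released = 1.0 - self.frozen_mask
            self.weight *= self.frozen_mask                       # old tasks keep their weights
            self.weight += torch.randn_like(self.weight) * 0.1 * released

    def grow(self, extra_hidden: int) -> None:
        """Progressive expansion: add hidden units when the released capacity is
        not enough for the new task to reach its goal."""
        new_rows = torch.randn(extra_hidden, self.weight.shape[1]) * 0.1
        self.weight = nn.Parameter(torch.cat([self.weight.detach(), new_rows]))
        self.frozen_mask = torch.cat([self.frozen_mask, torch.zeros_like(new_rows)])


def learn_task(backbone: CompactPickGrowLayer, x: torch.Tensor, y: torch.Tensor,
               goal: float = 0.05, steps: int = 400, lr: float = 0.01,
               max_hidden: int = 64):
    """Train one task: reuse picked weights plus released capacity, grow if the
    goal is missed, then compact so the next task inherits a tidy backbone."""
    while True:
        head = nn.Linear(backbone.hidden, y.shape[1])   # each task gets its own head
        opt = torch.optim.Adam(
            list(backbone.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.mse_loss(head(backbone(x)), y)
            loss.backward()
            backbone.mask_gradients()                   # protect all previous tasks
            opt.step()
        if loss.item() <= goal or backbone.hidden >= max_hidden:
            break
        backbone.grow(extra_hidden=8)                   # "growing"
    backbone.compact(keep_ratio=0.5)                    # "compacting"
    return head, loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    backbone = CompactPickGrowLayer(in_dim=8, hidden=16)
    heads = []
    for task_id in range(3):                            # a stream of toy regression tasks
        w_true = 0.5 * torch.randn(2, 8)
        x = torch.randn(256, 8)
        y = x @ w_true.t()
        head, final_loss = learn_task(backbone, x, y)
        heads.append(head)                              # kept so old tasks stay usable
        print(f"task {task_id}: hidden={backbone.hidden}, loss={final_loss:.3f}")
```

Because frozen weights never receive gradient updates, each earlier task's sub-network (its frozen weights plus its stored head) remains exactly reproducible, while the released and newly grown weights give the next task room to adapt.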


Related research:

05/31/2018 · Reinforced Continual Learning
Most artificial intelligence models have limiting ability to solve new t...

08/14/2023 · Ada-QPacknet – adaptive pruning with bit width reduction as an efficient continual learning method without forgetting
Continual Learning (CL) is a process in which there is still huge gap be...

03/11/2019 · Continual Learning via Neural Pruning
We introduce Continual Learning via Neural Pruning (CLNP), a new method ...

04/08/2020 · Continual Learning with Gated Incremental Memories for sequential data processing
The ability to learn in dynamic, nonstationary environments without forg...

04/21/2020 · Bayesian Nonparametric Weight Factorization for Continual Learning
Naively trained neural networks tend to experience catastrophic forgetti...

11/18/2022 · Building a Subspace of Policies for Scalable Continual Learning
The ability to continuously acquire new knowledge and skills is crucial ...

05/16/2018 · Progress & Compress: A scalable framework for continual learning
We introduce a conceptually simple and scalable framework for continual ...
