Adaptive Group Sparse Regularization for Continual Learning

03/30/2020
by   Sangwon Jung, et al.

We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), which uses two group-sparsity-based penalties. Our method selectively applies the two penalties to each node based on its importance, which is adaptively updated after learning each new task. By using proximal gradient descent for learning, exact sparsity and freezing of the model are guaranteed, so the learner can explicitly control the model capacity as learning continues. Furthermore, as a critical detail, we re-initialize the weights associated with unimportant nodes after learning each task, in order to prevent the negative transfer that causes catastrophic forgetting and to facilitate efficient learning of new tasks. Through extensive experiments, we show that AGS-CL uses far less additional memory for storing regularization parameters, and it significantly outperforms several state-of-the-art baselines on representative continual learning benchmarks for both supervised and reinforcement learning tasks.

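The abstract does not spell out the update rules, but the two key mechanics it names (a proximal gradient step that zeroes out whole node groups exactly, and re-initialization of unimportant nodes after each task) can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration, not the authors' code: the function names (prox_group_lasso, proximal_sgd_step, reinit_unimportant_nodes), the hyperparameters mu and tau, the importance vector, and the choice of grouping (the incoming weights of each node, i.e. the rows of a linear layer) are all hypothetical.

```python
# Minimal sketch (not the authors' implementation): a node-wise group-lasso
# proximal step and post-task re-initialization of unimportant nodes.
import torch

def prox_group_lasso(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    """Proximal operator of threshold * sum_n ||w_n||_2, applied row-wise.

    Rows whose L2 norm falls below `threshold` are set exactly to zero,
    which is what yields the exact sparsity mentioned in the abstract.
    """
    norms = weight.norm(dim=1, keepdim=True)                   # per-node group norm
    scale = torch.clamp(1.0 - threshold / (norms + 1e-12), min=0.0)
    return weight * scale

def proximal_sgd_step(layer: torch.nn.Linear, lr: float, mu: float) -> None:
    """One proximal gradient step: a plain SGD update on the data loss
    (gradients assumed already populated by loss.backward()), followed by
    the group-lasso proximal operator on the node groups."""
    with torch.no_grad():
        layer.weight -= lr * layer.weight.grad
        layer.weight.copy_(prox_group_lasso(layer.weight, lr * mu))

def reinit_unimportant_nodes(layer: torch.nn.Linear,
                             importance: torch.Tensor,
                             tau: float = 0.0) -> None:
    """After finishing a task, re-initialize the incoming weights of nodes
    whose adaptively updated importance is at most `tau` (a hypothetical
    threshold), to avoid negative transfer to future tasks."""
    with torch.no_grad():
        unimportant = importance <= tau                        # boolean mask over nodes
        fresh = torch.empty_like(layer.weight)
        torch.nn.init.kaiming_uniform_(fresh)
        layer.weight[unimportant] = fresh[unimportant]
```

Note that this sketch only covers the sparsity-inducing side: per the abstract, AGS-CL applies its two penalties selectively depending on node importance, so important nodes would instead receive a penalty that freezes them toward their previous values rather than the group-lasso term shown here.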
Related research

05/28/2019 - Uncertainty-based Continual Learning with Adaptive Regularization
We introduce a new regularization-based continual learning algorithm, du...

10/09/2020 - Continual learning using hash-routed convolutional neural networks
Continual learning could shift the machine learning paradigm from data c...

02/25/2019 - ORACLE: Order Robust Adaptive Continual LEarning
The order of the tasks a continual learning model encounters may have la...

05/28/2019 - Single-Net Continual Learning with Progressive Segmented Training (PST)
There is an increasing need of continual learning in dynamic systems, su...

06/12/2020 - CPR: Classifier-Projection Regularization for Continual Learning
We propose a general, yet simple patch that can be applied to existing r...

05/10/2021 - Continual Learning via Bit-Level Information Preserving
Continual learning tackles the setting of learning different tasks seque...

06/15/2021 - Natural continual learning: success is a journey, not (just) a destination
Biological agents are known to learn many different tasks over the cours...
