Forget-free Continual Learning with Soft-Winning SubNetworks

03/27/2023
by Haeyong Kang et al.

Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which states that competitive smooth (non-binary) subnetworks exist within a dense network in continual learning tasks, we investigate two architecture-based continual learning methods that sequentially learn and select adaptive binary subnetworks (WSN) and non-binary soft subnetworks (SoftNet) for each task. WSN and SoftNet jointly learn the regularized model weights and the task-adaptive non-binary masks of the subnetworks associated with each task, while selecting a small set of weights to activate (the winning ticket) by reusing weights of prior subnetworks. The proposed WSN and SoftNet are inherently immune to catastrophic forgetting, since each selected subnetwork does not interfere with the other subnetworks in Task Incremental Learning (TIL). In TIL, the binary masks spawned per winning ticket are encoded into a single N-bit mask and then compressed with Huffman coding, yielding a sub-linear increase in network capacity with respect to the number of tasks. Surprisingly, at inference time, a SoftNet generated by injecting small noise into the background weights of an acquired WSN (while keeping its foreground weights) provides strong forward transfer to future tasks in TIL. SoftNet also proves more effective than WSN at regularizing parameters against overfitting to the few examples available in Few-shot Class Incremental Learning (FSCIL).
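The masking idea described above can be illustrated with a short sketch. This is a minimal, hypothetical example and not the authors' code: the function names, the sparsity level, and the noise scale are assumptions. A binary winning-ticket mask is obtained by keeping the top-scoring fraction of weights, and a SoftNet-style mask keeps that foreground at 1 while filling the zeroed background with small random values (the "injected noise").

```python
import torch

def winning_ticket_mask(scores: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Select the top-(1 - sparsity) fraction of weights by learned score
    as the binary winning-ticket mask (1 = foreground, 0 = background)."""
    k = int((1.0 - sparsity) * scores.numel())
    threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
    return (scores >= threshold).float()

def soft_mask(binary_mask: torch.Tensor, noise_scale: float = 1e-2) -> torch.Tensor:
    """SoftNet-style mask: keep foreground entries at 1 and replace the
    zero background with small random values instead of hard zeros."""
    noise = noise_scale * torch.rand_like(binary_mask)
    return torch.where(binary_mask > 0, torch.ones_like(binary_mask), noise)

# Toy usage: derive a per-task subnetwork from shared dense weights.
weights = torch.randn(4, 4)          # dense, shared model weights
scores = torch.rand(4, 4)            # per-weight importance scores for this task
m_bin = winning_ticket_mask(scores, sparsity=0.7)   # WSN-style binary mask
m_soft = soft_mask(m_bin)                            # SoftNet-style soft mask
task_weights = weights * m_soft      # subnetwork used for this task
```

In this sketch the soft background keeps a small gradient path through "unused" weights, which is the intuition behind SoftNet's forward transfer and its regularization effect in FSCIL.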

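The abstract also mentions encoding the per-task binary masks into one N-bit symbol per weight and compressing the result with Huffman coding. Below is a small, self-contained sketch of that idea; the helper names and the toy masks are hypothetical, not the paper's implementation.

```python
import heapq
from collections import Counter
from itertools import count

def pack_task_masks(masks):
    """Pack N per-task binary masks (one 0/1 list per task) into a single
    N-bit integer symbol per weight."""
    return [int("".join(str(m[i]) for m in masks), 2) for i in range(len(masks[0]))]

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) for the packed mask symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    tie = count()                           # tie-breaker so heapq never compares dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Toy usage: three tasks, five weights.
masks = [[1, 0, 1, 1, 0],
         [1, 0, 0, 1, 0],
         [0, 1, 0, 1, 0]]
packed = pack_task_masks(masks)             # e.g. weight 0 -> 0b110 -> 6
code = huffman_code(packed)
compressed_bits = sum(len(code[s]) for s in packed)
print(packed, code, compressed_bits)
```

Because many weights share the same N-bit usage pattern across tasks, the Huffman code assigns short codewords to frequent patterns, which is what gives the sub-linear growth in stored mask capacity as tasks accumulate.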

Related research:

06/26/2023  Parameter-Level Soft-Masking for Continual Learning
09/15/2022  On the Soft-Subnetwork for Few-shot Class Incremental Learning
11/21/2019  Continual Learning with Adaptive Weights (CLAW)
08/14/2023  Ada-QPacknet – adaptive pruning with bit width reduction as an efficient continual learning method without forgetting
10/05/2022  ImpressLearn: Continual Learning via Combined Task Impressions
04/05/2019  Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning
10/22/2018  A neuro-inspired architecture for unsupervised continual learning based on online clustering and hierarchical predictive coding
