Overcoming Catastrophic Forgetting by Soft Parameter Pruning

12/04/2018
by Jian Peng, et al.

Catastrophic forgetting is a challenging issue in continual learning: a deep neural network forgets the knowledge acquired on earlier tasks after it is trained on subsequent ones. Existing methods try to find a joint distribution of parameters shared across all tasks. This idea is questionable because such a joint distribution may not exist as the number of tasks increases. It also leads to a "long-term" memory issue when network capacity is limited, since every added task "eats" into that capacity. In this paper, we propose a Soft Parameter Pruning (SPP) strategy that trades off the short-term and long-term profit of a learning model: parameters that contribute little to remembering former task domains are freed to learn future tasks, while parameters that effectively encode knowledge about previous tasks are preserved to retain those memories. SPP measures parameter importance by information entropy in a label-free manner. Experiments on several tasks show that the SPP model achieves the best performance compared with other state-of-the-art methods. The results also indicate that our method is less sensitive to hyper-parameters and generalizes better. Our research suggests that a softer strategy, i.e. approximate optimization or a sub-optimal solution, helps alleviate the memory dilemma. The source code is available at https://github.com/lehaifeng/Learning_by_memory.
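As a rough illustration of the idea, the sketch below (in PyTorch) estimates a label-free importance score for every parameter and turns it into a soft mask; the mask then weights a quadratic penalty that discourages important parameters from drifting while leaving the rest free for the next task. The helper names estimate_importance and spp_penalty, the output-norm sensitivity measure, and the mask formula are illustrative assumptions, not the exact formulation used in the paper.

```python
import torch


def estimate_importance(model, data_loader, device="cpu"):
    # Accumulate, without using labels, how sensitive the network output is
    # to each parameter: backprop the L2 norm of the outputs and sum |grad|.
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.to(device).eval()
    for x, _ in data_loader:                      # labels are ignored (label-free)
        model.zero_grad()
        model(x.to(device)).norm(p=2).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach().abs()
    # Turn the accumulated sensitivities into a soft mask in [0, 1]:
    # parameters carrying a small share of the total sensitivity get a mask
    # near 0 (free to change), dominant ones a mask near 1 (protected).
    masks = {}
    for n, imp in importance.items():
        share = imp / (imp.sum() + 1e-12)
        masks[n] = 1.0 - torch.exp(-share * imp.numel())
    return masks


def spp_penalty(model, old_params, masks, strength=100.0):
    # Quadratic drift penalty, weighted by the soft mask, that keeps
    # important parameters close to their values after the previous task.
    loss = torch.zeros((), device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        loss = loss + (masks[n] * (p - old_params[n]) ** 2).sum()
    return strength * loss
```

In such a setup, one would snapshot the weights and compute the masks after finishing each task, then train the next task on its own loss plus spp_penalty, so low-importance parameters stay free to adapt while high-importance ones drift only slightly.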

Related research

12/19/2019 - Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation
Enabling a neural network to sequentially learn multiple tasks is of gre...

11/27/2017 - Memory Aware Synapses: Learning what (not) to forget
Humans can learn in a continuous manner. Old rarely utilized knowledge c...

07/19/2022 - Incremental Task Learning with Incremental Rank Updates
Incremental Task learning (ITL) is a category of continual learning that...

05/20/2019 - Continual Learning in Deep Neural Network by Using a Kalman Optimiser
Learning and adapting to new distributions or learning new tasks sequent...

06/26/2023 - Parameter-Level Soft-Masking for Continual Learning
Existing research on task incremental learning in continual learning has...

09/20/2023 - Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning
Catastrophic forgetting remains a critical challenge in the field of con...

01/24/2022 - PaRT: Parallel Learning Towards Robust and Transparent AI
This paper takes a parallel learning approach for robust and transparent...
