
Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning

by Danruo Deng, et al.
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Tianjin University
The Chinese University of Hong Kong

Backpropagation networks are notably susceptible to catastrophic forgetting: they tend to forget previously learned skills when learning new ones. To address this 'sensitivity-stability' dilemma, most previous efforts have been devoted to minimizing the empirical risk with different parameter regularization terms and episodic memory, but have rarely explored the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario, and, based on this analysis, we propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during training, so that less important bases can be dynamically released to improve sensitivity to new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regulating the flatness of the weight loss landscape over all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines, learning new skills effectively while alleviating forgetting.
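The two ingredients above can be sketched in a few lines of numpy. This is our own illustrative reading of the abstract, not the paper's exact formulation: `soft_projected_gradient` removes past-task gradient components from a new-task gradient, scaled per-basis by soft importance weights (fixed here, but adaptively learned in FS-DGPM), and `flatten_sharpness_step` shows a SAM-style worst-case weight perturbation as one plausible realization of sharpness flattening. Function names and the sigmoid parameterization of the soft weights are assumptions.

```python
import numpy as np

def soft_projected_gradient(g, M, lam):
    """Project gradient g away from the past-task bases (columns of M),
    with each basis scaled by a soft importance weight sigmoid(lam).
    A weight near 1 protects the basis (hard GPM projection); a weight
    near 0 releases it so new-task learning can use that direction."""
    w = 1.0 / (1.0 + np.exp(-lam))   # soft weights in (0, 1)
    coeffs = M.T @ g                 # coordinates of g in the memory basis
    return g - M @ (w * coeffs)      # remove the weighted components

def flatten_sharpness_step(theta, grad_fn, rho=0.05):
    """SAM-style sharpness flattening: evaluate the gradient at a
    worst-case perturbed point theta + rho * g / ||g||, which encourages
    convergence to flat regions of the weight loss landscape."""
    g = grad_fn(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return grad_fn(theta + eps)

# Toy example: two orthonormal memory bases in R^4.
M = np.eye(4)[:, :2]
g = np.array([1.0, 2.0, 3.0, 4.0])
lam = np.array([10.0, -10.0])  # basis 1 important, basis 2 nearly released
g_proj = soft_projected_gradient(g, M, lam)  # ~[0, 2, 3, 4]
```

With `lam = [10, -10]`, the first basis direction is almost fully projected out while the second is almost fully released, illustrating how learned soft weights interpolate between strict GPM (all weights 1) and unconstrained learning (all weights 0).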

