Continual Learning with Scaled Gradient Projection

02/02/2023
by Gobinda Saha, et al.

In neural networks, continual learning results in gradient interference among sequential tasks, leading to catastrophic forgetting of old tasks while learning new ones. Recent methods address this issue by storing the important gradient spaces of old tasks and updating the model orthogonally to them during new tasks. However, such restrictive orthogonal updates hamper learning on new tasks, resulting in suboptimal performance. To improve learning on new tasks while minimizing forgetting, in this paper we propose a Scaled Gradient Projection (SGP) method that combines orthogonal gradient projections with scaled gradient steps along the important gradient spaces of past tasks. The degree of gradient scaling along these spaces depends on the importance of the bases spanning them. We propose an efficient method for computing and accumulating the importance of these bases using the singular value decomposition of the input representations of each task. We conduct extensive experiments ranging from continual image classification to reinforcement learning tasks and report better performance with lower training overhead than state-of-the-art approaches.
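As a rough illustration of the update rule the abstract describes, below is a minimal NumPy sketch, not the authors' implementation. The helper names, the energy threshold, and the normalization used to turn singular values into importance weights are assumptions made for illustration only.

```python
import numpy as np

def important_basis_and_importance(reps, energy_threshold=0.97):
    """Hypothetical helper: from a matrix of input representations
    (one column per sample), extract an orthonormal basis of the
    important gradient space and a per-basis importance in [0, 1]
    via singular value decomposition."""
    U, S, _ = np.linalg.svd(reps, full_matrices=False)
    # Keep the leading singular directions that capture most of the
    # representation energy (the threshold is an illustrative choice).
    energy = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(energy, energy_threshold)) + 1
    basis = U[:, :k]                           # (d, k), orthonormal columns
    importance = S[:k]**2 / np.max(S[:k]**2)   # assumed normalization to [0, 1]
    return basis, importance

def sgp_step(grad, basis, importance):
    """Scaled gradient projection of a flattened gradient `grad`.
    Components along highly important bases (importance ~ 1) are
    removed almost entirely, as in orthogonal projection; components
    along less important bases are only partially suppressed."""
    coeffs = basis.T @ grad                     # components along stored bases
    return grad - basis @ (importance * coeffs) # scale down important directions
```

With `importance` identically 1, `sgp_step` reduces to the fully orthogonal projection of prior methods; importance values below 1 permit scaled learning along the stored directions, which is the stability-plasticity trade-off the method targets.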

Related research

03/17/2021 · Gradient Projection Memory for Continual Learning
The ability to learn continually without forgetting the past tasks is a ...

05/17/2021 · Layerwise Optimization by Gradient Decomposition for Continual Learning
Deep neural networks achieve state-of-the-art and sometimes super-human ...

10/15/2019 · Orthogonal Gradient Descent for Continual Learning
Neural networks are achieving state of the art and sometimes super-human...

07/25/2022 · Balancing Stability and Plasticity through Advanced Null Space in Continual Learning
Continual learning is a learning paradigm that learns tasks sequentially...

10/09/2021 · Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning
The backpropagation networks are notably susceptible to catastrophic for...

05/30/2023 · Learning without Forgetting for Vision-Language Models
Class-Incremental Learning (CIL) or continual learning is a desired capa...

11/28/2022 · Progressive Learning without Forgetting
Learning from changing tasks and sequential experience without forgettin...
