Utility-based Perturbed Gradient Descent: An Optimizer for Continual Learning

02/07/2023
by Mohamed Elsayed, et al.

Modern representation learning methods often fail to adapt quickly under non-stationarity because they suffer from catastrophic forgetting and decaying plasticity. These problems prevent fast adaptation to change: the number of saturated features grows, and useful features are forgotten when new experiences are presented. As a result, such methods become ineffective for continual learning. This paper proposes Utility-based Perturbed Gradient Descent (UPGD), an online representation-learning algorithm well suited for continual learning agents that have no knowledge of task boundaries. UPGD protects useful weights or features from forgetting and perturbs less useful ones based on their utilities. Our empirical results show that UPGD alleviates catastrophic forgetting and decaying plasticity, enabling modern representation learning methods to work in the continual learning setting.
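To illustrate the idea of utility-gated updates, the following is a minimal sketch of a UPGD-style step in Python. It is not the authors' exact algorithm: the utility of each weight is assumed here to be a running trace of |weight * gradient| scaled to [0, 1], and the function name upgd_step, the trace decay beta, and the noise scale noise_std are illustrative choices. High-utility weights receive almost no update or perturbation, while less useful ones are both moved by the gradient and perturbed by noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def upgd_step(w, grad, utility_trace, lr=0.01, beta=0.99, noise_std=0.01):
    """One utility-gated perturbed-gradient step (illustrative sketch)."""
    # Running utility estimate; |w * grad| is an assumed proxy for how much
    # each weight contributes to reducing the loss.
    utility_trace = beta * utility_trace + (1.0 - beta) * np.abs(w * grad)
    # Scale utilities to [0, 1]; 1 marks the most useful weight in this layer.
    scaled = utility_trace / (utility_trace.max() + 1e-8)
    # Gate both the gradient and the perturbation by (1 - utility):
    # useful weights are protected from change, less useful weights are
    # updated and perturbed, which helps restore plasticity.
    noise = rng.normal(0.0, noise_std, size=w.shape)
    w = w - lr * (grad + noise) * (1.0 - scaled)
    return w, utility_trace

# Example: a few steps on a simple quadratic loss L(w) = ||w||^2 / 2.
w = rng.normal(size=8)
u = np.zeros_like(w)
for _ in range(100):
    grad = w                      # gradient of the quadratic loss
    w, u = upgd_step(w, grad, u)
```

The gating term (1 - utility) is what distinguishes this sketch from plain perturbed gradient descent: perturbation and learning are concentrated on the weights estimated to be least useful.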


Related research

01/18/2021
Does Continual Learning = Catastrophic Forgetting?
Continual learning is known for suffering from catastrophic forgetting, ...

03/27/2022
Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions
Continual learning is an emerging paradigm in machine learning, wherein ...

10/07/2020
A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix
Continual learning (CL) is a setting in which an agent has to learn from...

05/25/2023
SketchOGD: Memory-Efficient Continual Learning
When machine learning models are trained continually on a sequence of ta...

09/28/2021
Formalizing the Generalization-Forgetting Trade-off in Continual Learning
We formulate the continual learning (CL) problem via dynamic programming...

03/22/2021
Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification
In this work, we study the phenomenon of catastrophic forgetting in the ...

03/03/2022
Provable and Efficient Continual Representation Learning
In continual learning (CL), the goal is to design models that can learn ...
