Adiabatic replay for continual learning

03/23/2023
by Alexander Krawczyk, et al.

Conventional replay-based approaches to continual learning (CL) require, for each learning phase with new data, the replay of samples representing all previously learned knowledge in order to avoid catastrophic forgetting. Since the amount of learned knowledge grows over time in CL problems, generative replay spends an increasing amount of time just re-learning what is already known. In this proof-of-concept study, we propose a replay-based CL strategy that we term adiabatic replay (AR), which derives its efficiency from the (reasonable) assumption that each new learning phase is adiabatic, i.e., represents only a small addition to existing knowledge. Instead of replaying all prior knowledge, each new learning phase triggers a sampling process that selectively replays, from the body of existing knowledge, only those samples that are similar to the new data. Complete replay is not required because AR represents the data distribution with GMMs, which can selectively update their internal representation only where data statistics have changed. As long as additions are adiabatic, the number of samples to be replayed need not depend on the amount of previously acquired knowledge at all. We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
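The selective-replay idea can be sketched with scikit-learn's GaussianMixture. This is an illustrative toy, not the authors' implementation: selecting components by average responsibility for the new data, and the warm-started refit standing in for the paper's selective GMM updates, are our own assumptions for the sketch.

```python
# Toy sketch of selective ("adiabatic") replay with a GMM.
# Hypothetical: component selection and refit scheme are illustrative choices,
# not the method as published.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Phase 1: learn existing knowledge (two well-separated clusters).
old_data = np.concatenate([
    rng.normal(loc=-5.0, scale=0.5, size=(200, 2)),
    rng.normal(loc=+5.0, scale=0.5, size=(200, 2)),
])
gmm = GaussianMixture(n_components=4, random_state=0, warm_start=True)
gmm.fit(old_data)

# Phase 2: a small ("adiabatic") addition near the first cluster only.
new_data = rng.normal(loc=(-5.0, 2.0), scale=0.5, size=(50, 2))

# Selective replay: find the components most responsible for the new
# data, and draw replay samples only from those components.
resp = gmm.predict_proba(new_data).mean(axis=0)         # avg responsibility
active = np.flatnonzero(resp > 1.0 / gmm.n_components)  # above-uniform mass

def sample_from_components(gmm, components, n):
    """Draw n samples restricted to the selected mixture components."""
    weights = gmm.weights_[components]
    weights = weights / weights.sum()
    counts = rng.multinomial(n, weights)
    samples = [
        rng.multivariate_normal(gmm.means_[c], gmm.covariances_[c], size=k)
        for c, k in zip(components, counts) if k > 0
    ]
    return np.concatenate(samples)

replay = sample_from_components(gmm, active, n=len(new_data))

# Key point of AR: the replay set scales with the new data, not with
# all previously acquired knowledge.
mixed = np.concatenate([new_data, replay])
gmm.fit(mixed)  # warm-started EM as a stand-in for selective GMM updates
```

Note that the replay set here has the same size as the new data and is drawn only from the region of the learned distribution that the new data touches; components representing unrelated knowledge are never sampled.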


Related research

05/24/2017  Continual Learning with Deep Generative Replay
02/25/2019  S-TRIGGER: Continual State Representation Learning via Self-Triggered Generative Replay
06/03/2019  Continual Learning of New Sound Classes using Generative Replay
06/22/2020  Automatic Recall Machines: Internal Replay, Continual Learning and the Brain
06/23/2022  Sample Condensation in Online Continual Learning
10/17/2022  Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator
07/08/2023  Integrating Curricula with Replays: Its Effects on Continual Learning
