Logarithmic Continual Learning

01/17/2022
by Wojciech Masarczyk, et al.

We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples arrive in a sequence of tasks, and the trained model can access only a single task at a time. To replay previous samples, contemporary CL methods bootstrap generative models and train them recursively on a combination of current and regenerated past data. This recurrence leads to superfluous computation, since the same past samples are regenerated after every task, and their reconstruction quality degrades with each pass. In this work, we address these limitations and propose a new generative rehearsal architecture that requires at most a logarithmic number of retraining steps for each sample. Our approach allocates past data across a set of generative models such that most of them do not need to be retrained after a new task. An experimental evaluation of our logarithmic continual learning approach shows that it outperforms state-of-the-art generative rehearsal methods.
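To make the scaling intuition concrete, the sketch below shows one hypothetical way such an allocation could be scheduled: a binary-counter-style merge of task pools, in which equal-sized pools are combined and only the generator covering the merged pool is retrained. This is an illustration of the logarithmic bound, not the paper's actual architecture; the function and variable names (`add_task`, `levels`, `retrains`) are assumptions introduced here purely for exposition.

```python
# Hypothetical sketch: a binary-counter-style allocation of tasks to a pool
# of generative models. levels[i] stands for one generator trained on 2**i
# tasks; equal-sized pools are merged and only the merged pool's generator
# is retrained, so each task's data is regenerated at most ~log2(T) times.

def add_task(levels: dict[int, list[int]], retrains: dict[int, int], task: int) -> None:
    """Insert a new task into the pool structure.

    Whenever two pools of equal size would coexist, they are merged into a
    single pool twice as large, and one generator is retrained on the merged
    (regenerated) data. retrains[t] counts how often task t's data has been
    part of such a retraining.
    """
    carry = [task]
    level = 0
    while level in levels:                 # a same-sized pool already exists:
        carry = levels.pop(level) + carry  # merge it with the carried tasks
        level += 1                         # and promote to the next level
    levels[level] = carry
    for t in carry:                        # every task in the merged pool is
        retrains[t] = retrains.get(t, 0) + 1  # regenerated/relearned once


if __name__ == "__main__":
    levels: dict[int, list[int]] = {}
    retrains: dict[int, int] = {}
    num_tasks = 64
    for task in range(1, num_tasks + 1):
        add_task(levels, retrains, task)
    # Prints 7 == 1 + log2(64): logarithmic, not linear, in the number of tasks.
    print("max retrainings per task:", max(retrains.values()))
```

Under naive recursive self-rehearsal, the data of the first task would be regenerated after each of the remaining 63 tasks; with the merge schedule above it is touched only after tasks 1, 2, 4, 8, 16, 32, and 64, which is the at-most-logarithmic retraining behaviour the abstract refers to.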


