Generative replay with feedback connections as a general strategy for continual learning

09/27/2018
by Gido M. van de Ven, et al.

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as "soft targets") achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.
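As a rough illustration of the generative-replay-with-distillation idea described above, the sketch below shows one training step that mixes current-task data with replayed samples. It assumes a classifier `model`, a frozen copy `prev_model` of the classifier from before the current task, and a generator trained on previous tasks (the paper uses a VAE) with a hypothetical `sample()` method; these names and the equal loss weighting are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of generative replay
# with distillation, in PyTorch.
import torch
import torch.nn.functional as F

def replay_distillation_loss(model, prev_model, generator, batch_size, T=2.0):
    """Distillation loss on replayed (generated) inputs.

    The previous model's class probabilities on generated samples serve as
    "soft targets" for the current model, softened by temperature T.
    """
    with torch.no_grad():
        x_replay = generator.sample(batch_size)   # hypothetical sample() API
        soft_targets = F.softmax(prev_model(x_replay) / T, dim=1)
    log_probs = F.log_softmax(model(x_replay) / T, dim=1)
    # Cross-entropy against soft targets; T**2 rescales gradient magnitude
    # (Hinton et al., 2015).
    return -(soft_targets * log_probs).sum(dim=1).mean() * T ** 2

def training_step(model, prev_model, generator, x_new, y_new, optimizer):
    """One update on current-task data plus replayed data from past tasks."""
    optimizer.zero_grad()
    loss_current = F.cross_entropy(model(x_new), y_new)
    loss_replay = replay_distillation_loss(model, prev_model, generator,
                                           batch_size=x_new.size(0))
    # Equal weighting of the two terms is one simple choice; the paper's
    # exact weighting may differ.
    loss = 0.5 * (loss_current + loss_replay)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the Replay-through-Feedback variant the abstract describes, the separate generator disappears: the classifier itself is given generative feedback connections, so a single network both classifies and produces the replayed samples, which is what shortens training time.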


Related research

Three scenarios for continual learning (04/15/2019)
Standard artificial neural networks suffer from the well-known issue of ...

Generative Feature Replay with Orthogonal Weight Modification for Continual Learning (05/07/2020)
The ability of intelligent agents to learn and remember multiple tasks s...

Marginal Replay vs Conditional Replay for Continual Learning (10/29/2018)
We present a new replay-based method of continual classification learnin...

Shared and Private VAEs with Generative Replay for Continual Learning (05/17/2021)
Continual learning tries to learn new tasks without forgetting previousl...

Progressive Latent Replay for efficient Generative Rehearsal (07/04/2022)
We introduce a new method for internal replay that modulates the frequen...

Continual evaluation for lifelong learning: Identifying the stability gap (05/26/2022)
Introducing a time dependency on the data generating distribution has pr...

Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models (02/16/2021)
Continual (or "incremental") learning approaches are employed when addit...
