Continual Learning with Deep Generative Replay

05/24/2017
by Hanul Shin, et al.

Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task. We test our methods in several sequential learning settings involving image classification tasks.
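The generator–solver scheme described above can be sketched in a few lines. The sketch below is an illustration, not the paper's implementation: the `Generator` is a toy model that memorizes and resamples inputs (the paper uses a GAN), the `Solver` is a lookup table standing in for a classifier, and the names `train_task`, `n_replay`, and the example tasks are all hypothetical. It shows the key mechanism: when a new task arrives, pseudo-samples are drawn from the old generator, labeled by the old solver, and interleaved with the new task's data.

```python
from itertools import cycle, islice

class Generator:
    """Toy stand-in for a deep generative model: it memorizes training
    inputs and replays them on demand (a real DGR generator is a GAN)."""
    def __init__(self):
        self.memory = []

    def fit(self, xs):
        self.memory.extend(xs)

    def sample(self, n):
        # Deterministically cycle through memorized inputs.
        return list(islice(cycle(self.memory), n))

class Solver:
    """Toy task solver: a lookup table standing in for a classifier."""
    def __init__(self):
        self.table = {}

    def fit(self, pairs):
        for x, y in pairs:
            self.table[x] = y

    def predict(self, x):
        return self.table.get(x)

def train_task(new_data, old_generator=None, old_solver=None, n_replay=4):
    """One generative-replay step: draw pseudo-inputs x' from the old
    generator, label them with the old solver to get (x', y') pairs,
    interleave them with the new task's data, and train a fresh
    generator and solver on the combined set."""
    replayed = []
    if old_generator is not None:
        xs = old_generator.sample(n_replay)
        replayed = [(x, old_solver.predict(x)) for x in xs]
    combined = replayed + list(new_data)
    gen, sol = Generator(), Solver()
    gen.fit([x for x, _ in combined])
    sol.fit(combined)
    return gen, sol

# Two sequential tasks; no raw task-1 data is kept when training task 2.
gen1, sol1 = train_task([("a", 0), ("b", 0)])
gen2, sol2 = train_task([("c", 1), ("d", 1)], gen1, sol1)
print(sol2.predict("a"))  # 0 -- task-1 knowledge survives via replay
print(sol2.predict("c"))  # 1 -- new task learned as well
```

Only the two models are carried between tasks, which is the framework's point: memory cost is independent of how many tasks (or how much data) came before.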

