Generative Feature Replay with Orthogonal Weight Modification for Continual Learning

05/07/2020
by   Gehui Shen, et al.

The ability of intelligent agents to learn and remember multiple tasks sequentially is crucial to achieving artificial general intelligence. Catastrophic forgetting notoriously impedes the sequential learning of neural networks because the data of previous tasks are unavailable, and many continual learning (CL) methods have been proposed to overcome it. In this paper we focus on class incremental learning, a challenging CL scenario in which the classes of each task are disjoint and task identity is unknown at test time. For this scenario, generative replay is an effective strategy: pseudo data for previous tasks are generated and replayed to alleviate catastrophic forgetting. However, learning a generative model continually is itself nontrivial for relatively complex data. Building on the recently proposed orthogonal weight modification (OWM) algorithm, which keeps previously learned input-output mappings approximately invariant while learning new tasks, we propose to generate and replay features directly instead of raw inputs. Empirical results on image and text datasets show that our method improves OWM consistently and by a significant margin, whereas conventional generative replay consistently degrades performance. Our method also beats a state-of-the-art generative replay method and is competitive with a strong baseline that stores real data.
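The core mechanism the paper builds on, OWM, projects each gradient update onto the subspace orthogonal to the inputs seen on earlier tasks, so old input-output mappings are left (approximately) untouched. The following is a minimal NumPy sketch of that idea for a single linear layer; the class name, hyperparameters, and the recursive projector update shown here are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

class OWMLinear:
    """Sketch of a linear layer trained with orthogonal weight
    modification (OWM): backprop gradients are multiplied by a
    projector P that is (approximately) orthogonal to the span of
    inputs from earlier training, so outputs on old inputs barely
    change when new tasks are learned. Illustrative only."""

    def __init__(self, in_dim, out_dim, alpha=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (out_dim, in_dim))
        self.P = np.eye(in_dim)   # projector, starts as identity
        self.alpha = alpha        # regularizer in the recursive update

    def forward(self, x):
        return self.W @ x

    def step(self, x, grad_out, lr=0.1):
        """x: layer input; grad_out: gradient w.r.t. the layer output."""
        grad_W = np.outer(grad_out, x)   # ordinary backprop gradient
        # Project the update away from the input subspace of old data
        self.W -= lr * grad_W @ self.P
        # RLS-style recursive update of the projector with this input
        Px = self.P @ x
        self.P -= np.outer(Px, Px) / (self.alpha + x @ Px)
```

After training on a first input direction, `P` maps that direction nearly to zero, so later updates driven by new inputs leave the old mapping intact; the paper's contribution is to pair this with a feature generator so that replayed features, rather than replayed raw data, consolidate the classifier.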

