Match What Matters: Generative Implicit Feature Replay for Continual Learning

06/09/2021
by   Kevin Thandiackal, et al.

Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. To prevent forgetting, most existing methods retain a small subset of previously seen samples, which can then be used for joint training with new tasks. While this is effective, it may not always be possible to store such samples, e.g., due to data protection regulations. In these cases, one can instead employ generative models to create artificial samples or features that represent memories of previous tasks. Following a similar direction, we propose GenIFeR (Generative Implicit Feature Replay) for class-incremental learning. The main idea is to train a generative adversarial network (GAN) to generate images that contain realistic features. While the generator creates images at full resolution, the discriminator only sees the corresponding features extracted by the continually trained classifier. Since the classifier compresses raw images into the features that are actually relevant for classification, the GAN can match this target distribution more accurately. On the other hand, allowing the generator to create full-resolution images has several benefits: in contrast to previous approaches, the feature extractor of the classifier does not have to be frozen. In addition, we can apply augmentations to the generated images, which not only boosts classification performance but also mitigates discriminator overfitting during GAN training. We empirically show that GenIFeR is superior to both conventional generative image replay and feature replay. In particular, we significantly outperform the state of the art in generative replay across various settings on the CIFAR-100 and CUB-200 datasets.
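To make the core idea concrete, here is a minimal toy sketch of "implicit" feature replay: the generator produces full-resolution images, but real and generated samples are both passed through the classifier's feature extractor before reaching the discriminator, so the adversarial game is played in feature space. All networks here are tiny random linear maps and all dimensions are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
IMG_DIM, FEAT_DIM, Z_DIM = 64, 16, 8

# Classifier's feature extractor: compresses raw images into the
# features relevant for classification. In GenIFeR this network keeps
# training; here it is a fixed random linear map + ReLU for illustration.
W_feat = rng.normal(size=(FEAT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

def extract_features(x):
    return np.maximum(W_feat @ x, 0.0)

# Generator maps noise to a FULL-resolution image, which also allows
# standard image augmentations before feature extraction.
W_gen = rng.normal(size=(IMG_DIM, Z_DIM)) / np.sqrt(Z_DIM)

def generate(z):
    return W_gen @ z

# Discriminator never sees raw pixels -- only extracted features.
w_disc = rng.normal(size=FEAT_DIM) / np.sqrt(FEAT_DIM)

def discriminate(f):
    return 1.0 / (1.0 + np.exp(-w_disc @ f))  # probability "real"

real_image = rng.normal(size=IMG_DIM)
fake_image = generate(rng.normal(size=Z_DIM))

# Key point: both branches are compared in FEATURE space, so the GAN
# only has to match the distribution the classifier actually uses.
d_real = discriminate(extract_features(real_image))
d_fake = discriminate(extract_features(fake_image))

# Standard non-saturating generator objective on the feature-space score.
g_loss = -np.log(d_fake + 1e-8)
```

A real implementation would use convolutional networks and alternate discriminator/generator updates; the sketch only shows where the feature extractor sits in the data path.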


research
04/20/2020

Generative Feature Replay For Class-Incremental Learning

Humans are capable of learning new tasks without forgetting previous one...
research
09/06/2018

Memory Replay GANs: learning to generate images from new categories without forgetting

Previous works on sequential learning address the problem of forgetting ...
research
12/02/2019

Is Discriminator a Good Feature Extractor?

Discriminator from generative adversarial nets (GAN) has been used by so...
research
02/16/2019

DC-Al GAN: Pseudoprogression and True Tumor Progression of Glioblastoma multiform Image Classification Based On DCGAN and Alexnet

Glioblastoma multiform (GBM) is a kind of head tumor with an extraordina...
research
11/27/2019

GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal

Pseudo-rehearsal allows neural networks to learn a sequence of tasks wit...
research
05/19/2023

AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning

Continual learning aims to enable a model to incrementally learn knowled...
research
05/24/2017

Continual Learning with Deep Generative Replay

Attempts to train a comprehensive artificial intelligence capable of sol...
