Distillation Techniques for Pseudo-rehearsal Based Incremental Learning

07/08/2018
by Haseeb Shah, et al.

The ability to learn from incrementally arriving data is essential for any life-long learning system. However, when trained on such data, standard deep neural networks forget knowledge of old tasks, a phenomenon called catastrophic forgetting. We discuss the biases in current Generative Adversarial Network (GAN) based approaches that learn a classifier by knowledge distillation from previously trained classifiers, biases that cause the resulting classifier to perform poorly. We propose an approach that removes these biases by distilling knowledge from the auxiliary classifier of an AC-GAN. Experiments on MNIST and CIFAR10 show that this method is comparable to current state-of-the-art rehearsal-based approaches. The code for this paper is available at https://github.com/haseebs/Pseudo-rehearsal-Incremental-Learning.
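To make the method concrete, below is a minimal PyTorch sketch of pseudo-rehearsal with distillation from an AC-GAN's auxiliary classifier head. The names (G, C_old, student) and all hyperparameters are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft cross-entropy between temperature-scaled distributions
    # (Hinton-style distillation); the T**2 factor rescales gradients.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean() * (T ** 2)

def train_step(student, G, C_old, x_new, y_new, opt,
               n_old_classes, z_dim=100, batch=64, lam=1.0):
    # 1) Pseudo-rehearsal: synthesize samples of the old classes with the
    #    (hypothetical) conditional generator G of a trained AC-GAN.
    z = torch.randn(batch, z_dim)
    y_old = torch.randint(0, n_old_classes, (batch,))
    with torch.no_grad():
        x_old = G(z, y_old)            # class-conditional fake images
        teacher_logits = C_old(x_old)  # AC-GAN auxiliary classifier as teacher

    # 2) Joint objective: distill old knowledge on generated data and fit
    #    the new task with ordinary cross-entropy on real data.
    loss = (distillation_loss(student(x_old), teacher_logits)
            + lam * F.cross_entropy(student(x_new), y_new))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

The point of difference from earlier GAN-based pseudo-rehearsal is the choice of teacher: the soft targets come from the AC-GAN's own auxiliary classifier rather than from the previously trained task classifier, which the abstract argues removes the biases that degrade the distilled classifier.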

Related research

07/08/2018
Revisiting Distillation and Incremental Classifier Learning
One of the key differences between the learning mechanism of humans and ...

07/05/2019
Incremental Concept Learning via Online Generative Memory Recall
The ability to learn more and more concepts over time from incrementally...

04/17/2021
On Learning the Geodesic Path for Incremental Learning
Neural networks notoriously suffer from the problem of catastrophic forg...

12/27/2019
DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier
In this era of digital information explosion, an abundance of data from ...

02/08/2019
EILearn: Learning Incrementally Using Previous Knowledge Obtained From an Ensemble of Classifiers
We propose an algorithm for incremental learning of classifiers. The pro...

02/02/2018
Incremental Classifier Learning with Generative Adversarial Networks
In this paper, we address the incremental classifier learning problem, w...

04/20/2023
eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation
Class-Incremental Learning (CIL) aims to solve the neural networks' cata...
