Revisiting Distillation and Incremental Classifier Learning

07/08/2018
by Khurram Javed, et al.

One of the key differences between the learning mechanism of humans and Artificial Neural Networks (ANNs) is the ability of humans to learn one task at a time. ANNs, on the other hand, can only learn multiple tasks simultaneously; any attempt to learn new tasks incrementally causes them to completely forget previous ones. This inability to learn incrementally, known as Catastrophic Forgetting, is considered a major hurdle in building a true AI system. In this paper, our goal is to isolate the truly effective existing ideas for incremental learning from those that only work under certain conditions. To this end, we first thoroughly analyze the current state-of-the-art method for incremental learning (iCaRL) and demonstrate that its good performance is not due to the reasons presented in the existing literature. We conclude that the success of iCaRL is primarily due to knowledge distillation, and we identify a key limitation of knowledge distillation, i.e., it often leads to bias in classifiers. Finally, we propose a dynamic threshold moving algorithm that successfully removes this bias. We demonstrate the effectiveness of our algorithm on the CIFAR100 and MNIST datasets, showing near-optimal results. Our implementation is available at https://github.com/Khurramjaved96/incremental-learning.
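The two ingredients named in the abstract lend themselves to a short illustration. The sketch below is a minimal PyTorch example, not the authors' exact algorithm: the function names, the temperature T, the mixing weight alpha, and the per-class scale vector are illustrative assumptions. It shows (1) a distillation-style loss of the kind the analysis credits for iCaRL's performance, combining soft targets from the frozen previous model on the old classes with cross-entropy on all classes, and (2) a post-hoc threshold-moving step that rescales class probabilities before the argmax to counter the bias toward new classes; how the scaling factors are computed dynamically is the paper's contribution and is not reproduced here.

```python
# Hedged sketch of distillation-based incremental learning with a simple
# threshold-moving correction at prediction time. Illustrative only.
import torch
import torch.nn.functional as F


def incremental_loss(logits, labels, old_logits, n_old, T=2.0, alpha=0.5):
    """Cross-entropy on all classes plus distillation on the old classes.

    logits     : (B, n_old + n_new) outputs of the current model
    old_logits : (B, n_old) outputs of the frozen previous model
    """
    ce = F.cross_entropy(logits, labels)
    # Soft targets from the previous model, temperature-scaled.
    soft_targets = F.softmax(old_logits / T, dim=1)
    log_probs = F.log_softmax(logits[:, :n_old] / T, dim=1)
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    return alpha * distill + (1.0 - alpha) * ce


def threshold_moving_predict(logits, scale):
    """Rescale class probabilities before argmax to counter classifier bias.

    `scale` is a (num_classes,) tensor of per-class factors; choosing these
    factors dynamically is what the paper proposes and is not shown here.
    """
    probs = F.softmax(logits, dim=1) * scale
    return probs.argmax(dim=1)


# Toy usage with random tensors (2 old classes, 3 new classes).
if __name__ == "__main__":
    B, n_old, n_new = 4, 2, 3
    logits = torch.randn(B, n_old + n_new)
    old_logits = torch.randn(B, n_old)
    labels = torch.randint(0, n_old + n_new, (B,))
    loss = incremental_loss(logits, labels, old_logits, n_old)
    preds = threshold_moving_predict(logits, torch.ones(n_old + n_new))
    print(loss.item(), preds.tolist())
```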
