A Strategy for an Uncompromising Incremental Learner

05/02/2017
by   Ragav Venkatesan, et al.

Multi-class supervised learning systems require knowledge of the entire range of labels they predict. When learnt incrementally, they often suffer from catastrophic forgetting. Typical workarounds relax the philosophy of incremental learning, either by freezing part of the machine so it cannot learn, or by retraining the machine on a selection of the historic data. While these hacks work to varying degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations or assumptions. We design a strategy that combines generative models with the distillation of dark knowledge to hallucinate data, along with appropriate targets, from past distributions. We call this technique phantom sampling, and show that it avoids catastrophic forgetting during incremental learning. Using an implementation based on deep neural networks, we apply this strategy to competitive multi-class incremental learning and demonstrate on various benchmark datasets that strict incremental learning can be achieved. We further put the strategy to the test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.
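The phantom-sampling idea described above can be sketched in a few lines: a generative model fit to the old phase's data hallucinates inputs from past distributions, and the frozen old classifier labels them with tempered soft outputs (dark knowledge), so the new classifier trains on a mix of real new-class data and phantom old-class data without ever revisiting the historic set. The sketch below is a minimal toy, not the paper's implementation: the "generator" is a per-class Gaussian stand-in for the paper's generative network, the classifier is a linear softmax model, and all names (`train`, `phantom_sample`, the temperature `T=2.0`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, targets, n_classes=3, epochs=500, lr=0.5):
    # Softmax regression on (possibly soft) target rows via gradient descent.
    W = np.zeros((X.shape[1] + 1, n_classes))
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        p = softmax(Xb @ W)
        W -= lr * Xb.T @ (p - targets) / len(X)
    return W

def acc(W, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (softmax(Xb @ W).argmax(1) == y).mean()

# --- Phase 1: classes 0 and 1 are the only data ever seen directly ---
X1 = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y1 = np.array([0] * 100 + [1] * 100)
W_old = train(X1, np.eye(3)[y1])  # old classifier, later frozen

# "Generator": per-class Gaussian fit to phase-1 data (a toy stand-in
# for the generative model the paper trains alongside the classifier).
mu = [X1[y1 == c].mean(0) for c in (0, 1)]
sd = [X1[y1 == c].std(0) for c in (0, 1)]

# --- Phase 2: class 2 arrives; phase-1 data is no longer available ---
X2 = rng.normal(0, 0.5, (100, 2)) + np.array([0.0, 4.0])
y2 = np.full(100, 2)

# Phantom sampling: hallucinate old-distribution inputs, then label them
# with the old model's temperature-softened outputs (dark knowledge).
Xp = np.vstack([rng.normal(mu[c], sd[c], (100, 2)) for c in (0, 1)])
Xpb = np.hstack([Xp, np.ones((len(Xp), 1))])
soft_targets = softmax(Xpb @ W_old, T=2.0)

# Train the new classifier on real new data plus phantom old data.
X_mix = np.vstack([X2, Xp])
t_mix = np.vstack([np.eye(3)[y2], soft_targets])
W_new = train(X_mix, t_mix)

print("old-task accuracy:", acc(W_new, X1, y1))
print("new-task accuracy:", acc(W_new, X2, y2))
```

In this toy setting the new classifier retains high accuracy on the phase-1 classes despite never seeing their data again, which is the behavior phantom sampling is designed to preserve; omitting the phantom rows from `X_mix` reproduces catastrophic forgetting.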


Related research

- 02/27/2020, Brain-Inspired Model for Incremental Learning Using a Few Examples: Incremental learning attempts to develop a classifier which learns conti...
- 08/05/2021, Quantum Continual Learning Overcoming Catastrophic Forgetting: Catastrophic forgetting describes the fact that machine learning models ...
- 03/29/2019, Incremental Learning with Unlabeled Data in the Wild: Deep neural networks are known to suffer from catastrophic forgetting in...
- 11/17/2017, Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning: Deep lifelong learning systems need to efficiently manage resources to s...
- 01/07/2022, An Incremental Learning Approach to Automatically Recognize Pulmonary Diseases from the Multi-vendor Chest Radiographs: Pulmonary diseases can cause severe respiratory problems, leading to sud...
- 10/04/2021, Incremental Class Learning using Variational Autoencoders with Similarity Learning: Catastrophic forgetting in neural networks during incremental learning r...
- 07/29/2022, Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Few-shot Incremental Image Synthesis: The capacity to learn incrementally from an online stream of data is an ...
