Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

01/30/2018
by   Arslan Chaudhry, et al.

We study the incremental learning problem for the classification task, a key component in developing life-long learning systems. The main challenges when learning incrementally are to preserve and update the knowledge of the model. In this work, we propose a generalization of Path Integral (Zenke et al., 2017) and EWC (Kirkpatrick et al., 2016) with a theoretically grounded KL-divergence based perspective. We show that, to preserve and update the knowledge, regularizing the model's likelihood distribution is more intuitive and provides better insights into the problem. To do so, we use KL-divergence as a measure of distance, which is equivalent to computing distance in a Riemannian manifold induced by the Fisher information matrix. Furthermore, to enhance learning flexibility, the regularization is weighted by a parameter importance score that is accumulated along the entire training trajectory. In contrast to forgetting, as training progresses the regularized loss makes the network intransigent, i.e., unable to discriminate new tasks from old ones. We show that this intransigence can be addressed by storing a small subset of representative samples from previous datasets. In addition, to evaluate the performance of an incremental learning algorithm, we introduce two novel metrics measuring forgetting and intransigence. Experimental evaluation on incremental versions of the MNIST and CIFAR-100 classification datasets shows that our approach outperforms existing state-of-the-art baselines on all evaluation metrics.
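The regularized objective the abstract describes can be sketched in a few lines: the new-task loss plus a quadratic penalty on parameter drift, weighted per parameter by the diagonal Fisher information combined with a path-integral importance score accumulated along the training trajectory. The sketch below is a minimal illustration under these assumptions; all function and variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

def update_path_scores(scores, grads, delta_theta, fisher, eps=1e-8):
    """Accumulate per-parameter importance along the training trajectory.

    Each step's contribution attributes the change in loss to each
    parameter (-grad * step) and normalizes by the Fisher-induced squared
    step length, in the spirit of Path Integral (Zenke et al., 2017).
    Negative contributions are clipped to keep scores non-negative."""
    contribution = -grads * delta_theta / (0.5 * fisher * delta_theta**2 + eps)
    return scores + np.maximum(contribution, 0.0)

def regularized_loss(new_task_loss, theta, theta_old, fisher, scores, lam=1.0):
    """Total objective: new-task loss plus an importance-weighted penalty
    on drifting away from the parameters learned on previous tasks."""
    penalty = np.sum((fisher + scores) * (theta - theta_old) ** 2)
    return new_task_loss + lam * penalty
```

In use, `update_path_scores` would be called after every optimizer step during training on a task, and `regularized_loss` replaces the plain loss when training on each subsequent task; the stored subset of old samples is simply mixed into the new task's minibatches.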


