Incremental Learning via Rate Reduction

11/30/2020
by Ziyang Wu, et al.

Current deep learning architectures suffer from catastrophic forgetting: a failure to retain knowledge of previously learned classes when incrementally trained on new classes. The fundamental roadblock faced by deep learning methods is that the models are optimized as "black boxes," making it difficult to properly adjust the model parameters to preserve knowledge about previously seen data. To overcome the problem of catastrophic forgetting, we propose utilizing an alternative "white-box" architecture derived from the principle of rate reduction, where each layer of the network is explicitly computed without backpropagation. Under this paradigm, we demonstrate that, given a pre-trained network and new data classes, our approach can provably construct a new network that emulates joint training with all past and new classes. Finally, our experiments show that our proposed learning algorithm observes significantly less decay in classification performance, outperforming state-of-the-art methods on MNIST and CIFAR-10 by a large margin and justifying the use of "white-box" algorithms for incremental learning even for sufficiently complex image data.
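The rate reduction principle underlying this "white-box" architecture (the MCR²/ReduNet line of work) scores a feature set by how much coding length the whole set requires minus how compactly each class can be coded on its own; maximizing this gap expands features globally while compressing them within each class. A minimal NumPy sketch of the ΔR objective under the standard formulation is below; the function names and the choice of ε are illustrative, not taken from the paper.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) * Z Z^T)
    for a feature matrix Z of shape (d, n): the rate needed to
    code n features in d dimensions up to precision eps."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the coding rate of
    the whole feature set minus the sample-weighted rates of the
    per-class subsets. Larger values mean classes occupy expanded,
    mutually distinct subspaces."""
    _, n = Z.shape
    r_classwise = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - r_classwise
```

Two classes lying in orthogonal subspaces yield a strictly positive ΔR, while collapsing all classes onto the same features drives it toward zero; each ReduNet layer is constructed in closed form from the gradient of this objective, which is what makes layer-wise, backpropagation-free updates for new classes possible.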


