IB-DRR: Incremental Learning with Information-Back Discrete Representation Replay

04/21/2021
by   Jian Jiang, et al.

Incremental learning aims to enable machine learning models to continuously acquire knowledge of new classes while maintaining the knowledge already learned for old classes. Saving a subset of training samples of previously seen classes in memory and replaying them during new training phases is proven to be an efficient and effective way to fulfil this aim. It is evident that the larger the number of exemplars the model inherits, the better performance it can achieve. However, finding a trade-off between model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning, and solving it is increasingly desirable for real-life applications. In this paper, we approach this open problem with a two-step compression approach. The first step is a lossy compression: we propose to encode input images and save their discrete latent representations in the form of codes that are learned using a hierarchical Vector Quantised Variational Autoencoder (VQ-VAE). In the second step, we further compress the codes losslessly by learning a hierarchical latent variable model with bits-back asymmetric numeral systems (BB-ANS). To compensate for the information lost in the first, lossy compression step, we introduce an Information Back (IB) mechanism that uses real exemplars in a contrastive learning loss to regularize the training of the classifier. By maintaining all seen exemplars' representations in the form of `codes', Discrete Representation Replay (DRR) outperforms the state-of-the-art method on CIFAR-100 by a margin of 4% with a much lower memory cost for saving samples. Incorporated with IB and additionally saving a small set of old raw exemplars, the accuracy of DRR can be further improved by 2%.
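The memory saving in the first compression step comes from storing each exemplar as a grid of small integer indices rather than raw pixels. The sketch below illustrates just that quantization idea in isolation, not the paper's actual model: a toy nearest-neighbour lookup against a learned codebook, as in a VQ-VAE bottleneck. All names, sizes (K=16 codebook entries, D=8 dimensions), and the random data are illustrative assumptions.

```python
import random

def quantize(z_e, codebook):
    """Map each continuous encoder output vector to the index of its
    nearest codebook entry -- the discrete 'code' that would be stored
    and later replayed instead of the raw exemplar."""
    codes, z_q = [], []
    for v in z_e:
        # squared Euclidean distance from v to every codebook entry
        dists = [sum((a - b) ** 2 for a, b in zip(v, e)) for e in codebook]
        k = dists.index(min(dists))
        codes.append(k)          # a small integer index: cheap to keep in memory
        z_q.append(codebook[k])  # the quantized vector a decoder would consume
    return codes, z_q

random.seed(0)
# toy stand-ins: K=16 codebook entries and 4 "encoder outputs", dimension D=8
codebook = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
z_e = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
codes, z_q = quantize(z_e, codebook)
print(codes)  # four indices in [0, 16), one per input vector
```

Because each code is just an index into a shared codebook, the per-exemplar storage is a few bits per latent position, which is what makes a further lossless pass (the BB-ANS step) worthwhile.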


