Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework

05/19/2022
by   Gehui Shen, et al.

Continual learning requires a model to retain previously learned knowledge while continually learning from a non-i.i.d. data stream. Because of its single-pass training setting, online continual learning is especially challenging, yet it is closer to real-world scenarios in which quick adaptation to new data is desirable. In this paper, we focus on the online class-incremental learning setting, in which new classes emerge over time. Almost all existing methods are replay-based and rely on a softmax classifier. However, the inherent logits-bias problem of the softmax classifier is a main cause of catastrophic forgetting, and existing remedies are not applicable to online settings. To bypass this problem, we abandon the softmax classifier and propose a novel generative framework built on the feature space. In our framework, a generative classifier that utilizes the replay memory is used for inference, and the training objective is a pair-based metric-learning loss that is proven theoretically to optimize the feature space in a generative way. To further improve the ability to learn new data, we propose a hybrid of generative and discriminative losses to train the model. Extensive experiments on several benchmarks, including newly introduced task-free datasets, show that our method outperforms a series of state-of-the-art replay-based methods with discriminative classifiers and reduces catastrophic forgetting consistently by a remarkable margin.
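
To make the two ingredients concrete, below is a minimal PyTorch sketch of (a) a generative-style classifier that infers labels from class prototypes estimated on the replay memory, and (b) a hybrid training loss combining a pair-based metric-learning term with a discriminative term. This is an illustration under stated assumptions, not the paper's implementation: the nearest-prototype form of the generative classifier, the supervised-contrastive-style pair loss, the prototype-softmax discriminative term, and all names (feature_extractor, memory_x, memory_y, hybrid_loss, alpha, temperature) are hypothetical choices made here for clarity.

import torch
import torch.nn.functional as F


def generative_predict(feature_extractor, x, memory_x, memory_y, num_classes):
    # Classify x by comparing its feature to per-class prototypes (class means)
    # estimated from the replay memory -- one simple generative-style classifier
    # over the feature space. Assumes every class seen so far has at least one
    # sample stored in memory.
    with torch.no_grad():
        feats = F.normalize(feature_extractor(x), dim=1)
        mem_feats = F.normalize(feature_extractor(memory_x), dim=1)
        protos = torch.stack([mem_feats[memory_y == c].mean(dim=0)
                              for c in range(num_classes)])
        protos = F.normalize(protos, dim=1)
        # nearest prototype under cosine similarity; no softmax head is used
        return (feats @ protos.t()).argmax(dim=1)


def hybrid_loss(feats, labels, protos, temperature=0.1, alpha=0.5):
    # Pair-based metric-learning term (supervised-contrastive style) on the
    # feature space, mixed with a discriminative term. Here the discriminative
    # part is a softmax over similarities to the class prototypes -- an
    # assumption for illustration, not necessarily the paper's exact loss.
    feats = F.normalize(feats, dim=1)

    sim = feats @ feats.t() / temperature
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                      # a sample is not its own positive
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    denom = torch.logsumexp(sim.masked_fill(self_mask, float('-inf')),
                            dim=1, keepdim=True)    # normalize over non-self pairs
    log_prob = sim - denom
    gen_loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)

    proto_logits = feats @ F.normalize(protos, dim=1).t() / temperature
    dis_loss = F.cross_entropy(proto_logits, labels)

    return alpha * gen_loss.mean() + (1 - alpha) * dis_loss

A training step under this sketch would sample a batch from the stream together with replayed examples, compute their features, estimate prototypes from the memory features, and minimize hybrid_loss; at test time, predictions come from generative_predict rather than from a learned softmax head.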

