An Analysis of Initial Training Strategies for Exemplar-Free Class-Incremental Learning

08/22/2023
by Grégoire Petit et al.

Class-Incremental Learning (CIL) aims to build classification models from data streams. At each step of the CIL process, new classes must be integrated into the model. Due to catastrophic forgetting, CIL is particularly challenging when examples from past classes cannot be stored, which is the case we focus on here. To date, most approaches are based exclusively on the target dataset of the CIL process. However, the use of models pre-trained in a self-supervised way on large amounts of data has recently gained momentum. The initial model of the CIL process may be trained using only the first batch of the target dataset, or may also use pre-trained weights obtained on an auxiliary dataset. The choice between these two initial learning strategies can significantly influence the performance of the incremental learning model, but it has not yet been studied in depth. Performance is also influenced by the choice of the CIL algorithm, the neural architecture, the nature of the target task, the distribution of classes in the stream, and the number of examples available for learning. We conduct a comprehensive experimental study to assess the roles of these factors. We present a statistical analysis framework that quantifies the relative contribution of each factor to incremental performance. Our main finding is that the initial training strategy is the dominant factor influencing average incremental accuracy, but that the choice of CIL algorithm is more important in preventing forgetting. Based on this analysis, we propose practical recommendations for choosing the right initial training strategy for a given incremental learning use case. These recommendations are intended to facilitate the practical deployment of incremental learning.
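To make the two initial training strategies concrete, the PyTorch sketch below shows how the first-step model of an exemplar-free CIL run could be built either from scratch or from auxiliary pre-trained weights. This is an illustrative assumption, not the authors' implementation: the ResNet-18 backbone, the `build_initial_model` helper and the `ssl_checkpoint.pth` path are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # assumes torchvision >= 0.13

def build_initial_model(strategy: str, num_first_step_classes: int) -> nn.Module:
    """Build the model used at the first step of an exemplar-free CIL run."""
    backbone = resnet18(weights=None)   # randomly initialised backbone
    feat_dim = backbone.fc.in_features
    backbone.fc = nn.Identity()         # keep only the feature extractor

    if strategy == "pretrained":
        # Initialisation from weights learned (e.g. self-supervised) on a large
        # auxiliary dataset; "ssl_checkpoint.pth" is a placeholder path.
        state = torch.load("ssl_checkpoint.pth", map_location="cpu")
        backbone.load_state_dict(state, strict=False)
    elif strategy != "from_scratch":
        raise ValueError(f"unknown strategy: {strategy}")
    # "from_scratch": the backbone is trained only on the first batch of
    # classes of the target dataset, in the caller's training loop.

    head = nn.Linear(feat_dim, num_first_step_classes)
    return nn.Sequential(backbone, head)

# Example: model = build_initial_model("from_scratch", num_first_step_classes=50)
```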

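The abstract also separates two evaluation views: average incremental accuracy and forgetting. The sketch below follows definitions that are common in the CIL literature; the paper's exact formulas may differ, so treat it as an approximation rather than the authors' metric code.

```python
from typing import List

def average_incremental_accuracy(step_accuracies: List[float]) -> float:
    """Mean of the overall accuracies measured after each incremental step."""
    return sum(step_accuracies) / len(step_accuracies)

def average_forgetting(acc_matrix: List[List[float]]) -> float:
    """acc_matrix[t][k] is the accuracy on the classes of step k, evaluated
    after learning step t. Forgetting of step k is the gap between the best
    accuracy ever reached on k and its accuracy after the final step."""
    last = len(acc_matrix) - 1
    drops = [
        max(acc_matrix[t][k] for t in range(k, last)) - acc_matrix[last][k]
        for k in range(last)
    ]
    return sum(drops) / len(drops)

# Example with 3 steps (row t holds the accuracies of steps 0..t):
# acc = [[0.90], [0.70, 0.85], [0.60, 0.75, 0.80]]
# average_forgetting(acc) -> ((0.90 - 0.60) + (0.85 - 0.75)) / 2 = 0.20
```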
