A Comprehensive Study of Class Incremental Learning Algorithms for Visual Tasks

11/03/2020
by   Eden Belouadah, et al.

The ability of artificial agents to increment their capabilities when confronted with new data is an open challenge in artificial intelligence. The main difficulty in such cases is catastrophic forgetting, i.e., the tendency of neural networks to underfit past data when new data are ingested. A first group of approaches tackles catastrophic forgetting by increasing the capacity of the deep model to accommodate new knowledge. A second group fixes the deep model size and introduces a mechanism whose objective is to ensure a good compromise between the stability and the plasticity of the model. While the first group of algorithms has been compared thoroughly, this is not the case for methods that exploit a fixed-size model. Here, we focus on the latter, place them in a common conceptual and experimental framework, and propose the following contributions: (1) define six desirable properties of incremental learning algorithms and analyze existing methods according to these properties, (2) introduce a unified formalization of the class-incremental learning problem, (3) propose a common evaluation framework that is more thorough than existing ones in terms of the number of datasets, the size of the datasets, the size of the bounded memory, and the number of incremental states, (4) investigate the usefulness of herding for the selection of past exemplars, (5) provide experimental evidence that competitive performance can be obtained without knowledge distillation to tackle catastrophic forgetting, and (6) facilitate reproducibility by integrating all tested methods in a common open-source repository. The main experimental finding is that none of the existing algorithms achieves the best results in all evaluated settings. Important differences arise notably depending on whether a bounded memory of past classes is allowed.
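
Since the paper investigates herding for past-exemplar selection, the sketch below shows how herding-based selection is commonly implemented (in the spirit of iCaRL); it is not the authors' code. The NumPy-only `herding_selection` function and the `extract_features` helper mentioned in the usage comment are illustrative assumptions.

```python
# Minimal sketch of herding-based exemplar selection, assuming L2-normalized
# feature vectors for one class are available as a NumPy array. Illustrative only.
import numpy as np


def herding_selection(features: np.ndarray, memory_per_class: int) -> list:
    """Greedily pick exemplars whose running mean best approximates the class mean.

    features: array of shape (n_samples, feature_dim) for a single class.
    memory_per_class: number of exemplars to keep for this class.
    Returns the selected sample indices, in selection order.
    """
    class_mean = features.mean(axis=0)
    selected = []
    running_sum = np.zeros_like(class_mean)

    for k in range(1, min(memory_per_class, len(features)) + 1):
        # Mean of the already-selected exemplars if each candidate were added next.
        candidate_means = (running_sum + features) / k
        distances = np.linalg.norm(class_mean - candidate_means, axis=1)
        distances[selected] = np.inf  # never pick the same sample twice
        best = int(np.argmin(distances))
        selected.append(best)
        running_sum += features[best]

    return selected


# Hypothetical usage: features would come from the current model's penultimate layer.
# feats = extract_features(model, images_of_class_c)   # shape (n, d), assumed helper
# exemplar_indices = herding_selection(feats, memory_per_class=20)
```

A random-selection baseline, against which the usefulness of herding can be measured, would simply replace the greedy loop with uniform sampling of the same number of indices.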
