First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning

03/23/2023
by   Aristeidis Panos, et al.

In Class-Incremental Learning (CIL), an image classification system is exposed to new classes in each learning session and must be updated incrementally. Existing methods have updated both the classification head and the feature-extractor body at each CIL session. In this work, we develop a baseline method, First Session Adaptation (FSA), that sheds light on the efficacy of existing CIL approaches and lets us assess the relative performance contributions of head and body adaptation. FSA adapts a pre-trained neural network body only on the first learning session and fixes it thereafter; a head based on linear discriminant analysis (LDA) is then placed on top of the adapted body, allowing exact updates throughout CIL. FSA is replay-free, i.e., it does not memorize examples from previous sessions of continual learning. To empirically motivate FSA, we first consider a diverse selection of 22 image-classification datasets, evaluating different heads and body-adaptation techniques in high/low-shot offline settings. We find that the LDA head performs well and supports CIL out of the box. We also find that Feature-wise Linear Modulation (FiLM) adapters are highly effective in the few-shot setting, and full-body adaptation in the high-shot setting. Second, we empirically investigate a range of CIL settings, both high-shot and few-shot, including settings that have previously been used in the literature. We show that FSA significantly improves over the state of the art in 15 of the 16 settings considered. FSA with FiLM adapters is especially performant in the few-shot setting. These results indicate that current approaches to continual body adaptation are not working as expected. Finally, we propose a measure, applicable to a set of unlabelled inputs, that is predictive of the benefit of body adaptation.
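The key property the abstract relies on is that an LDA head over a frozen feature extractor can be updated *exactly* from running statistics, with no stored exemplars: per-class feature sums and counts give the class means, and a running second-moment matrix gives the shared covariance. The sketch below illustrates this idea; it is a minimal NumPy illustration under these assumptions, not the authors' implementation, and the class and method names (`IncrementalLDAHead`, `update`, `predict`) are hypothetical.

```python
import numpy as np

class IncrementalLDAHead:
    """Replay-free LDA classifier head over fixed features (illustrative sketch).

    Stores only per-class feature sums/counts and a running sum of
    outer products x x^T, so each new session updates the head exactly
    without revisiting any earlier session's data.
    """

    def __init__(self, dim, shrinkage=1e-3):
        self.dim = dim
        self.shrinkage = shrinkage  # ridge term keeping the covariance invertible
        self.sums = {}              # class label -> running feature sum
        self.counts = {}            # class label -> running example count
        self.scatter = np.zeros((dim, dim))  # running sum of x x^T
        self.total = 0

    def update(self, features, labels):
        """Absorb one session's (features, labels); exact and order-independent."""
        for x, y in zip(features, labels):
            if y not in self.sums:
                self.sums[y] = np.zeros(self.dim)
                self.counts[y] = 0
            self.sums[y] += x
            self.counts[y] += 1
            self.scatter += np.outer(x, x)
            self.total += 1

    def predict(self, features):
        classes = sorted(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        # Exact within-class covariance from running sums:
        # S_w = sum_x x x^T - sum_c n_c mu_c mu_c^T
        within = self.scatter - sum(
            self.counts[c] * np.outer(self.sums[c] / self.counts[c],
                                      self.sums[c] / self.counts[c])
            for c in classes
        )
        cov = within / self.total + self.shrinkage * np.eye(self.dim)
        prec = np.linalg.inv(cov)
        # Linear discriminant score per class: x^T P mu_c - 0.5 mu_c^T P mu_c
        lin = means @ prec                               # (C, D)
        bias = -0.5 * np.einsum('cd,cd->c', lin, means)  # (C,)
        scores = features @ lin.T + bias
        return np.array([classes[i] for i in scores.argmax(axis=1)])
```

Because `update` only accumulates sums, feeding sessions one at a time yields exactly the same head as training on all data jointly, which is why the LDA head "supports CIL out of the box" once the body is frozen.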

Related research

08/20/2022: A Multi-Head Model for Continual Learning via Out-of-Distribution Replay
This paper studies class incremental learning (CIL) of continual learnin...

10/06/2022: CLIP model is an Efficient Continual Learner
The continual learning setting aims to learn new tasks over time without...

07/23/2020: ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation
Session-based recommendation has received growing attention recently due...

05/03/2023: Evolving Dictionary Representation for Few-shot Class-incremental Learning
New objects are continuously emerging in the dynamically changing world ...

05/26/2023: Balanced Supervised Contrastive Learning for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning (FSCIL) presents the primary challen...

09/15/2022: On the Soft-Subnetwork for Few-shot Class Incremental Learning
Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which hypothes...

06/22/2021: Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification
We propose unsupervised embedding adaptation for the downstream few-shot...
