Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters

02/12/2023
by Yawen Cui, et al.

The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn novel tasks from limited labeled samples while preserving previously acquired capabilities, yet current FSCL methods target only the class-incremental setting. Moreover, FSCL solutions are evaluated solely by cumulative performance over all encountered tasks; no prior work explores their domain generalization ability. Domain generalization is a challenging yet practical problem that aims to generalize beyond the training domains. In this paper, we set up a Generalized FSCL (GFSCL) protocol involving both class- and domain-incremental situations, together with a domain generalization assessment. First, two benchmark datasets and protocols are newly arranged, and detailed baselines are provided for this unexplored configuration. We find that common continual learning methods generalize poorly to unseen domains and cannot adequately cope with catastrophic forgetting in cross-incremental tasks. We therefore propose a rehearsal-free framework based on the Vision Transformer (ViT), named Contrastive Mixture of Adapters (CMoA). Because class increment and domain increment have different optimization targets, CMoA contains two parts: (1) for the class-incremental issue, a Mixture of Adapters (MoA) module is incorporated into the ViT, and cosine similarity regularization together with dynamic weighting is designed so that each adapter learns specific knowledge and concentrates on particular classes; (2) for the domain-related issues and domain-invariant representation learning, we alleviate intra-class variation by prototype-calibrated contrastive learning. The code and protocols are available at https://github.com/yawencui/CMoA.
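The two losses named in the abstract can be sketched in PyTorch. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names, the pairwise-absolute-cosine form of the adapter regularizer, and the InfoNCE-over-prototypes form of the contrastive term are all hypothetical readings of the abstract's description.

```python
# Hypothetical sketch of the two regularizers the abstract describes.
# All names and exact loss forms are assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def adapter_cosine_regularizer(adapter_weights):
    """Penalize pairwise cosine similarity between flattened adapter
    weight tensors, pushing each adapter in the mixture to specialize
    on distinct knowledge (one reading of "cosine similarity
    regularization" in the abstract)."""
    flat = [w.flatten() for w in adapter_weights]
    n = len(flat)
    loss = torch.zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + F.cosine_similarity(flat[i], flat[j], dim=0).abs()
    return loss / (n * (n - 1) / 2)  # mean over adapter pairs


def prototype_contrastive_loss(features, labels, prototypes, tau=0.1):
    """Pull each L2-normalized feature toward its class prototype and
    push it away from the other prototypes (an InfoNCE-style reading of
    "prototype-calibrated contrastive learning")."""
    features = F.normalize(features, dim=1)     # (B, D)
    prototypes = F.normalize(prototypes, dim=1) # (C, D)
    logits = features @ prototypes.t() / tau    # (B, C) similarities
    return F.cross_entropy(logits, labels)
```

Intuition: the first term drives adapters apart in weight space so each one can own particular classes, while the second term shrinks intra-class variation by anchoring every sample to a shared class prototype rather than to other samples in the batch.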


Related research:

03/16/2023  Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less
    Face Anti-Spoofing (FAS) is recently studied under the continual learnin...

05/20/2023  Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion
    Task-incremental continual learning refers to continually training a mod...

09/18/2023  DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues
    The malicious use and widespread dissemination of deepfake pose a signif...

03/16/2023  Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning
    Prototype, as a representation of class embeddings, has been explored to...

05/19/2023  AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning
    Continual learning aims to enable a model to incrementally learn knowled...

03/24/2023  CCL: Continual Contrastive Learning for LiDAR Place Recognition
    Place recognition is an essential and challenging task in loop closing a...

07/20/2022  Rethinking Few-Shot Class-Incremental Learning with Open-Set Hypothesis in Hyperbolic Geometry
    Few-Shot Class-Incremental Learning (FSCIL) aims at incrementally learni...
