Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning

03/16/2023
by Zhuowei Li, et al.

Prototypes, as representations of class embeddings, have been explored to reduce the memory footprint or mitigate forgetting in continual learning scenarios. However, prototype-based methods still suffer from abrupt performance deterioration due to semantic drift and prototype interference. In this study, we propose Contrastive Prototypical Prompt (CPP) and show that task-specific prompt-tuning, when optimized over a contrastive learning objective, can effectively address both obstacles and significantly improve the potency of prototypes. Our experiments demonstrate that CPP excels on four challenging class-incremental learning benchmarks, resulting in 4% absolute improvements over state-of-the-art methods. Moreover, CPP does not require a rehearsal buffer, and it largely bridges the performance gap between continual learning and offline joint learning, showcasing a promising design scheme for continual learning systems under a Transformer architecture.
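
To make the ingredients named in the abstract concrete, below is a minimal Python/PyTorch sketch that combines class prototypes (normalized mean embeddings from a frozen encoder), a small set of task-specific learnable prompt tokens, and an InfoNCE-style contrastive loss that pulls each embedding toward its own class prototype. The tiny transformer backbone, all dimensions, and the names PromptedEncoder, class_prototypes, and contrastive_prototypical_loss are illustrative assumptions, not the paper's implementation or hyperparameters.

# Minimal sketch (assumed components, not the authors' code): a frozen backbone,
# task-specific learnable prompt tokens, class prototypes as normalized mean
# embeddings, and a contrastive (InfoNCE-style) loss over embedding/prototype
# similarities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedEncoder(nn.Module):
    """Frozen feature extractor with learnable prompt tokens.
    A tiny transformer stands in for a pre-trained ViT backbone."""
    def __init__(self, dim=64, n_prompts=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dropout=0.0, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():        # backbone stays frozen
            p.requires_grad_(False)
        # task-specific prompt tokens: the only trainable parameters here
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, tokens):                      # tokens: (B, L, dim)
        b = tokens.size(0)
        prompted = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
        feats = self.backbone(prompted)
        return F.normalize(feats.mean(dim=1), dim=-1)   # one embedding per sample

def class_prototypes(embeddings, labels, num_classes):
    """Prototype = normalized mean embedding of each class."""
    protos = torch.stack([embeddings[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=-1)

def contrastive_prototypical_loss(embeddings, labels, prototypes, tau=0.1):
    """Each embedding should be most similar to its own class prototype."""
    logits = embeddings @ prototypes.t() / tau          # (B, C) cosine similarities
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, dim = 5, 64
    model = PromptedEncoder(dim=dim)
    opt = torch.optim.Adam([model.prompts], lr=1e-2)    # tune only the prompts

    tokens = torch.randn(32, 8, dim)                    # stand-in for patch embeddings
    labels = torch.arange(32) % num_classes             # every class represented

    for step in range(5):
        emb = model(tokens)
        protos = class_prototypes(emb.detach(), labels, num_classes)
        loss = contrastive_prototypical_loss(emb, labels, protos)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: loss = {loss.item():.4f}")

In this sketch only the prompt tokens receive gradients; the backbone stays frozen and the prototypes are recomputed from detached embeddings, mirroring the rehearsal-free spirit described in the abstract rather than reproducing CPP itself.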

Related research

12/03/2021 - Contrastive Continual Learning with Feature Propagation
Classical machine learners are designed only to tackle one task without ...

09/13/2023 - Domain-Aware Augmentations for Unsupervised Online General Continual Learning
Continual Learning has been challenging, especially when dealing with un...

07/10/2023 - Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning
Federated continual learning (FCL) learns incremental tasks over time fr...

11/09/2020 - Lifelong Learning Without a Task Oracle
Supervised deep neural networks are known to undergo a sharp decline in ...

04/30/2023 - DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning
Rehearsal-based approaches are a mainstay of continual learning (CL). Th...

03/14/2023 - ICICLE: Interpretable Class Incremental Continual Learning
Continual learning enables incremental learning of new tasks without for...

02/12/2023 - Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters
The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn...
