Introducing Language Guidance in Prompt-based Continual Learning

Continual learning aims to train a single model on a sequence of tasks without access to data from previous tasks. The biggest challenge in the domain remains catastrophic forgetting: a drop in performance on classes seen in earlier tasks. Some existing methods rely on an expensive replay buffer that stores a subset of data from previous tasks. This, while promising, becomes costly when the number of tasks grows large or when data cannot be stored for privacy reasons. As an alternative, prompt-based methods have been proposed that store task information in a learnable prompt pool, which instructs a frozen image encoder on how to solve each task. Although the model faces a disjoint set of classes in each task in this setting, we argue that these classes can be mapped into the same embedding space of a pre-trained language encoder. In this work, we propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. LGCL is model-agnostic and introduces language guidance at the task level in the prompt pool and at the class level on the output feature of the vision encoder. Through extensive experimentation, we show that LGCL consistently improves the performance of prompt-based continual learning methods, setting a new state of the art. LGCL achieves these improvements without requiring any additional learnable parameters.
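The two guidance terms described above can be viewed as alignment losses between a learned feature (a prompt-pool key at the task level, or the vision encoder's output feature at the class level) and a pre-trained language encoder's embedding of the corresponding class names. The following is a minimal NumPy sketch of such a cosine-alignment loss, assuming the text embeddings are precomputed by some language encoder; all variable names here are illustrative, not the paper's actual implementation:

```python
import numpy as np

def cosine_alignment_loss(feature: np.ndarray, text_feature: np.ndarray) -> float:
    """Cosine-style alignment loss: 0 when the feature already points
    along the language embedding, up to 2 when it points opposite."""
    f = feature / np.linalg.norm(feature)
    t = text_feature / np.linalg.norm(text_feature)
    return 1.0 - float(np.dot(f, t))

# Task level (illustrative): pull a prompt-pool key toward the embedding
# of the task's class names taken together (embedding assumed precomputed).
task_text_embedding = np.array([1.0, 0.0, 0.0])
prompt_key = np.array([0.9, 0.1, 0.0])
task_loss = cosine_alignment_loss(prompt_key, task_text_embedding)

# Class level (illustrative): pull the vision encoder's output feature
# toward the embedding of the ground-truth class name.
class_text_embedding = np.array([0.0, 1.0, 0.0])
vision_feature = np.array([0.2, 0.8, 0.0])
class_loss = cosine_alignment_loss(vision_feature, class_text_embedding)
```

Because the loss only adds alignment targets from a frozen text encoder, it introduces no new learnable parameters, consistent with the claim in the abstract.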
