AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning

05/19/2023
by Runqi Wang, et al.

Continual learning aims to enable a model to incrementally learn knowledge from sequentially arriving data. Previous works adopt the conventional classification architecture, which consists of a feature extractor and a classifier. The feature extractor is shared across sequentially arriving tasks or classes, but the classifier must be incrementally expanded with a new group of weights for each new class. Consequently, the parameters of a continual learner gradually increase. Moreover, since the classifier covers all historically seen classes, a memory of a certain size is usually required to store rehearsal data to mitigate classifier bias and catastrophic forgetting. In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks. Specifically, AttriCLIP is built upon the pre-trained visual-language model CLIP. Its image encoder and text encoder are fixed and extract features from images and text, respectively. The text consists of a category name and a fixed number of learnable prompt tokens, which are selected from our designed attribute word bank and serve as attributes. Since classification is performed by computing the similarity between visual and textual features, AttriCLIP is a non-incremental learner: its parameters do not grow with the number of classes. The attribute prompts, which encode common knowledge useful for classification, effectively mitigate catastrophic forgetting and avoid the need for a replay memory. We evaluate AttriCLIP and compare it with CLIP-based and previous state-of-the-art continual learning methods in realistic settings with domain shift and long-sequence learning. The results show that our method performs favorably against previous state-of-the-art approaches. The implementation code is available at https://github.com/bhrqw/AttriCLIP.
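
The pipeline the abstract describes (a frozen CLIP backbone, an attribute word bank of learnable key-prompt pairs, image-conditioned prompt selection, and classification by image-text similarity) can be sketched as follows. This is a minimal, self-contained illustration, not the authors' implementation (see the linked repository for that): `FrozenEncoder` is a hypothetical stand-in for CLIP's frozen encoders, and `BANK_SIZE`, `PROMPT_LEN`, `TOP_K`, and the key-query top-k selection are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

DIM = 512          # shared embedding width (CLIP ViT-B/16 uses 512)
BANK_SIZE = 10     # number of (key, prompt) pairs in the attribute word bank (assumed)
PROMPT_LEN = 4     # learnable tokens per attribute prompt (assumed)
TOP_K = 3          # attribute prompts selected per image (assumed)

class FrozenEncoder(torch.nn.Module):
    """Hypothetical stand-in for a frozen CLIP encoder; AttriCLIP freezes
    the real pre-trained CLIP image/text encoders instead."""
    def __init__(self, dim: int = DIM):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)
        for p in self.parameters():
            p.requires_grad_(False)   # frozen: gradients still flow to the inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.dim() == 3:              # token sequence -> mean-pool (crude stand-in)
            x = x.mean(dim=1)
        return F.normalize(self.proj(x), dim=-1)

class AttriCLIPSketch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.image_enc = FrozenEncoder()
        self.text_enc = FrozenEncoder()
        # Attribute word bank: keys route images to prompts; both are learnable.
        self.keys = torch.nn.Parameter(0.02 * torch.randn(BANK_SIZE, DIM))
        self.prompts = torch.nn.Parameter(0.02 * torch.randn(BANK_SIZE, PROMPT_LEN, DIM))

    def forward(self, images: torch.Tensor, class_name_embs: torch.Tensor) -> torch.Tensor:
        # images: (B, DIM) raw image features; class_name_embs: (C, DIM) name embeddings
        img = self.image_enc(images)                                  # (B, DIM)
        # Select the top-k attribute prompts whose keys best match each image
        # (top-k indices are non-differentiable; keys need their own matching loss).
        sim = img @ F.normalize(self.keys, dim=-1).t()                # (B, BANK_SIZE)
        idx = sim.topk(TOP_K, dim=-1).indices                         # (B, TOP_K)
        chosen = self.prompts[idx].flatten(1, 2)                      # (B, K*PROMPT_LEN, DIM)
        # Build "[attribute prompts][class name]" text for every candidate class.
        B, C = img.shape[0], class_name_embs.shape[0]
        names = class_name_embs.unsqueeze(0).unsqueeze(2).expand(B, C, 1, DIM)
        text = torch.cat([chosen.unsqueeze(1).expand(B, C, -1, DIM), names], dim=2)
        txt = self.text_enc(text.reshape(B * C, -1, DIM)).reshape(B, C, DIM)
        # Classify by temperature-scaled image-text cosine similarity.
        return (img.unsqueeze(1) * txt).sum(-1) / 0.01                # (B, C) logits

model = AttriCLIPSketch()
logits = model(torch.randn(2, DIM), torch.randn(5, DIM))  # 2 images, 5 classes
print(logits.shape)                                       # torch.Size([2, 5])
```

Note that only the keys and prompts in the bank are trainable, and the "classifier" is just similarity to encoded class-name text, so no weights need to be added when new classes arrive; this is the sense in which AttriCLIP is a non-incremental learner.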

Related research

03/21/2020 · Adversarial Continual Learning
Continual learning aims to learn new tasks without forgetting previously...

04/20/2020 · Generative Feature Replay For Class-Incremental Learning
Humans are capable of learning new tasks without forgetting previous one...

08/24/2023 · Masked Autoencoders are Efficient Class Incremental Learners
Class Incremental Learning (CIL) aims to sequentially learn new classes ...

02/07/2023 · Deep Class-Incremental Learning: A Survey
Deep models, e.g., CNNs and Vision Transformers, have achieved impressiv...

09/11/2023 · Class-Incremental Grouping Network for Continual Audio-Visual Learning
Continual learning is a challenging problem in which models need to be t...

02/12/2023 · Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters
The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn...

06/09/2021 · Match What Matters: Generative Implicit Feature Replay for Continual Learning
Neural networks are prone to catastrophic forgetting when trained increm...
