POP: Prompt Of Prompts for Continual Learning

06/14/2023
by Zhiyuan Hu, et al.

Continual learning (CL) has attracted increasing attention in the recent past. It aims to mimic the human ability to learn new concepts without catastrophic forgetting. While existing CL methods accomplish this to some extent, they are still prone to semantic drift of the learned feature space. Foundation models, which are endowed with robust feature representations learned from very large datasets, provide an interesting substrate for solving the CL problem. Recent work has also shown that they can be adapted to specific tasks by prompt tuning techniques that leave the generality of the representation mostly unscathed. An open question, however, is how to learn both prompts that are task-specific and prompts that are global, i.e. prompts that capture cross-task information. In this work, we propose the Prompt Of Prompts (POP) model, which addresses this goal by progressively learning a group of task-specific prompts and a group of global prompts, denoted as POP, that integrate information from the former. We show that a foundation model equipped with POP learning outperforms classic CL methods by a significant margin. Moreover, since prompt tuning only requires a small set of training samples, POP can perform CL in the few-shot setting while still outperforming competing methods trained on the entire dataset.
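The abstract describes the architecture only at a high level. The minimal PyTorch sketch below illustrates one plausible reading of the idea: a frozen foundation model, a growing set of task-specific prompts (one block per task), and a small group of global POP prompts that attend over the task prompts to integrate cross-task information. The class name, prompt sizes, attention-based integration, and the `forward_with_prompts` hook on the backbone are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn


class POPSketch(nn.Module):
    """Hypothetical sketch of the Prompt-Of-Prompts idea from the abstract.

    A frozen foundation model is adapted with two groups of learnable prompts:
    task-specific prompts, appended as new tasks arrive, and global POP prompts
    that pool cross-task information from them.
    """

    def __init__(self, backbone, embed_dim=768, prompts_per_task=4, num_pop=4):
        super().__init__()
        self.backbone = backbone                     # frozen foundation model
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.task_prompts = nn.ParameterList()       # grows with each new task
        self.pop_prompts = nn.Parameter(torch.randn(num_pop, embed_dim) * 0.02)
        self.integrate = nn.MultiheadAttention(embed_dim, num_heads=4,
                                               batch_first=True)
        self.prompts_per_task = prompts_per_task
        self.embed_dim = embed_dim

    def add_task(self):
        """Append a fresh block of task-specific prompts before training a new task."""
        new = nn.Parameter(torch.randn(self.prompts_per_task, self.embed_dim) * 0.02)
        self.task_prompts.append(new)

    def forward(self, x):
        B = x.size(0)
        # All task-specific prompts learned so far: (B, num_tasks * P, D)
        task = torch.cat(list(self.task_prompts), dim=0).unsqueeze(0).expand(B, -1, -1)
        pop = self.pop_prompts.unsqueeze(0).expand(B, -1, -1)
        # Global POP prompts attend over the task prompts to integrate cross-task info.
        pooled, _ = self.integrate(pop, task, task)
        # Prepend the prompts to the input tokens of the frozen backbone;
        # `forward_with_prompts` is a hypothetical hook, not a standard API.
        return self.backbone.forward_with_prompts(x, torch.cat([pooled, task], dim=1))
```

Under this reading, only the prompt parameters are updated during continual learning, which is why a small number of training samples per task (the few-shot setting mentioned in the abstract) can suffice.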
