CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning

11/23/2022
by James Seale Smith, et al.

Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this reduces their plasticity, sacrificing new-task accuracy, and leaves them unable to benefit from expanded parameter capacity. We instead propose to learn a set of prompt components which are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 5.4% in accuracy. We also outperform the state of the art by as much as 6.6% in accuracy on a continual learning benchmark which contains both class-incremental and domain-incremental task shifts, corresponding to many practical settings.

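For illustration only, here is a minimal PyTorch-style sketch of the assembly step the abstract describes: a set of learned prompt components is combined with input-conditioned weights obtained from an attention-modulated key-query match, all trained end-to-end. The module, tensor names, shapes, and hyperparameters below are assumptions for the sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of building an input-conditioned prompt
# from learned components, keys, and attention vectors. All names, shapes, and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedPromptSketch(nn.Module):
    def __init__(self, num_components=100, prompt_len=8, embed_dim=768):
        super().__init__()
        # Learnable prompt components, plus one key and one attention vector per
        # component; all of these are trained end-to-end with the task sequence.
        self.components = nn.Parameter(0.02 * torch.randn(num_components, prompt_len, embed_dim))
        self.keys = nn.Parameter(0.02 * torch.randn(num_components, embed_dim))
        self.attention = nn.Parameter(0.02 * torch.randn(num_components, embed_dim))

    def forward(self, query):
        # query: (B, D) feature of the input from a frozen pre-trained ViT,
        # e.g. its [CLS] token embedding.
        attended = query.unsqueeze(1) * self.attention          # (B, M, D) feature-wise attention
        weights = (F.normalize(attended, dim=-1) *
                   F.normalize(self.keys, dim=-1)).sum(-1)      # (B, M) cosine-similarity weights
        # Weighted sum of components -> one prompt per input, shape (B, prompt_len, D),
        # to be inserted into the frozen transformer's attention layers.
        return torch.einsum('bm,mld->bld', weights, self.components)

# Example usage with random features standing in for ViT outputs:
if __name__ == "__main__":
    prompt_gen = DecomposedPromptSketch()
    fake_query = torch.randn(4, 768)
    print(prompt_gen(fake_query).shape)  # torch.Size([4, 8, 768])
```

The sketch covers only the weighting and assembly step; in the full method the component capacity would also be expanded over the task sequence, which the abstract cites as a benefit of training the key-query scheme end-to-end.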
Related research:

- A Closer Look at Rehearsal-Free Continual Learning (03/31/2022)
- Continual learning with hypernetworks (06/03/2019)
- PromptFusion: Decoupling Stability and Plasticity for Continual Learning (03/13/2023)
- FFNB: Forgetting-Free Neural Blocks for Deep Continual Visual Learning (11/22/2021)
- S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning (07/26/2022)
- Continual Learning From a Stream of APIs (08/31/2023)
- Exploring Continual Learning for Code Generation Models (07/05/2023)
