I2I: Initializing Adapters with Improvised Knowledge

04/04/2023
by Tejas Srinivasan et al.

Adapters present a promising solution to the catastrophic forgetting problem in continual learning. However, training independent Adapter modules for every new task misses an opportunity for cross-task knowledge transfer. We propose Improvise to Initialize (I2I), a continual learning algorithm that initializes Adapters for incoming tasks by distilling knowledge from previously-learned tasks' Adapters. We evaluate I2I on CLiMB, a multimodal continual learning benchmark, by conducting experiments on sequences of visual question answering tasks. Adapters trained with I2I consistently achieve better task accuracy than independently-trained Adapters, demonstrating that our algorithm facilitates knowledge transfer between task Adapters. I2I also results in better cross-task knowledge transfer than the state-of-the-art AdapterFusion without incurring the associated parametric cost.
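The core idea, distilling the behaviour of previously learned Adapters into the initialization of a new task's Adapter before fine-tuning, can be sketched as follows. This is a minimal illustration under simplifying assumptions (toy linear adapters, an averaged teacher, and distillation by least squares); the names and the teacher construction are illustrative, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the Improvise-to-Initialize (I2I) idea: the new task's
# adapter is initialized by distilling the combined behaviour of previously
# learned adapters, and only then fine-tuned on the new task's data.

rng = np.random.default_rng(0)
d = 16  # hidden size; adapters here are toy linear maps W: R^d -> R^d

# Stand-ins for adapters already trained on two earlier tasks.
prev_adapters = [rng.normal(scale=0.1, size=(d, d)) for _ in range(2)]

def teacher(h):
    # "Improvised" teacher: aggregate the prior adapters' outputs on h
    # (a simple average here; the aggregation scheme is an assumption).
    return np.mean([h @ W for W in prev_adapters], axis=0)

# Distillation step: fit the new adapter W_new so that h @ W_new matches
# the teacher on probe inputs from the incoming task (random probes here).
H = rng.normal(size=(256, d))
W_new, *_ = np.linalg.lstsq(H, teacher(H), rcond=None)

# The distilled initialization reproduces the teacher closely, so task
# fine-tuning starts from transferred cross-task knowledge rather than
# from a random or identity initialization.
err = np.linalg.norm(H @ W_new - teacher(H)) / np.linalg.norm(teacher(H))
print(f"relative distillation error: {err:.2e}")
```

In this linear toy setting the least-squares fit recovers the teacher almost exactly; in practice the same role would be played by gradient-based distillation on the new adapter's parameters before task training begins.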

Related research

- Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering (06/10/2019). We study the issue of catastrophic forgetting in the context of neural m...
- CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks (06/18/2022). Current state-of-the-art vision-and-language models are evaluated on tas...
- Continual Learning with Distributed Optimization: Does CoCoA Forget? (11/30/2022). We focus on the continual learning problem where the tasks arrive sequen...
- Continual Learning in the Presence of Spurious Correlation (03/21/2023). Most continual learning (CL) algorithms have focused on tackling the sta...
- Learning Curves for Sequential Training of Neural Networks: Self-Knowledge Transfer and Forgetting (12/03/2021). Sequential training from task to task is becoming one of the major objec...
- Continual Learning via Local Module Composition (11/15/2021). Modularity is a compelling solution to continual learning (CL), the prob...
- Towards Robust and Efficient Continual Language Learning (07/11/2023). As the application space of language models continues to evolve, a natur...
