Adapter Incremental Continual Learning of Efficient Audio Spectrogram Transformers

Continual learning involves training neural networks incrementally for new tasks while retaining the knowledge of previous tasks. However, efficiently fine-tuning the model for sequential tasks with minimal computational resources remains a challenge. In this paper, we propose Task Incremental Continual Learning (TI-CL) of audio classifiers with both parameter-efficient and compute-efficient Audio Spectrogram Transformers (AST). To reduce the trainable parameters without performance degradation for TI-CL, we compare several Parameter Efficient Transfer (PET) methods and propose AST with Convolutional Adapters for TI-CL, which has less than 5% of the trainable parameters of fully fine-tuned counterparts. To reduce the computational complexity, we introduce a novel Frequency-Time factorized Attention (FTA) method that replaces the traditional self-attention in transformers for audio spectrograms. FTA achieves competitive performance with only a fraction of the computations required by Global Self-Attention (GSA). Finally, we formulate our method for TI-CL, called Adapter Incremental Continual Learning (AI-CL), as a combination of the "parameter-efficient" Convolutional Adapter and the "compute-efficient" FTA. Experiments on ESC-50, SpeechCommandsV2 (SCv2), and Audio-Visual Event (AVE) benchmarks show that our proposed method prevents catastrophic forgetting in TI-CL while maintaining a lower computational budget.
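The core idea behind factorized attention can be illustrated with a minimal sketch. Spectrogram patch tokens form an F x T grid; attending first along the frequency axis and then along the time axis reduces attention cost from O((F·T)²) for global self-attention to O(F·T·(F+T)). This is only an illustrative single-head sketch of the factorization idea — the function names (`attend`, `factorized_attention`) and the omission of learned projections, multiple heads, and the paper's exact ordering are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # scaled dot-product attention over the second-to-last axis
    d = q.shape[-1]
    w = softmax(q @ k.swapaxes(-1, -2) / np.sqrt(d))
    return w @ v

def factorized_attention(x):
    # x: (F, T, d) grid of spectrogram patch tokens (no learned
    # projections here; q = k = v = x for illustration)
    # 1) attend along the frequency axis, one time column at a time:
    #    cost O(T * F^2 * d) instead of O((F*T)^2 * d)
    xt = x.transpose(1, 0, 2)                  # (T, F, d)
    xf = attend(xt, xt, xt).transpose(1, 0, 2) # back to (F, T, d)
    # 2) attend along the time axis, one frequency row at a time:
    #    cost O(F * T^2 * d)
    return attend(xf, xf, xf)                  # (F, T, d)
```

Each token still aggregates information across the whole grid (indirectly, via the two passes), while the quadratic term shrinks from (F·T)² to F² + T² per token.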
