Continual Learning with Pretrained Backbones by Tuning in the Input Space

06/05/2023
by   Simone Marullo, et al.

The intrinsic difficulty of adapting deep learning models to non-stationary environments limits the applicability of neural networks to real-world tasks. This issue is critical in practical supervised learning settings, such as those in which a pre-trained model computes projections toward a latent space where different task predictors are sequentially learned over time. In fact, incrementally fine-tuning the whole model to better adapt to new tasks usually results in catastrophic forgetting, with performance degrading on past experiences and valuable knowledge from the pre-training stage being lost. In this paper, we propose a novel strategy to make the fine-tuning procedure more effective, by avoiding updates to the pre-trained part of the network and learning not only the usual classification head, but also a set of newly-introduced learnable parameters responsible for transforming the input data. This process allows the network to effectively leverage the pre-training knowledge and find a good trade-off between plasticity and stability with modest computational effort, making it especially suitable for on-the-edge settings. Our experiments on four image classification problems in a continual learning setting confirm the quality of the proposed approach when compared to several fine-tuning procedures and to popular continual learning methods.
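
To make the idea concrete, below is a minimal PyTorch sketch of this kind of setup: the pre-trained backbone is frozen, and only a set of input-space parameters and the classification head are optimized. The additive input perturbation, the ResNet-18 backbone, the class name InputTunedClassifier, and the hyperparameters are illustrative assumptions, not necessarily the parameterization used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


class InputTunedClassifier(nn.Module):
    """Frozen pretrained backbone, a learnable input-space transformation,
    and a classification head (a sketch, not the authors' exact model)."""

    def __init__(self, num_classes: int, image_size: int = 224):
        super().__init__()
        # Pretrained backbone used as a fixed feature extractor.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        for p in self.backbone.parameters():
            p.requires_grad = False  # never updated during continual learning

        # Hypothetical input-space parameters: a single additive perturbation
        # broadcast over the batch; the paper's exact transformation may differ.
        self.input_delta = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

        # Usual classification head, trained alongside the input parameters.
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The backbone weights are frozen, but gradients still flow through it
        # back to the input-space parameters and the head.
        features = self.backbone(x + self.input_delta)
        return self.head(features)


model = InputTunedClassifier(num_classes=10)
# Only the input transformation and the head receive gradient updates.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)
```

Note that the backbone is frozen via requires_grad = False rather than torch.no_grad(), so gradients can still propagate through the fixed feature extractor back to the input-space parameters.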

Related research:

DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning (08/17/2022)
Pre-trained representation is one of the key elements in the success of ...

Practical self-supervised continual learning with continual fine-tuning (03/30/2023)
Self-supervised learning (SSL) has shown remarkable performance in compu...

Alleviating Representational Shift for Continual Fine-tuning (04/22/2022)
We study a practical setting of continual learning: fine-tuning on a pre...

Adversarial Learning Networks: Source-free Unsupervised Domain Incremental Learning (01/28/2023)
This work presents an approach for incrementally updating deep neural ne...

Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors (09/01/2022)
Tiny machine learning (TinyML) in IoT systems exploits MCUs as edge devi...

Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation (04/21/2020)
One of the great promises of robot learning systems is that they will be...

Kalman Filter for Online Classification of Non-Stationary Data (06/14/2023)
In Online Continual Learning (OCL) a learning system receives a stream o...
