Do Pre-trained Models Benefit Equally in Continual Learning?

10/27/2022
by Kuan-Ying Lee, et al.

Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch. Despite their encouraging performance on contrived benchmarks, these algorithms show dramatic performance drops in real-world scenarios. This paper therefore advocates the systematic introduction of pre-training to CL, a general recipe for transferring knowledge to downstream tasks that remains largely absent from the CL community. Our investigation reveals the multifaceted complexity of exploiting pre-trained models for CL along three different axes: pre-trained models, CL algorithms, and CL scenarios. Perhaps most intriguingly, improvements in CL algorithms from pre-training are highly inconsistent: an underperforming algorithm can become competitive and even state-of-the-art when all algorithms start from a pre-trained model. This indicates that the current paradigm, in which all CL methods are compared under from-scratch training, does not faithfully reflect the true CL objective and the desired progress. We also make several other important observations, including that CL algorithms that exert less regularization benefit more from a pre-trained model, and that a stronger pre-trained model such as CLIP does not guarantee a larger improvement. Based on these findings, we introduce a simple yet effective baseline that employs minimum regularization and leverages the more beneficial pre-trained model, coupled with a two-stage training pipeline. We recommend including this strong baseline in the future development of CL algorithms, given its demonstrated state-of-the-art performance.
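
To make the two-stage idea concrete, below is a minimal PyTorch-style sketch of how a task update might start from a pre-trained backbone: a first stage that trains only the classifier head on the new task, followed by a second stage that fine-tunes the whole network with a small learning rate and minimal regularization. The backbone choice, epoch counts, learning rates, and helper names (build_model, two_stage_task_update) are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of a two-stage training pipeline on top of a pre-trained
# backbone. The stage split, optimizer settings, and helper names are
# illustrative assumptions rather than the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models


def build_model(num_classes: int) -> nn.Module:
    # Start from an ImageNet pre-trained backbone and replace the classifier.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def train_epoch(model, loader, optimizer, device="cpu"):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()


def two_stage_task_update(model, task_loader, head_epochs=5, finetune_epochs=10):
    # Stage 1: freeze the pre-trained backbone and train only the classifier
    # head, so it aligns with the pre-trained features before any weights move.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    head_opt = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
    for _ in range(head_epochs):
        train_epoch(model, task_loader, head_opt)

    # Stage 2: unfreeze everything and fine-tune end to end with a small
    # learning rate, relying on the pre-trained weights rather than heavy
    # regularization to limit drift on earlier tasks.
    for p in model.parameters():
        p.requires_grad = True
    full_opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(finetune_epochs):
        train_epoch(model, task_loader, full_opt)
```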

Related research:

PILOT: A Pre-Trained Model-Based Continual Learning Toolbox (09/13/2023)
While traditional machine learning can effectively tackle a wide range o...

SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model (03/09/2023)
The goal of continual learning is to improve the performance of recognit...

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (03/13/2023)
Class-incremental learning (CIL) aims to adapt to emerging new classes w...

An Empirical Investigation of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration (07/17/2023)
In the realm of out-of-distribution generalization tasks, finetuning has...

Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models (08/30/2022)
Deep generative models (DGMs) are data-eager. Essentially, it is because...

A Unified Continual Learning Framework with General Parameter-Efficient Tuning (03/17/2023)
The "pre-training → downstream adaptation" presents both new opportuniti...

Boosting Out-of-Distribution Detection with Multiple Pre-trained Models (12/24/2022)
Out-of-Distribution (OOD) detection, i.e., identifying whether an input ...
