Foundational Models for Continual Learning: An Empirical Study of Latent Replay

04/30/2022
by Oleksiy Ostapenko, et al.

The rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, of the pre-training algorithm and data, and of the resulting latent space affect CL performance. To this end, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent space and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity and learning depend on the characteristics of the input data and not necessarily on the CL algorithm. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute cost. We then show how models pre-trained on broader data result in better performance for various replay buffer sizes. We explain this with the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that are out-of-distribution with respect to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_CL.

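To make the latent-replay setting concrete, here is a minimal sketch of continual learning in the latent space of a frozen pre-trained encoder. This is not the paper's implementation: the torchvision ResNet-50 backbone, the single linear head, the reservoir-style buffer, and all hyperparameters are illustrative assumptions, whereas the study compares many encoders, replay sizes and benchmarks.

```python
# Minimal latent-replay sketch (illustrative; not the paper's codebase).
# Assumptions: torchvision ResNet-50 encoder, one linear head, a small
# reservoir-style latent buffer, and a task loader yielding (images, labels).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Frozen encoder: images are mapped to latents once, then discarded.
encoder = resnet50(weights="IMAGENET1K_V2")
encoder.fc = nn.Identity()                      # expose 2048-d penultimate features
encoder.eval().requires_grad_(False).to(device)

# 2) Latent replay buffer: stores (latent, label) pairs instead of raw images.
class LatentBuffer:
    def __init__(self, capacity=2000):
        self.capacity, self.z, self.y, self.seen = capacity, [], [], 0

    def add(self, z, y):
        for zi, yi in zip(z, y):                # reservoir sampling over the stream
            if len(self.z) < self.capacity:
                self.z.append(zi)
                self.y.append(yi)
            else:
                j = torch.randint(0, self.seen + 1, (1,)).item()
                if j < self.capacity:
                    self.z[j], self.y[j] = zi, yi
            self.seen += 1

    def sample(self, n):
        idx = torch.randint(0, len(self.z), (n,))
        return (torch.stack([self.z[i] for i in idx]),
                torch.stack([self.y[i] for i in idx]))

# 3) Small trainable head on top of the frozen latents.
head = nn.Linear(2048, 100).to(device)          # 100 total classes, illustrative
opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
buffer = LatentBuffer()

def train_task(loader):
    """Train the head on one task, replaying stored latents from earlier tasks."""
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            z = encoder(images)                 # encode once, learn in latent space
        loss = F.cross_entropy(head(z), labels)
        if len(buffer.z) > 0:                   # add a replay term on buffered latents
            zb, yb = buffer.sample(min(64, len(buffer.z)))
            loss = loss + F.cross_entropy(head(zb.to(device)), yb.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()
        buffer.add(z.cpu(), labels.cpu())
```

The non-parametric baseline mentioned above can be approximated in the same setup by dropping the trained head, keeping per-class means of the stored latents, and predicting the class whose mean is nearest to a test latent.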

Related research

05/19/2022  Continual Pre-Training Mitigates Forgetting in Language and Vision
Pre-trained models are nowadays a fundamental component of machine learn...

03/12/2023  Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models
Continual learning (CL) can help pre-trained vision-language models effi...

10/23/2022  Language Model Pre-Training with Sparse Latent Typing
Modern large-scale Pre-trained Language Models (PLMs) have achieved trem...

08/29/2023  Reprogramming under constraints: Revisiting efficient and reliable transferability of lottery tickets
In the era of foundation models with huge pre-training budgets, the down...

05/27/2021  Encoders and Ensembles for Task-Free Continual Learning
We present an architecture that is effective for continual learning in a...

05/15/2023  Recyclable Tuning for Continual Pre-training
Continual pre-training is the paradigm where pre-trained language models...

04/28/2022  Resource-efficient domain adaptive pre-training for medical images
The deep learning-based analysis of medical images suffers from data sca...
