Computationally Budgeted Continual Learning: What Does Matter?

03/20/2023
by Ameya Prabhu, et al.

Continual Learning (CL) aims to sequentially train models on streams of incoming data whose distribution shifts over time, preserving previous knowledge while adapting to new data. The current CL literature focuses on settings with restricted access to previously seen data, while imposing no constraints on the computational budget for training. This is unreasonable for applications in-the-wild, where systems are primarily constrained by computational and time budgets, not storage. We revisit this problem with a large-scale benchmark and analyze the performance of traditional CL approaches in a compute-constrained setting, where the effective memory samples used in training can be implicitly restricted as a consequence of limited computation. We conduct experiments evaluating various CL sampling strategies, distillation losses, and partial fine-tuning on two large-scale datasets, namely ImageNet2K and Continual Google Landmarks V2, in data-incremental, class-incremental, and time-incremental settings. Through extensive experiments amounting to a total of over 1500 GPU-hours, we find that, under a compute-constrained setting, traditional CL approaches, without exception, fail to outperform a simple minimal baseline that samples uniformly from memory. Our conclusions hold across different numbers of stream time steps (e.g., from 20 to 200) and under several computational budgets. This suggests that most existing CL methods are simply too computationally expensive for realistic budgeted deployment. Code for this project is available at: https://github.com/drimpossible/BudgetCL.
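To make the "simple minimal baseline" concrete, below is a minimal PyTorch sketch of budgeted training with uniform replay sampling: a fixed number of gradient steps per stream time step caps the compute, which in turn implicitly limits how many memory samples are effectively replayed. The names and the budget parameterization (ReplayBuffer, steps_per_timestep) are illustrative assumptions, not the paper's exact implementation; see the BudgetCL repository for the authors' code.

```python
# Minimal sketch of a compute-budgeted uniform-replay baseline (assumed, not
# the paper's implementation). Names like ReplayBuffer and steps_per_timestep
# are hypothetical.
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Reservoir-style memory supporting uniform sampling (one assumed
    memory policy; the paper also studies other sampling strategies)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) example tensors
        self.seen = 0    # total examples observed so far

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                # Reservoir sampling: every seen example is retained with
                # equal probability capacity / seen.
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_on_stream(model, stream, buffer,
                    steps_per_timestep=10, batch_size=32, lr=0.01):
    """Train with a fixed number of SGD steps per incoming batch: the
    compute budget caps gradient steps, which implicitly restricts how
    much of memory is used, regardless of the buffer's size."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x_new, y_new in stream:              # one stream time step per item
        buffer.add(x_new, y_new)
        for _ in range(steps_per_timestep):  # the compute budget
            x, y = buffer.sample(batch_size)  # uniform sampling from memory
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Under this parameterization, the budget is controlled by steps_per_timestep alone: doubling it doubles the compute regardless of memory size, which is the axis along which the paper's compute-constrained comparisons are made.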

