Batch Model Consolidation: A Multi-Task Model Consolidation Framework

05/25/2023
by Iordanis Fostiropoulos, et al.

In Continual Learning (CL), a model is required to learn a stream of tasks sequentially without significant performance degradation on previously learned tasks. Current approaches fail for long sequences of tasks from diverse domains and difficulties. Many existing CL approaches are difficult to apply in practice due to excessive memory cost or training time, or are tightly coupled to a single device. With intuition derived from widely applied mini-batch training, we propose Batch Model Consolidation (BMC) to support more realistic CL under conditions where multiple agents are exposed to a range of tasks. During a regularization phase, BMC trains multiple expert models in parallel on a set of disjoint tasks. Each expert maintains weight similarity to a base model through a stability loss and constructs a buffer from a fraction of the task's data. During the consolidation phase, we combine the learned knowledge of 'batches' of expert models using a batched consolidation loss on memory data that aggregates all buffers. We thoroughly evaluate each component of our method in an ablation study and demonstrate its effectiveness on the standardized benchmark datasets Split-CIFAR-100, Tiny-ImageNet, and the Stream dataset composed of 71 image classification tasks from diverse domains and difficulties. Our method outperforms the next best CL approach by 70% and is the only approach that can maintain performance at the end of 71 tasks. Our benchmark can be accessed at https://github.com/fostiropoulos/stream_benchmark
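The two losses described above can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the paper's implementation: the function names `stability_loss` and `batched_consolidation_loss`, the quadratic weight-similarity penalty, and the MSE matching of expert outputs are stand-ins chosen for clarity; the paper's exact formulations may differ.

```python
import torch
import torch.nn.functional as F


def stability_loss(expert, base, strength=1.0):
    """Keep an expert's weights close to the base model.

    Sketched here as an L2 penalty over parameter differences,
    with the base model treated as a fixed reference.
    """
    penalty = sum(
        (p_e - p_b.detach()).pow(2).sum()
        for p_e, p_b in zip(expert.parameters(), base.parameters())
    )
    return strength * penalty


def batched_consolidation_loss(base, experts, memory_x):
    """Consolidate a 'batch' of experts into the base model.

    Sketched as matching the base model's outputs to each expert's
    outputs on memory data aggregated from all expert buffers.
    """
    loss = 0.0
    for expert in experts:
        with torch.no_grad():
            target = expert(memory_x)  # expert acts as a fixed teacher
        loss = loss + F.mse_loss(base(memory_x), target)
    return loss / len(experts)
```

In this reading, each expert would add `stability_loss(expert, base)` to its task loss during the regularization phase, and the base model would then be updated during consolidation by minimizing `batched_consolidation_loss` over mini-batches drawn from the aggregated buffers.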

