
Efficient Continual Learning Ensembles in Neural Network Subspaces

by Thang Doan et al.

A growing body of research in continual learning focuses on the catastrophic forgetting problem. While many attempts have been made to alleviate this problem, most methods assume a single model in the continual learning setup. In this work, we question this assumption and show that employing ensemble models can be a simple yet effective way to improve continual learning performance. However, the training and inference costs of ensembles grow linearly with the number of models. Motivated by this limitation, we leverage recent advances in the deep learning optimization literature, such as mode connectivity and neural network subspaces, to derive a new method that is computationally advantageous and can outperform state-of-the-art continual learning algorithms.
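To make the subspace idea concrete, below is a minimal sketch (all names hypothetical, not the paper's code) of how a neural network subspace avoids the linear cost of a conventional ensemble: two endpoint weight sets define a line segment in weight space, every convex combination along the segment is a usable model, and ensembling averages predictions from a few sampled points. The joint training of the endpoints that the paper relies on is omitted here; only storage and inference are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single linear layer y = x @ W. Two endpoint weight
# sets (theta_a, theta_b) span a line segment in weight space; every
# convex combination theta(alpha) is one member of the subspace.
theta_a = rng.normal(size=(4, 2))
theta_b = rng.normal(size=(4, 2))

def subspace_weights(alpha):
    """Weights at position alpha in [0, 1] along the segment."""
    return (1.0 - alpha) * theta_a + alpha * theta_b

def forward(x, alpha):
    return x @ subspace_weights(alpha)

def ensemble_predict(x, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # Average predictions from a few sampled subspace members.
    # Inference cost grows with the number of forward passes, but
    # only two endpoints' worth of parameters is ever stored --
    # unlike a conventional ensemble, which stores one full model
    # per member.
    return np.mean([forward(x, a) for a in alphas], axis=0)

x = rng.normal(size=(3, 4))
pred = ensemble_predict(x)
```

Because the toy model is linear, averaging predictions here coincides with predicting from the averaged weights; for a real nonlinear network the two differ, which is exactly why sampling several subspace members yields ensemble-like diversity.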
