Improving Online Continual Learning Performance and Stability with Temporal Ensembles

06/29/2023
by Albin Soutif--Cormerais, et al.

Neural networks are highly effective when trained on large datasets for many iterations. However, when they are trained online on non-stationary streams of data, their performance degrades (1) because the online setup limits the availability of data, and (2) because the non-stationary nature of the data causes catastrophic forgetting. Furthermore, several recent works (Caccia et al., 2022; Lange et al., 2023, arXiv:2205.13452) showed that replay methods used in continual learning suffer from the stability gap, which is encountered when the model is evaluated continually rather than only at task boundaries. In this article, we study model ensembling as a way to improve performance and stability in online continual learning. We observe that naively ensembling models coming from a variety of training tasks considerably increases performance in online continual learning. Starting from this observation, and drawing inspiration from ensembling methods in semi-supervised learning, we use a lightweight temporal ensemble that computes the exponential moving average (EMA) of the weights at test time, and show that it can drastically increase performance and stability when used in combination with several methods from the literature.
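To make the temporal-ensembling idea concrete, here is a minimal, framework-agnostic sketch of an EMA over model weights. The abstract does not give implementation details, so the function name, the flat-list weight representation, and the decay value are all illustrative assumptions, not the paper's actual code:

```python
def update_ema(ema_weights, weights, decay=0.99):
    """In-place exponential moving average of model weights:
    ema <- decay * ema + (1 - decay) * current.
    The EMA copy is used only at evaluation time; online training
    keeps updating `weights` as usual, so the extra cost is one
    weighted sum per step (hence "lightweight")."""
    for i, (e, w) in enumerate(zip(ema_weights, weights)):
        ema_weights[i] = decay * e + (1 - decay) * w

# Toy usage: the EMA weights lag behind and smooth over the online
# training trajectory, which is what stabilizes continual evaluation.
weights = [0.0, 0.0]
ema_weights = list(weights)
for step in range(1, 101):
    weights = [float(step), 2.0 * step]   # stand-in for an SGD update
    update_ema(ema_weights, weights, decay=0.9)
```

In a deep-learning framework the same update would be applied per parameter tensor (e.g. over `model.parameters()` in PyTorch), with the EMA copy evaluated in place of the raw model at test time.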


