Dark Experience for General Continual Learning: a Strong, Simple Baseline

04/15/2020
by Pietro Buzzega, et al.

Neural networks struggle to learn continually, as they catastrophically forget old knowledge whenever the data distribution changes over time. Recently, Continual Learning has inspired a plethora of approaches and evaluation settings; however, most of them overlook the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable. We work towards General Continual Learning (GCL), where task boundaries blur and the domain and class distributions shift either gradually or suddenly. We address it through Dark Experience Replay, namely matching the network's logits sampled throughout the optimization trajectory, thus promoting consistency with its past. Through an extensive analysis on standard benchmarks, we show that this seemingly simple baseline outperforms consolidated approaches while requiring only limited resources. To provide a better understanding, we further introduce MNIST-360, a novel GCL evaluation setting.
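
To make the idea concrete, here is a minimal PyTorch sketch of one Dark Experience Replay update, in line with the abstract's description: a reservoir-sampled buffer stores past inputs together with the logits the network produced for them, and an extra MSE term pulls the current logits on replayed inputs back towards those recorded values. The `ReservoirBuffer` class, the `der_step` helper, and the `alpha` and `replay_batch` defaults are illustrative names and choices of ours, not code from the paper.

```python
import random
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-size memory filled by reservoir sampling, so every example
    seen in the stream has the same probability of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []   # list of (input, logits) pairs
        self.num_seen = 0

    def add(self, x, z):
        self.num_seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append((x, z))
        else:
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.examples[j] = (x, z)

    def sample(self, batch_size):
        batch = random.sample(self.examples, min(batch_size, len(self.examples)))
        xs, zs = zip(*batch)
        return torch.stack(xs), torch.stack(zs)


def der_step(model, optimizer, x, y, buffer, alpha=0.5, replay_batch=32):
    """One training step on a stream batch (x, y) with Dark Experience Replay."""
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)            # plain loss on the incoming batch
    if buffer.examples:
        bx, bz = buffer.sample(replay_batch)
        bx, bz = bx.to(x.device), bz.to(x.device)
        # consistency term: keep today's logits close to the ones the
        # network produced when these examples were first stored
        loss = loss + alpha * F.mse_loss(model(bx), bz)
    loss.backward()
    optimizer.step()
    # store detached inputs and logits for future replay
    for xi, zi in zip(x.detach().cpu(), logits.detach().cpu()):
        buffer.add(xi, zi)
    return loss.item()
```

Matching logits rather than ground-truth labels is what gives the method its name: the replayed targets carry the network's "dark knowledge" about secondary classes, in the spirit of knowledge distillation. Note also that nothing in this loop requires task identities or boundaries, which is what makes the recipe compatible with the General Continual Learning setting described above.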

Related research

02/19/2020 · Using Hindsight to Anchor Past Knowledge in Continual Learning
In continual learning, the learner faces a stream of data whose distribu...

11/29/2022 · SimCS: Simulation for Online Domain-Incremental Continual Segmentation
Continual Learning is a step towards lifelong intelligence where models ...

02/02/2023 · Real-Time Evaluation in Online Continual Learning: A New Paradigm
Current evaluations of Continual Learning (CL) methods typically assume ...

02/27/2019 · Continual Learning with Tiny Episodic Memories
Learning with less supervision is a major challenge in artificial intell...

10/22/2020 · Continual Learning in Low-rank Orthogonal Subspaces
In continual learning (CL), a learner is faced with a sequence of tasks,...

09/28/2022 · A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal
Online continual learning (OCL) aims to train neural networks incrementa...

03/27/2023 · Exploring Continual Learning of Diffusion Models
Diffusion models have achieved remarkable success in generating high-qua...
