Rethinking Experience Replay: a Bag of Tricks for Continual Learning

10/12/2020
by Pietro Buzzega, et al.

In Continual Learning, a neural network is trained on a stream of data whose distribution shifts over time. Under these assumptions, it is especially challenging to learn classes appearing later in the stream while remaining accurate on previous ones. This is due to the infamous problem of catastrophic forgetting, which causes a rapid performance degradation when the classifier focuses on learning new categories. Recent literature has proposed various approaches to tackle this issue, often resorting to very sophisticated techniques. In this work, we show that naive rehearsal can be patched to achieve similar performance. We point out some shortcomings that restrain Experience Replay (ER) and propose five tricks to mitigate them. Experiments show that ER, thus enhanced, achieves accuracy gains of 51.2 and 26.9 percentage points on the CIFAR-10 and CIFAR-100 datasets respectively (memory buffer size 1000). As a result, it surpasses current state-of-the-art rehearsal-based methods.
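Since the paper's five tricks build on plain Experience Replay, the sketch below illustrates that baseline: a fixed-size memory filled by reservoir sampling, with a replayed batch interleaved into each training step. This is a minimal illustration assuming a PyTorch classifier; the names `ReservoirBuffer` and `train_step` are ours for illustration, not the authors', and none of the proposed tricks are included.

```python
import random
import torch

class ReservoirBuffer:
    """Fixed-size memory filled via reservoir sampling, so every example
    seen on the stream has an equal probability of being stored."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []   # list of input tensors
        self.labels = []     # list of 0-dim label tensors
        self.num_seen = 0

    def add(self, x, y):
        if self.num_seen < self.capacity:
            self.examples.append(x)
            self.labels.append(y)
        else:
            # Replace a stored example with probability capacity / (num_seen + 1).
            idx = random.randint(0, self.num_seen)  # inclusive upper bound
            if idx < self.capacity:
                self.examples[idx] = x
                self.labels[idx] = y
        self.num_seen += 1

    def sample(self, batch_size):
        idxs = random.sample(range(len(self.examples)),
                             min(batch_size, len(self.examples)))
        return (torch.stack([self.examples[i] for i in idxs]),
                torch.stack([self.labels[i] for i in idxs]))


def train_step(model, optimizer, criterion, buffer, x, y, replay_size=32):
    """One ER update: loss on the current batch plus loss on a replayed batch."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    if len(buffer.examples) > 0:
        mem_x, mem_y = buffer.sample(replay_size)
        loss = loss + criterion(model(mem_x), mem_y)
    loss.backward()
    optimizer.step()
    # Store the current batch sample-by-sample after the update.
    for xi, yi in zip(x, y):
        buffer.add(xi.detach(), yi)
```

With a buffer capacity of 1000, as in the reported experiments, this plain baseline is the starting point that the paper's tricks then enhance.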
