It's all About Consistency: A Study on Memory Composition for Replay-Based Methods in Continual Learning

07/04/2022
by Julio Hurtado, et al.

Continual Learning methods strive to mitigate Catastrophic Forgetting (CF), where knowledge from previously learned tasks is lost when learning a new one. Among these algorithms, some maintain a subset of samples from previous tasks during training; this subset is referred to as a memory. Such methods show outstanding performance while being conceptually simple and easy to implement. Yet, despite their popularity, little has been done to understand which elements should be included in the memory. Currently, the memory is often filled via random sampling, with no guiding principle to aid in retaining previous knowledge. In this work, we propose a criterion based on the learning consistency of a sample, called Consistency AWare Sampling (CAWS), which prioritizes samples that are easier for deep networks to learn. We study three memory-based methods, AGEM, GDumb, and Experience Replay, on the MNIST, CIFAR-10, and CIFAR-100 datasets. We show that selecting the most consistent elements yields performance gains when constrained by a compute budget; under no such constraint, random sampling is a strong baseline. Even then, however, applying CAWS to Experience Replay improves performance over the random baseline. Finally, we show that CAWS achieves results similar to those of a popular memory selection method while requiring significantly fewer computational resources.
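The abstract already conveys the core idea: rank samples by how consistently the network learns them and populate the replay memory with the most consistent ones. The sketch below illustrates one plausible reading, assuming consistency is measured as the fraction of training epochs in which a sample is classified correctly; the paper's exact criterion is not specified in this abstract, and the helper names `update_consistency` and `select_memory` are hypothetical.

```python
import numpy as np

def update_consistency(history, predictions, labels):
    """Record one epoch of per-sample correctness (1.0 if correct, else 0.0)."""
    history.append((predictions == labels).astype(np.float32))
    return history

def select_memory(history, memory_size):
    """Return the indices of the most consistently learned samples."""
    scores = np.stack(history).mean(axis=0)   # per-sample consistency in [0, 1]
    return np.argsort(-scores)[:memory_size]  # highest consistency first

# Toy usage: three epochs of predictions over six samples, memory of size two.
labels = np.array([0, 1, 1, 0, 1, 0])
history = []
for preds in (np.array([0, 1, 0, 0, 1, 1]),
              np.array([0, 1, 1, 0, 0, 1]),
              np.array([0, 1, 1, 0, 1, 0])):
    history = update_consistency(history, preds, labels)
print(select_memory(history, memory_size=2))  # two of the always-correct samples {0, 1, 3}
```

This scoring pass reuses statistics gathered during ordinary training, which is consistent with the abstract's claim that CAWS needs significantly fewer computational resources than selection methods that require extra optimization.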


