Continual Learning with Tiny Episodic Memories

02/27/2019
by Arslan Chaudhry, et al.

Learning with less supervision is a major challenge in artificial intelligence. One sensible approach to decrease the amount of supervision is to leverage prior experience and transfer knowledge from tasks seen in the past. However, a necessary condition for successful transfer is the ability to remember how to perform previous tasks. The Continual Learning (CL) setting, whereby an agent learns from a stream of tasks without seeing any example twice, is an ideal framework to investigate how to accrue such knowledge. In this work, we consider supervised learning tasks and methods that leverage a very small episodic memory for continual learning. Through an extensive empirical analysis across four benchmark datasets adapted to CL, we observe that a very simple baseline, which jointly trains on examples from the current task as well as examples stored in the memory, outperforms state-of-the-art CL approaches both with and without episodic memory. Surprisingly, repeated learning over tiny episodic memories does not harm generalization on past tasks, as joint training on data from subsequent tasks acts like a data-dependent regularizer. We discuss and evaluate different approaches to write into the memory. Most notably, reservoir sampling works remarkably well across the board, except when the memory size is extremely small; in that case, writing strategies that guarantee an equal representation of all classes work better. Overall, these methods should be considered a strong baseline candidate when benchmarking new CL approaches.
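
To make the baseline concrete, the following is a minimal PyTorch-style sketch of the two ingredients the abstract names: a tiny episodic memory written via reservoir sampling, and a training step that jointly trains on the current batch and a batch replayed from memory. This is an illustrative reconstruction under stated assumptions, not the authors' released code; the class and function names (ReservoirMemory, train_step, mem_batch_size) are hypothetical.

    # Sketch of experience replay with a reservoir-sampled episodic memory.
    # Assumes a standard classifier `model` and optimizer; names are illustrative.
    import random
    import torch
    import torch.nn.functional as F

    class ReservoirMemory:
        """Fixed-capacity memory filled by reservoir sampling, so every example
        seen in the stream has an equal chance of being stored."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.examples = []      # list of (x, y) tensor pairs
            self.num_seen = 0       # total stream examples observed so far

        def write(self, x, y):
            self.num_seen += 1
            if len(self.examples) < self.capacity:
                self.examples.append((x, y))
            else:
                # Keep the new example with probability capacity / num_seen,
                # replacing a uniformly chosen stored one.
                j = random.randint(0, self.num_seen - 1)
                if j < self.capacity:
                    self.examples[j] = (x, y)

        def sample(self, batch_size):
            batch = random.sample(self.examples, min(batch_size, len(self.examples)))
            xs, ys = zip(*batch)
            return torch.stack(xs), torch.stack(ys)

    def train_step(model, optimizer, x_cur, y_cur, memory, mem_batch_size=10):
        """One update of the joint-training baseline: a single gradient step on
        the current-task batch concatenated with a batch drawn from memory."""
        if memory.examples:
            x_mem, y_mem = memory.sample(mem_batch_size)
            x = torch.cat([x_cur, x_mem])
            y = torch.cat([y_cur, y_mem])
        else:
            x, y = x_cur, y_cur
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        # Write the current examples into memory after the update.
        for xi, yi in zip(x_cur, y_cur):
            memory.write(xi, yi)
        return loss.item()

Per the abstract, when the memory is extremely small the reservoir write would be swapped for a strategy guaranteeing equal class representation, e.g. a fixed per-class ring buffer.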

Related research

06/26/2017 · Gradient Episodic Memory for Continual Learning
04/15/2020 · Dark Experience for General Continual Learning: a Strong, Simple Baseline
04/22/2022 · Memory Bounds for Continual Learning
02/02/2023 · Real-Time Evaluation in Online Continual Learning: A New Paradigm
09/28/2022 · A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal
09/25/2022 · Exploring Example Influence in Continual Learning
01/09/2023 · CaSpeR: Latent Spectral Regularization for Continual Learning
