Adversarial Shapley Value Experience Replay for Task-Free Continual Learning

by Zheda Mai, et al.

Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity. In this paper, we specifically focus on the task-free setting, where data are streamed online without task metadata or clear task boundaries. A simple and highly effective class of algorithms for this setting is Experience Replay (ER), which selectively stores data samples from previous experience and leverages them to interleave memory-based and online batch learning updates. Recent advances in ER have proposed novel methods for scoring which samples to store in memory and which memory samples to interleave with online data during learning updates. In this paper, we contribute a novel Adversarial Shapley value ER (ASER) method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of the current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that ASER provides competitive or improved performance on a variety of datasets compared to state-of-the-art ER-based continual learning methods.
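To make the ER skeleton in the abstract concrete, here is a minimal sketch of the storage-and-retrieval loop it describes: a fixed-size memory filled by reservoir sampling (a common baseline storage policy) with random retrieval for the interleaved replay batch. This is an illustrative baseline only, not ASER itself; the `ReservoirBuffer` class and its methods are hypothetical names, and ASER would replace the random retrieval with its adversarial Shapley value scoring.

```python
import random

class ReservoirBuffer:
    """Fixed-size memory holding a uniform sample of the stream seen so far
    (reservoir sampling) -- a standard baseline storage policy for ER."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []       # stored (x, y) samples
        self.n_seen = 0      # total stream samples observed

    def update(self, sample):
        """Decide whether the incoming stream sample replaces a stored one."""
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Each of the n_seen samples ends up stored with prob capacity/n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample

    def retrieve(self, k):
        """Random retrieval baseline. ASER would instead score each stored
        sample (preserve old-class boundaries, interfere with current-class
        boundaries) and return the top-scoring ones."""
        return random.sample(self.data, min(k, len(self.data)))
```

In the online loop, each incoming batch would be concatenated with `buffer.retrieve(k)` for a single gradient update, then fed to `buffer.update` sample by sample, so memory-based and online learning are interleaved as the abstract describes.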
