
Adversarial Shapley Value Experience Replay for Task-Free Continual Learning

by Zheda Mai, et al.

Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity. In this paper, we focus specifically on the task-free setting, where data are streamed online without task metadata or clear task boundaries. A simple and highly effective class of algorithms for this setting is Experience Replay (ER), which selectively stores data samples from previous experience and leverages them to interleave memory-based and online batch learning updates. Recent advances in ER have proposed novel methods for scoring which samples to store in memory and which memory samples to interleave with online data during learning updates. In this paper, we contribute a novel Adversarial Shapley value ER (ASER) method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of the current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that ASER provides competitive or improved performance on a variety of datasets compared to state-of-the-art ER-based continual learning methods.
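To make the ER baseline concrete: a common design pairs reservoir sampling for deciding which streamed samples to store with uniform random retrieval of memory samples to interleave into each update. A minimal sketch of that baseline (class and method names are illustrative, not the paper's API; ASER replaces the uniform scoring below with Shapley-value scores):

```python
import random


class ReservoirBuffer:
    """Fixed-capacity memory for experience replay.

    Reservoir sampling keeps each of the n_seen streamed samples in
    memory with equal probability capacity / n_seen, without knowing
    the stream length or any task boundaries in advance.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def update(self, sample):
        # Storage policy: fill the buffer, then overwrite a random slot
        # with probability capacity / n_seen.
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample

    def retrieve(self, batch_size):
        # Retrieval policy: uniform random memory batch to interleave
        # with the incoming online batch during a learning update.
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)
```

In a training loop, each online batch would both call `update` on its samples and be concatenated with a `retrieve`d memory batch before the gradient step; score-based ER variants change only the two policies marked above.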



