
Adversarial Shapley Value Experience Replay for Task-Free Continual Learning

08/31/2020
by Zheda Mai, et al.

Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity. In this paper, we focus specifically on the task-free setting, where data are streamed online without task metadata or clear task boundaries. A simple and highly effective class of algorithms for this setting is Experience Replay (ER), which selectively stores data samples from previous experience and leverages them to interleave memory-based and online batch learning updates. Recent advances in ER have proposed novel methods for scoring which samples to store in memory and which memory samples to interleave with online data during learning updates. We contribute a novel Adversarial Shapley value ER (ASER) method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of the classes currently being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that ASER provides competitive or improved performance on a variety of datasets compared to state-of-the-art ER-based continual learning methods.
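
The scoring idea in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative Python implementation in the spirit of ASER, not the paper's exact procedure: it uses the closed-form KNN Shapley value recursion of Jia et al. (2019) on fixed feature embeddings, and scores each memory candidate by a "cooperative" term (its mean Shapley value on held-out memory samples, rewarding preservation of old class boundaries) minus an "adversarial" term (its mean Shapley value on the current input batch, penalizing candidates that reinforce the new classes). The function names, the choice of k, and the simple mean-based combination are assumptions made for illustration.

```python
# Minimal sketch of adversarial Shapley-value scoring in the spirit of ASER.
# Assumptions (not taken from the paper verbatim): inputs are fixed feature
# embeddings, `k` is the KNN hyperparameter, and the final score is a mean
# "cooperative" term minus a mean "adversarial" term.
import numpy as np

def knn_shapley(cand_feats, cand_labels, eval_feat, eval_label, k):
    """Closed-form KNN Shapley value of every candidate with respect to a
    single evaluation point (recursion from Jia et al., 2019)."""
    n = len(cand_labels)
    # Sort candidates by distance to the evaluation point (closest first).
    order = np.argsort(np.linalg.norm(cand_feats - eval_feat, axis=1))
    match = (cand_labels[order] == eval_label).astype(float)
    sv = np.zeros(n)
    sv[n - 1] = match[n - 1] / n  # farthest candidate
    for j in range(n - 2, -1, -1):  # recurse toward the closest candidate
        sv[j] = sv[j + 1] + (match[j] - match[j + 1]) / k * min(k, j + 1) / (j + 1)
    out = np.zeros(n)
    out[order] = sv  # map values back to the original candidate order
    return out

def aser_style_score(cand_feats, cand_labels,
                     mem_feats, mem_labels,
                     cur_feats, cur_labels, k=3):
    """Score memory candidates: high Shapley value on memory evaluation
    points (preserve old boundaries) minus Shapley value on the current
    input batch (interfere with the new classes)."""
    coop = np.mean([knn_shapley(cand_feats, cand_labels, f, y, k)
                    for f, y in zip(mem_feats, mem_labels)], axis=0)
    adv = np.mean([knn_shapley(cand_feats, cand_labels, f, y, k)
                   for f, y in zip(cur_feats, cur_labels)], axis=0)
    return coop - adv
```

In a replay step, one would score all buffer candidates with a function like this and interleave the highest-scoring ones with the incoming batch for the learning update.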

Related research

Optimizing Class Distribution in Memory for Multi-Label Online Continual Learning (09/23/2022)
Batch-level Experience Replay with Review for Continual Learning (07/11/2020)
New Insights on Reducing Abrupt Representation Change in Online Continual Learning (03/08/2022)
Gradient Based Memory Editing for Task-Free Continual Learning (06/27/2020)
Offline-Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement (03/20/2023)
Reducing Representation Drift in Online Continual Learning (04/11/2021)
Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach (06/28/2021)