Gradient Based Memory Editing for Task-Free Continual Learning

06/27/2020
by Xisen Jin, et al.

Prior work on continual learning often operates in a "task-aware" manner, assuming that task boundaries and the task identities of data instances are known at all times. In practice, however, such information is rarely exposed to the method (hence the name "task-free"), a setting that remains relatively underexplored. Recent attempts at task-free continual learning build on earlier memory replay methods and focus on developing memory management strategies so that model performance on previously seen instances is best retained. In this paper, looking from a complementary angle, we propose a principled approach to "edit" stored examples, which aims to carry more up-to-date information from the data stream into the memory. Specifically, we use gradient updates to edit stored examples so that they are more likely to be forgotten in upcoming model updates; the intuition is that replaying such forgettable examples is more effective at counteracting forgetting. Experiments on five benchmark datasets show that the proposed method can be seamlessly combined with baselines to significantly improve performance. Code has been released at https://github.com/INK-USC/GMED.
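
The editing step described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration of the gradient-based editing idea under simplifying assumptions, not the authors' exact implementation: the function name gmed_style_edit, the hyperparameters inner_lr and edit_lr, and the plain one-step SGD look-ahead are illustrative; the released repository contains the actual editing rule.

# A minimal PyTorch sketch of the gradient-based editing idea, assuming a
# classification model; names and hyperparameters here are illustrative.
import copy
import torch
import torch.nn.functional as F

def gmed_style_edit(model, opt, x_mem, y_mem, x_stream, y_stream,
                    inner_lr=0.1, edit_lr=0.05):
    """Edit stored examples so the upcoming model update would forget them."""
    # 1) Memory loss under the current parameters.
    x_edit = x_mem.clone().requires_grad_(True)
    loss_before = F.cross_entropy(model(x_edit), y_mem)

    # 2) Look-ahead: virtually update a copy of the model on the stream batch.
    lookahead = copy.deepcopy(model)
    stream_loss = F.cross_entropy(lookahead(x_stream), y_stream)
    grads = torch.autograd.grad(stream_loss, lookahead.parameters())
    with torch.no_grad():
        for p, g in zip(lookahead.parameters(), grads):
            p -= inner_lr * g

    # 3) Estimated forgetting: increase of the memory loss after the virtual update.
    loss_after = F.cross_entropy(lookahead(x_edit), y_mem)
    forgetting = loss_after - loss_before

    # 4) Gradient-ascend the estimated forgetting with respect to the stored inputs.
    (x_grad,) = torch.autograd.grad(forgetting, x_edit)
    x_edited = (x_edit + edit_lr * x_grad).detach()

    # 5) Real update: replay the edited examples together with the stream batch.
    opt.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x_stream, x_edited])),
                           torch.cat([y_stream, y_mem]))
    loss.backward()
    opt.step()
    return x_edited  # written back to memory in place of x_mem

In a full training loop, the returned x_edited would overwrite the corresponding memory slots, so the stored examples gradually carry more up-to-date information from the drifting data stream.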

