Principal Gradient Direction and Confidence Reservoir Sampling for Continual Learning

08/21/2021
by Zhiyi Chen, et al.

Task-free online continual learning aims to alleviate catastrophic forgetting of the learner on a non-i.i.d. data stream. Experience Replay (ER) is a state-of-the-art continual learning method that is broadly used as the backbone algorithm for other replay-based methods. However, the training strategy of ER is too simple to take full advantage of replayed examples, and its reservoir sampling strategy is also suboptimal. In this work, we propose a general proximal gradient framework under which ER can be viewed as a special case, and we propose two improvements accordingly: Principal Gradient Direction (PGD) and Confidence Reservoir Sampling (CRS). In Principal Gradient Direction, we optimize a target gradient that not only represents the major contribution of past gradients but also retains the new knowledge of the current gradient. We then present Confidence Reservoir Sampling, which maintains a more informative memory buffer based on a margin-based metric that measures the value of stored examples. Experiments substantiate the effectiveness of both improvements, and our new algorithm consistently boosts the performance of MIR-replay, a state-of-the-art ER-based method: it increases average accuracy by up to 7.9% and reduces forgetting by up to 15.4%.
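To make the PGD idea concrete, here is a minimal Python sketch. It assumes the "major contribution of past gradients" is taken as the top singular direction of a small matrix of replayed gradients, blended with the current gradient via a hypothetical mixing weight `alpha`; the paper's actual optimization objective may differ.

```python
import numpy as np

def principal_gradient_direction(past_grads, current_grad, alpha=0.5):
    """Sketch of a PGD-style target gradient (illustrative, not the
    paper's exact objective).

    past_grads:   (k, d) array, gradients computed on replayed examples.
    current_grad: (d,) array, gradient on the incoming batch.
    alpha:        hypothetical weight trading off past vs. new knowledge.
    """
    # Top right singular vector = principal direction of the past
    # gradients, i.e. the direction capturing most of their contribution.
    _, _, vt = np.linalg.svd(past_grads, full_matrices=False)
    principal = vt[0]
    # SVD sign is arbitrary: orient the direction so it agrees with the
    # average past gradient rather than opposing it.
    if past_grads.mean(axis=0) @ principal < 0:
        principal = -principal
    # Target gradient: blend the rescaled principal direction with the
    # current gradient so new knowledge is retained.
    return (alpha * np.linalg.norm(current_grad) * principal
            + (1.0 - alpha) * current_grad)

# Toy usage: 8 replayed gradients over a 100-parameter model.
rng = np.random.default_rng(0)
g = principal_gradient_direction(rng.normal(size=(8, 100)),
                                 rng.normal(size=100))
```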

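Likewise, a hedged sketch of what a margin-aware reservoir could look like: the classic reservoir test decides whether a new example enters the buffer, while a margin metric (here, the gap between the true-class logit and its strongest rival, an assumption on our part) decides which stored example to evict. The paper's exact margin-based value measure and update rule may differ.

```python
import random
import numpy as np

class ConfidenceReservoir:
    """Sketch of margin-aware reservoir sampling; the metric and
    eviction rule here are assumptions, not necessarily the paper's."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []    # stored (x, y) examples
        self.margins = []   # per-example margin; high = "easy" example
        self.seen = 0       # stream examples observed so far

    @staticmethod
    def margin(logits, label):
        # Gap between the true-class logit and its strongest rival.
        rival = np.max(np.delete(logits, label))
        return float(logits[label] - rival)

    def add(self, x, y, logits):
        self.seen += 1
        m = self.margin(np.asarray(logits), y)
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
            self.margins.append(m)
        elif random.random() < self.capacity / self.seen:
            # Reservoir test decides *whether* the new example enters;
            # the margin decides *which* stored example to evict (here,
            # the most confidently classified one).
            evict = int(np.argmax(self.margins))
            self.buffer[evict] = (x, y)
            self.margins[evict] = m
```

Evicting the highest-margin (most confidently classified) example is one plausible way to keep the buffer informative; the authors' metric may weight stored examples differently.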

