GRASP: A Rehearsal Policy for Efficient Online Continual Learning

by Md Yousuf Harun, et al.

Continual learning (CL) in deep neural networks (DNNs) involves incrementally accumulating knowledge in a DNN from a growing data stream. A major challenge in CL is that non-stationary data streams cause catastrophic forgetting of previously learned abilities. Rehearsal, which stores past observations in a buffer and mixes them with new observations during learning, is a popular and effective way to mitigate this problem. This raises a question: which stored samples should be selected for rehearsal? Choosing the samples that are best for learning, rather than selecting them at random, could lead to significantly faster learning. For class-incremental learning, prior work has shown that a simple class-balanced random selection policy outperforms more sophisticated methods. Here, we revisit this question by exploring a new sample selection policy called GRASP. GRASP selects the most prototypical (class-representative) samples first and then gradually selects less prototypical (harder) examples to update the DNN. GRASP has little additional compute or memory overhead compared to uniform selection, enabling it to scale to large datasets. We evaluate GRASP and other policies by conducting CL experiments on the large-scale ImageNet-1K and Places-LT image classification datasets. GRASP outperforms all other rehearsal policies. Beyond vision, we also demonstrate that GRASP is effective for CL on five text classification datasets.
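The easy-to-hard selection idea in the abstract can be sketched with a simple prototype-distance ordering: compute each class's mean feature vector, then rank that class's buffered samples from closest (most prototypical) to farthest (hardest). This is a minimal illustrative sketch of the described policy, not the authors' implementation; the function name and the use of mean embeddings as prototypes are assumptions.

```python
import numpy as np

def grasp_order(features, labels):
    """Order buffer indices easy-to-hard: within each class, samples
    closest to the class mean (prototype) come first. Classes are
    interleaved so early selections remain class-balanced.
    Illustrative sketch only, not the authors' code."""
    per_class = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        proto = features[idx].mean(axis=0)                # class prototype
        dist = np.linalg.norm(features[idx] - proto, axis=1)
        per_class.append(idx[np.argsort(dist)])           # prototypical first
    # round-robin across classes: most prototypical of each class first
    out = []
    for i in range(max(len(o) for o in per_class)):
        for o in per_class:
            if i < len(o):
                out.append(int(o[i]))
    return out
```

A rehearsal loop would then draw minibatches from the front of this ordering early in training and move toward the harder tail over time.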


