Practical Recommendations for Replay-based Continual Learning Methods

03/19/2022
by   Gabriele Merlin, et al.

Continual Learning requires a model to learn from a stream of dynamic, non-stationary data without forgetting previously acquired knowledge. Several approaches have been developed in the literature to tackle this challenge. Among them, replay approaches have empirically proved to be the most effective: they save a subset of samples in a memory buffer and rehearse that knowledge during training on subsequent tasks. However, the literature still lacks an extensive comparison and a deeper understanding of the implementation subtleties of different replay variants. The aim of this work is to compare and analyze existing replay-based strategies and to provide practical recommendations for developing efficient, effective and generally applicable ones. In particular, we investigate the role of the memory size, compare different weighting policies, and discuss the impact of data augmentation, which allows better performance to be reached with smaller memory sizes.
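To make the rehearsal mechanism concrete, the following is a minimal sketch of a replay buffer. It is not the paper's implementation; the paper compares several storage and weighting policies, and reservoir sampling is just one common choice assumed here for illustration (the class and method names are hypothetical).

```python
import random

class ReplayBuffer:
    """Hypothetical replay buffer sketch using reservoir sampling,
    which keeps an approximately uniform sample over the stream."""

    def __init__(self, capacity):
        self.capacity = capacity  # fixed memory size (a key hyperparameter)
        self.buffer = []          # stored (input, label) samples
        self.seen = 0             # total samples observed so far

    def add(self, sample):
        # Reservoir sampling: each stream element ends up stored
        # with probability capacity / seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = sample

    def sample(self, batch_size):
        # Draw a rehearsal batch to mix with the current task's batch.
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)
```

During training on a new task, each mini-batch from the current data would be concatenated with a batch drawn via `sample()`, optionally applying data augmentation to the replayed samples, which is one way a small memory can be stretched further.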

Related research:
- Sample Condensation in Online Continual Learning (06/23/2022)
- An Investigation of Replay-based Approaches for Continual Learning (08/15/2021)
- Architect, Regularize and Replay (ARR): a Flexible Hybrid Approach for Continual Learning (01/06/2023)
- KRNet: Towards Efficient Knowledge Replay (05/23/2022)
- Cost-effective On-device Continual Learning over Memory Hierarchy with Miro (08/11/2023)
- Partial Hypernetworks for Continual Learning (06/19/2023)
- Improving Task-free Continual Learning by Distributionally Robust Memory Evolution (07/15/2022)
