AdaER: An Adaptive Experience Replay Approach for Continual Lifelong Learning

08/07/2023
by Xingyu Li, et al.

Continual lifelong learning is a machine learning framework inspired by human learning, where learners are trained to continuously acquire new knowledge in a sequential manner. However, the non-stationary nature of streaming training data poses a significant challenge known as catastrophic forgetting: the rapid loss of previously learned knowledge when new tasks are introduced. While approaches such as experience replay (ER) have been proposed to mitigate this issue, their performance remains limited, particularly in the class-incremental scenario, which is considered both natural and highly challenging. In this paper, we present a novel algorithm, called adaptive experience replay (AdaER), to address the challenge of continual lifelong learning. AdaER consists of two stages: memory replay and memory update. In the memory replay stage, AdaER introduces a contextually-cued memory recall (C-CMR) strategy, which selectively replays the memories that conflict most with the current input in terms of both data and task. In the memory update stage, AdaER incorporates an entropy-balanced reservoir sampling (E-BRS) strategy to improve the memory buffer by maximizing its information entropy. To evaluate the effectiveness of AdaER, we conduct experiments on established supervised continual lifelong learning benchmarks, focusing on class-incremental learning scenarios. The results demonstrate that AdaER outperforms existing continual lifelong learning baselines, highlighting its efficacy in mitigating catastrophic forgetting and improving learning performance.
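To make the two-stage structure concrete, below is a minimal Python sketch of an experience-replay training step with a reservoir-sampled memory buffer. The abstract describes C-CMR and E-BRS only at a high level, so the buffer class, the uniform sampling, and the plain reservoir update shown here are illustrative assumptions standing in for those strategies, not the authors' implementation.

```python
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Fixed-capacity memory buffer for experience replay (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []          # stored (x, y, task_id) tuples
        self.num_seen = 0       # total examples observed so far

    def update(self, example):
        # Plain reservoir sampling; E-BRS would additionally bias the
        # replacement choice to keep the class distribution high-entropy.
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, batch_size):
        # Uniform replay sample; C-CMR would instead score stored examples
        # by how much they conflict with the current batch and replay those.
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys, _ = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)


def er_train_step(model, optimizer, buffer, x, y, task_id, replay_size=32):
    """One continual-learning step: joint loss on the incoming batch and a
    replayed batch (memory replay), followed by a buffer update (memory update)."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if buffer.data:
        rx, ry = buffer.sample(replay_size)
        loss = loss + F.cross_entropy(model(rx), ry)
    loss.backward()
    optimizer.step()
    for xi, yi in zip(x, y):
        buffer.update((xi.detach(), int(yi), task_id))
    return loss.item()
```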
