Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning

10/16/2022
by Yejia Liu, et al.

Continual learning faces the crucial challenge of catastrophic forgetting. To address it, experience replay (ER), which maintains a tiny subset of samples from previous tasks, is commonly used. Existing ER works usually focus on refining the learning objective for each task under a static memory construction policy. In this paper, we formulate dynamic memory construction in ER as a combinatorial optimization problem that directly minimizes the global loss across all experienced tasks. As a starting point, we apply three tactics to solve the problem in the offline setting. To provide an approximate solution in the online continual learning setting, we further propose Global Pseudo-task Simulation (GPS), which mimics future catastrophic forgetting of the current task by permutation. Our empirical results and analyses show that GPS consistently improves accuracy across four commonly used vision benchmarks. We also show that GPS can serve as a unified framework for integrating the memory construction policies of existing ER works.
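The abstract's core idea — building the replay memory by simulating a future "pseudo-task" via permutation and scoring candidates against it — can be illustrated with a toy sketch. The helpers below (`make_pseudo_task`, `select_memory`) and the feature-permutation scheme are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
import random

def make_pseudo_task(samples, rng):
    """Simulate a future task by permuting input features (labels kept).
    A toy stand-in for GPS-style pseudo-task construction."""
    pseudo = []
    for x, y in samples:
        idx = list(range(len(x)))
        rng.shuffle(idx)  # random feature permutation mimics distribution shift
        pseudo.append(([x[i] for i in idx], y))
    return pseudo

def select_memory(candidates, memory_size, loss_fn, rng):
    """Crude greedy proxy for the combinatorial objective: keep the
    candidates that incur the highest loss on the simulated pseudo-task,
    i.e. the ones most at risk of being forgotten."""
    pseudo = make_pseudo_task(candidates, rng)
    scored = sorted(zip(candidates, pseudo),
                    key=lambda pair: loss_fn(*pair[1]), reverse=True)
    return [c for c, _ in scored[:memory_size]]

# Toy usage: three samples, a simple surrogate loss, memory budget of 2.
rng = random.Random(0)
cands = [([1.0, 2.0, 3.0], 0), ([0.5, 0.5, 0.5], 1), ([3.0, 1.0, 2.0], 0)]
loss = lambda x, y: abs(sum(x) - y)
memory = select_memory(cands, 2, loss, rng)
```

The greedy top-k selection here only approximates the combinatorial minimization described in the abstract; the paper's offline tactics and online GPS procedure are more involved.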


Related research

12/26/2022
Saliency-Augmented Memory Completion for Continual Learning
Continual Learning is considered a key step toward next-generation Artif...

08/03/2023
Improving Replay Sample Selection and Storage for Less Forgetting in Continual Learning
Continual learning seeks to enable deep learners to train on a series of...

10/11/2022
Toward Sustainable Continual Learning: Detection and Knowledge Repurposing of Similar Tasks
Most existing works on continual learning (CL) focus on overcoming the c...

04/29/2023
The Ideal Continual Learner: An Agent That Never Forgets
The goal of continual learning is to find a model that solves multiple l...

04/11/2021
Reducing Representation Drift in Online Continual Learning
We study the online continual learning paradigm, where agents must learn...

05/30/2023
Class Conditional Gaussians for Continual Learning
Dealing with representation shift is one of the main problems in online ...
