Centroid Distance Distillation for Effective Rehearsal in Continual Learning

03/06/2023
by Daofeng Liu, et al.

Rehearsal, i.e., retraining on a small stored subset of data from old tasks, has proven effective against catastrophic forgetting in continual learning. However, because the sampled subset may be strongly biased with respect to the original dataset, retraining on it can drive continual domain drift of the old tasks in feature space, which in turn causes forgetting. In this paper, we tackle the continual domain drift problem with centroid distance distillation. First, we propose a centroid caching mechanism that samples data points based on constructed centroids, reducing the sampling bias in rehearsal. Then, we present a centroid distance distillation that stores only the centroid distances to reduce continual domain drift. Experiments on four continual learning datasets demonstrate the superiority of the proposed method and show that the continual domain drift is reduced.
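The abstract does not give implementation details, so the following is only a minimal, hypothetical PyTorch-style sketch of the two ideas as described above. All function names and signatures are assumptions for illustration: centroids are taken as per-class feature means, "centroid caching" is read as keeping the exemplars nearest each centroid, and the distillation term penalizes changes in each sample's distances to the centroids (so only those distances, not the old features, need to be stored). The paper's actual constructions may differ.

```python
# Hypothetical sketch only -- not the authors' implementation.
import torch
import torch.nn.functional as F


def build_centroids(features, labels):
    """Assumed centroid construction: the mean feature vector per class."""
    return {int(c): features[labels == c].mean(dim=0) for c in labels.unique()}


def centroid_cache_sample(features, labels, centroids, per_class):
    """Centroid caching (as read from the abstract): keep the `per_class`
    exemplars closest to each class centroid, so the stored rehearsal
    subset is less biased than a uniform random sample."""
    keep = []
    for c, mu in centroids.items():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        dists = torch.cdist(features[idx], mu.unsqueeze(0)).squeeze(1)
        keep.append(idx[dists.argsort()[:per_class]])
    return torch.cat(keep)


def centroid_distance_loss(new_features, stored_distances, centroid_matrix):
    """Centroid distance distillation (as read from the abstract): keep each
    rehearsal sample's distances to the K centroids close to the distances
    recorded when the task was first learned. Only `stored_distances`,
    a (B, K) tensor, has to be kept in memory."""
    new_distances = torch.cdist(new_features, centroid_matrix)  # (B, K)
    return F.mse_loss(new_distances, stored_distances)
```

Under this reading, the memory saving comes from storing the (B, K) distance matrix instead of the full old feature vectors, and the distillation loss constrains feature-space geometry rather than the features themselves, which is what would counteract the continual domain drift.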
