Contrastive Continual Learning with Feature Propagation

12/03/2021
by Xuejun Han et al.

Classical machine learning models are designed to tackle a single task and cannot accommodate newly emerging tasks or classes, whereas such a capability is more practical and human-like in the real world. To address this shortcoming, continual learners are designed to learn a stream of tasks despite domain and class shifts between tasks. In this paper, we propose a general feature-propagation-based contrastive continual learning method that can handle multiple continual learning scenarios. Specifically, we align the current and previous representation spaces by means of feature propagation and contrastive representation learning to bridge the domain shifts among distinct tasks. To further mitigate class-wise shifts in the feature representation, a supervised contrastive loss is exploited to pull example embeddings of the same class closer together than those of different classes. Extensive experimental results demonstrate the strong performance of the proposed method on six continual learning benchmarks compared to a group of state-of-the-art continual learning methods.
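The supervised contrastive loss the abstract refers to can be sketched as follows. This is a minimal NumPy version of a SupCon-style objective (Khosla et al., 2020): each example is pulled toward other examples of its class and pushed away from the rest. It is an illustrative sketch, not the paper's implementation; the function name, temperature value, and toy data below are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss sketch (not the paper's exact code).
    embeddings: (N, D) array; rows are L2-normalized inside.
    labels: (N,) integer class labels."""
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature               # pairwise cosine similarities / temperature
    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)      # exclude self-similarity
    sim_max = sim.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: other examples sharing the anchor's label
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    pos_counts = pos_mask.sum(axis=1)
    # average the log-probability over each anchor's positives
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return loss_per_anchor[pos_counts > 0].mean()
```

When same-class embeddings already coincide and different classes point in opposite directions, the loss is near zero; mislabeled clusters drive it up, which is exactly the class-separation pressure the method relies on.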


Related research

06/04/2023  Towards Robust Feature Learning with t-vFM Similarity for Continual Learning
03/16/2023  Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning
08/21/2023  Real World Time Series Benchmark Datasets with Distribution Shifts: Global Crude Oil Price and Volatility
02/07/2022  Dataset Condensation with Contrastive Signals
08/05/2022  Task-agnostic Continual Hippocampus Segmentation for Smooth Population Shifts
04/27/2022  Executive Function: A Contrastive Value Policy for Resampling and Relabeling Perceptions via Hindsight Summarization?
