Sy-CON: Symmetric Contrastive Loss for Continual Self-Supervised Representation Learning

06/08/2023
by   Sungmin Cha, et al.

We introduce a novel and general loss function, called Symmetric Contrastive (Sy-CON) loss, for effective continual self-supervised learning (CSSL). We first argue that the conventional loss form for continual learning, which consists of a single task-specific loss (for plasticity) and a regularizer (for stability), may not be ideal for contrastive-loss-based CSSL, which focuses on representation learning. Our reasoning is that, in contrastive-learning-based methods, the task-specific loss suffers from the decreasing diversity of negative samples, while the regularizer may hinder the learning of new, distinctive representations. To that end, we propose Sy-CON, which consists of two losses (one for plasticity and the other for stability) with a symmetric dependence on the current and past models' negative sample embeddings. We argue that our model can naturally find a good trade-off between plasticity and stability without any explicit hyperparameter tuning. We validate the effectiveness of our approach through extensive experiments, demonstrating that a MoCo-based implementation of the Sy-CON loss achieves superior performance compared to other state-of-the-art CSSL methods.
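The abstract describes a loss with two InfoNCE-style terms whose negative sets are drawn symmetrically from the current and past models. The sketch below is a hypothetical, minimal pure-Python illustration of that structure, not the paper's exact formulation: the function names (`info_nce`, `sy_con_loss`), the temperature value, and the specific assignment of past-model negatives to the plasticity term are all assumptions for illustration.

```python
import math

def dot(u, v):
    # Inner product of two embedding vectors given as lists of floats.
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, tau=0.1):
    # Standard InfoNCE: -log( exp(a.p/tau) / (exp(a.p/tau) + sum exp(a.n/tau)) )
    pos = math.exp(dot(anchor, positive) / tau)
    neg = sum(math.exp(dot(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def sy_con_loss(cur_anchor, cur_positive, cur_negatives,
                past_anchor, past_positive, past_negatives):
    # Hypothetical symmetric form: the plasticity term contrasts the
    # current model's pair against negatives embedded by the *past* model,
    plasticity = info_nce(cur_anchor, cur_positive, past_negatives)
    # while the stability term contrasts the past model's pair against
    # negatives embedded by the *current* model.
    stability = info_nce(past_anchor, past_positive, cur_negatives)
    return plasticity + stability
```

The symmetry lies in swapping only the source of the negative embeddings between the two terms, which is how (per the abstract) the trade-off between plasticity and stability can emerge without an explicit balancing hyperparameter.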


Related research

- 06/28/2021: Co^2L: Contrastive Continual Learning. Recent breakthroughs in self-supervised learning show that such algorith...
- 12/08/2021: Self-Supervised Models are Continual Learners. Self-supervised models have been shown to produce comparable or better v...
- 09/29/2022: Understanding Collapse in Non-Contrastive Siamese Representation Learning. Contrastive methods have led a recent surge in the performance of self-s...
- 12/02/2021: Probabilistic Contrastive Loss for Self-Supervised Learning. This paper proposes a probabilistic contrastive loss function for self-s...
- 12/10/2021: Concept Representation Learning with Contrastive Self-Supervised Learning. Concept-oriented deep learning (CODL) is a general approach to meet the ...
- 07/10/2023: Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning. Federated continual learning (FCL) learns incremental tasks over time fr...
- 06/08/2023: Contrastive Representation Disentanglement for Clustering. Clustering continues to be a significant and challenging task. Recent st...
