The NT-Xent loss upper bound

05/06/2022
by Wilhelm Ågren, et al.

Self-supervised learning is a growing paradigm in deep representation learning, showing strong generalization and competitive performance in low-label data regimes. The SimCLR framework proposes the NT-Xent loss for contrastive representation learning. The objective of the loss is to maximize agreement (similarity) between sampled positive pairs. This short paper derives and proposes an upper bound on the loss and on the average similarity. An analysis of the implications is, however, not provided; we strongly encourage researchers in the field to conduct one.
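For reference, the loss that the bound concerns is the NT-Xent loss as introduced in SimCLR (Chen et al., 2020). Below is a minimal PyTorch sketch of that loss; it is not taken from this paper, and the batch size, embedding dimension, and temperature are illustrative assumptions only.

```python
# Minimal sketch of the NT-Xent loss from SimCLR (Chen et al., 2020).
# Hyperparameters (temperature, batch size, embedding dimension) are illustrative.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent over a batch of N positive pairs (2N augmented views in total)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N x d unit-norm embeddings
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # drop self-similarity from the denominator
    # The positive for view i is its counterpart at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


# Illustrative usage with random tensors standing in for projector outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

Minimizing this loss is what the abstract describes as maximizing agreement between positive pairs; the bound proposed in the paper concerns this quantity and the average similarity.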


Related research

Self-Distilled Self-Supervised Representation Learning (11/25/2021)
State-of-the-art frameworks in self-supervised learning have recently sh...

Extending Momentum Contrast with Cross Similarity Consistency Regularization (06/07/2022)
Contrastive self-supervised representation learning methods maximize the...

Self-Supervised Contrastive Representation Learning for 3D Mesh Segmentation (08/08/2022)
3D deep learning is a growing field of interest due to the vast amount o...

A priori guarantees of finite-time convergence for Deep Neural Networks (09/16/2020)
In this paper, we perform Lyapunov based analysis of the loss function t...

A Framework For Contrastive Self-Supervised Learning And Designing A New Approach (08/31/2020)
Contrastive self-supervised learning (CSL) is an approach to learn usefu...

Self-Labeling Refinement for Robust Representation Learning with Bootstrap Your Own Latent (04/09/2022)
In this work, we have worked towards two major goals. Firstly, we have i...

Forest Representation Learning Guided by Margin Distribution (05/07/2019)
In this paper, we reformulate the forest representation learning approac...
