Self-Labeling Refinement for Robust Representation Learning with Bootstrap Your Own Latent

04/09/2022
by Siddhant Garg, et al.

In this work, we pursue two major goals. First, we investigate the importance of Batch Normalisation (BN) layers in the non-contrastive representation learning framework Bootstrap Your Own Latent (BYOL). Our experiments indicate that BN layers are not necessary for representation learning in BYOL. Second, BYOL learns only from positive pairs of images and ignores other semantically similar images in the same input batch. We therefore introduce two new loss functions, Cross-Cosine Similarity Loss (CCSL) and Cross-Sigmoid Similarity Loss (CSSL), which identify semantically similar pairs within an input batch and reduce the distance between their representations. Training BYOL with the proposed CCSL loss (76.87%) surpasses the performance of Vanilla BYOL (71.04%), while training with the CSSL loss performs comparably with Vanilla BYOL.
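The abstract does not give the exact form of the proposed losses, but the idea of pulling an online embedding toward in-batch targets that are semantically similar to it, in addition to its own BYOL positive, can be sketched as follows. Everything here is a hypothetical illustration: the function name, the similarity `threshold`, and the way extra pairs are averaged are assumptions, not the paper's definition.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Row-wise L2 normalisation of a (N, D) batch of embeddings."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def cross_cosine_similarity_loss(online, target, threshold=0.7):
    """Hypothetical CCSL-style loss (illustrative, not the paper's formula).

    Each online embedding is pulled toward its own target view (the
    standard BYOL positive pair) and, additionally, toward any other
    in-batch target whose cosine similarity exceeds `threshold`.
    """
    z = l2_normalize(online)   # (N, D) online-network projections
    t = l2_normalize(target)   # (N, D) target-network projections
    sim = z @ t.T              # (N, N) cross cosine similarities
    n = sim.shape[0]

    # BYOL's MSE between normalised vectors equals 2 - 2*cos(z, t).
    positives = np.diag(sim)
    loss = np.mean(2.0 - 2.0 * positives)

    # Extra attraction for semantically similar, non-identical pairs.
    mask = (sim > threshold) & ~np.eye(n, dtype=bool)
    if mask.any():
        loss += np.sum(2.0 - 2.0 * sim[mask]) / mask.sum()
    return loss
```

With orthogonal inputs no cross-batch pair clears the threshold and the loss reduces to the plain BYOL term; as in-batch samples become more alike, additional attractive terms are switched on.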


Related research

- Batch Curation for Unsupervised Contrastive Representation Learning (08/19/2021)
- Semantic Positive Pairs for Enhancing Contrastive Instance Discrimination (06/28/2023)
- Denoising Cosine Similarity: A Theory-Driven Approach for Efficient Representation Learning (04/19/2023)
- Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss (01/12/2023)
- The NT-Xent loss upper bound (05/06/2022)
- Clustering-Oriented Representation Learning with Attractive-Repulsive Loss (12/18/2018)
- The Contextual Loss for Image Transformation with Non-Aligned Data (03/06/2018)
