Whitening for Self-Supervised Representation Learning

07/13/2020
by Aleksandr Ermolov, et al.

Recent literature on self-supervised learning is based on the contrastive loss, where image instances that share the same semantic content ("positives") are contrasted with instances extracted from other images ("negatives"). However, for the learning to be effective, many negatives must be compared with each positive pair. This is not only computationally demanding, but it also requires that the positive and the negative representations be kept consistent with each other over a long training period. In this paper we propose a different direction and a new loss function for self-supervised learning which is based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, which compensates for the lack of a large number of negatives and avoids degenerate solutions where all the sample representations collapse to a single point. We empirically show that our loss accelerates self-supervised training and that the learned representations are much more effective for downstream tasks than those of previously published work.
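As a sketch of the idea (not the authors' exact implementation; the function names, the eigendecomposition-based ZCA whitening, and the `eps` stabilizer are assumptions), the whitening step followed by a plain MSE between positive pairs can be written as:

```python
import numpy as np

def whiten(z, eps=1e-5):
    """ZCA-whiten a batch of features z (N x D): after whitening, the batch
    has zero mean and (approximately) identity covariance, which "scatters"
    the samples and prevents collapse to a single point."""
    z = z - z.mean(axis=0, keepdims=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    # inverse square root of the covariance via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return z @ w

def whitening_mse_loss(z1, z2):
    """Hypothetical whitening-based SSL loss: whiten the concatenated batch,
    then pull positive pairs together with MSE -- no negatives needed."""
    z = whiten(np.concatenate([z1, z2], axis=0))
    n = z1.shape[0]
    return np.mean(np.sum((z[:n] - z[n:]) ** 2, axis=1))
```

Because the whitening constraint already keeps the batch spread out, the loss only needs an attractive term between positives, which is what removes the dependence on large sets of negatives.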

Related research:

- Self-supervised learning of audio representations using angular contrastive loss (11/10/2022)
- Extending Momentum Contrast with Cross Similarity Consistency Regularization (06/07/2022)
- Self-Distilled Self-Supervised Representation Learning (11/25/2021)
- Exploring the Equivalence of Siamese Self-Supervised Learning via A Unified Gradient Framework (12/09/2021)
- Additional Positive Enables Better Representation Learning for Medical Images (05/31/2023)
- QK Iteration: A Self-Supervised Representation Learning Algorithm for Image Similarity (11/15/2021)
- G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling (09/25/2020)
