An Investigation into Whitening Loss for Self-supervised Learning

10/07/2022
by Xi Weng, et al.

A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the condition that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystifying several interesting phenomena and serves as a pivot connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding; they only require the embedding to be full-rank, and this full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantage of BW-based methods in preventing collapse while avoiding their disadvantage of requiring large batch sizes. Experimental results on ImageNet classification and COCO object detection show that the proposed CW-RGP has promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP.
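To make the whitening-loss idea concrete, here is a minimal NumPy sketch. It is an illustration under stated assumptions, not the paper's implementation: `zca_whiten` applies ZCA whitening over the batch, `whitening_loss` measures the distance between whitened embeddings of positive pairs, and `whiten_random_groups` illustrates the random-group-partition idea by whitening randomly chosen channel groups (the paper's exact channel-whitening operator differs in detail).

```python
import numpy as np

def zca_whiten(x, eps=1e-5):
    """ZCA-whiten a (batch, dim) embedding matrix over the batch axis."""
    x = x - x.mean(axis=0, keepdims=True)          # center each channel
    cov = x.T @ x / x.shape[0] + eps * np.eye(x.shape[1])
    s, u = np.linalg.eigh(cov)                     # eigendecomposition of covariance
    w = u @ np.diag(1.0 / np.sqrt(s)) @ u.T        # ZCA transform: cov^{-1/2}
    return x @ w

def whitening_loss(z1, z2):
    """Mean squared distance between whitened embeddings of positive pairs."""
    return np.mean((zca_whiten(z1) - zca_whiten(z2)) ** 2)

def whiten_random_groups(z, num_groups=2, rng=None):
    """Randomly partition channels into groups and whiten each group separately
    (an illustrative stand-in for the random group partition in CW-RGP)."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(z.shape[1])             # random channel ordering
    out = np.empty_like(z)
    for group in np.array_split(perm, num_groups):
        out[:, group] = zca_whiten(z[:, group])
    return out
```

After whitening, the empirical covariance of the output is (approximately) the identity, which is exactly the full-rank/decorrelation property the analysis above is concerned with.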


Related research:

- Modulate Your Spectrum in Self-Supervised Learning (05/26/2023)
- Exploit Clues from Views: Self-Supervised and Regularized Learning for Multiview Object Recognition (03/28/2020)
- MixMask: Revisiting Masked Siamese Self-supervised Learning in Asymmetric Distance (10/20/2022)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications (04/02/2022)
- How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders (10/15/2022)
- An Embedding-Dynamic Approach to Self-supervised Learning (07/07/2022)
- Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism (03/04/2023)
