Modulate Your Spectrum in Self-Supervised Learning

05/26/2023
by Xi Weng, et al.

Whitening loss provides a theoretical guarantee against feature collapse in self-supervised learning (SSL) with joint embedding architectures. One typical implementation of whitening loss is hard whitening, which applies a whitening transformation to the embedding and imposes the loss on the whitened output. In this paper, we propose the spectral transformation (ST) framework, which maps the spectrum of the embedding to a desired distribution during the forward pass and modulates the spectrum of the embedding via implicit gradient updates during the backward pass. We show that whitening transformation is, by definition, a special instance of ST, and our empirical investigation reveals other instances that can also avoid collapse. Furthermore, we propose a new instance of ST, called IterNorm with trace loss (INTL). We theoretically prove that INTL avoids collapse and modulates the spectrum of the embedding toward an equal-eigenvalue distribution over the course of optimization. Moreover, INTL achieves 76.6% top-1 accuracy under linear evaluation on ImageNet with a ResNet-50 backbone, exceeding the supervised baseline, and this result is obtained with a batch size of only 256. Comprehensive experiments show that INTL is a promising SSL method in practice. The code is available at https://github.com/winci-ai/intl.
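To make the idea concrete, below is a minimal PyTorch sketch of a hard-whitening objective in the spirit of INTL: embeddings of two augmented views are approximately whitened with IterNorm (Newton's iterations converging to the inverse square root of the covariance), and a loss is imposed on the whitened outputs. This is only an illustration assembled from the abstract; the function names, the number of iterations, and the loss form are assumptions, and in particular the exact "trace loss" of INTL is not spelled out here, so a plain MSE between whitened views stands in for it. The authors' reference implementation is at https://github.com/winci-ai/intl.

```python
# Minimal sketch (not the reference implementation) of a hard-whitening
# objective in the spirit of INTL, assembled from the abstract alone.
import torch

def iter_norm(z, n_iters=5, eps=1e-5):
    """Approximately whiten z (N x D) with IterNorm-style Newton iterations."""
    z = z - z.mean(dim=0, keepdim=True)            # center the embeddings
    cov = (z.T @ z) / (z.shape[0] - 1)             # D x D covariance
    trace = cov.diagonal().sum() + eps
    cov_n = cov / trace                            # trace-normalize so the iteration converges
    p = torch.eye(cov.shape[0], device=z.device, dtype=z.dtype)
    for _ in range(n_iters):
        # Newton's iteration: p converges to cov_n^{-1/2}
        p = 1.5 * p - 0.5 * (p @ p @ p @ cov_n)
    return z @ (p / trace.sqrt())                  # whitened output

def intl_style_loss(z1, z2, n_iters=5):
    """Hypothetical loss on the whitened outputs of two views.

    The abstract only says INTL imposes a trace loss on IterNorm'ed
    embeddings; an MSE between the whitened views is used here as a
    stand-in hard-whitening loss.
    """
    w1, w2 = iter_norm(z1, n_iters), iter_norm(z2, n_iters)
    return ((w1 - w2) ** 2).sum(dim=1).mean()

# Usage with random stand-in embeddings (batch 256, dimension 128):
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(intl_style_loss(z1, z2).item())
```

Because IterNorm is run for only a few Newton iterations, the whitening is approximate and differentiable end-to-end, which is what allows the spectrum of the embedding to be modulated through the backward pass rather than forced to identity in a single hard step.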


Related research

10/07/2022 · An Investigation into Whitening Loss for Self-supervised Learning
A desirable objective in self-supervised learning (SSL) is to avoid feat...

04/08/2023 · EMP-SSL: Towards Self-Supervised Learning in One Training Epoch
Recently, self-supervised learning (SSL) has achieved tremendous success...

05/15/2021 · Mean Shift for Self-Supervised Learning
Most recent self-supervised learning (SSL) algorithms learn features by ...

07/20/2021 · ReSSL: Relational Self-Supervised Learning with Weak Augmentation
Self-supervised Learning (SSL) including the mainstream contrastive lear...

10/20/2022 · MixMask: Revisiting Masked Siamese Self-supervised Learning in Asymmetric Distance
Recent advances in self-supervised learning integrate Masked Modeling an...

07/04/2021 · Bag of Instances Aggregation Boosts Self-supervised Learning
Recent advances in self-supervised learning have experienced remarkable ...
