Exploring the Equivalence of Siamese Self-Supervised Learning via A Unified Gradient Framework

12/09/2021
by Chenxin Tao, et al.

Self-supervised learning has shown great potential for extracting powerful visual representations without human annotations. Various works have been proposed to approach self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) dispense with negative samples by introducing a predictor network and the stop-gradient operation; (3) feature decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. Starting from different motivations, these methods appear quite different in their designed loss functions, and their final accuracy numbers also vary because different works use different networks and training tricks. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and that the momentum encoder is the key factor in boosting performance. Building on this unified framework, we propose UniGrad, a simple but effective gradient form for self-supervised learning. It requires neither a memory bank nor a predictor network, yet still achieves state-of-the-art performance and can easily adopt other training strategies. Extensive experiments on linear evaluation and many downstream tasks also demonstrate its effectiveness. Code will be released.
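The abstract does not spell out the unified gradient itself, but the idea can be made concrete with a short sketch. The snippet below is a hypothetical PyTorch implementation, assuming (as the text hints) that the gradient combines a pull toward the positive view with a push computed from a running correlation matrix of embeddings; the function name `unigrad_loss`, the weight `lam`, and the momentum `rho` are illustrative choices rather than the paper's exact formulation.

```python
# Hypothetical sketch of a UniGrad-style objective (not the paper's exact code).
# Assumption: the gradient combines a pull toward the positive view with a push
# derived from a moving-average correlation matrix of embeddings.
import torch
import torch.nn.functional as F

def unigrad_loss(u1, u2, corr, lam=1.0, rho=0.99):
    """u1, u2: embeddings of two augmented views, shape (N, D).
    corr: persistent (D, D) running correlation matrix, updated in place."""
    # L2-normalize both views so dot products are cosine similarities.
    u1 = F.normalize(u1, dim=1)
    u2 = F.normalize(u2, dim=1)
    # Update the running correlation matrix with the current batch (no gradient).
    with torch.no_grad():
        batch_corr = u2.T @ u2 / u2.shape[0]
        corr.mul_(rho).add_(batch_corr, alpha=1.0 - rho)
    # Positive term: pull the two views together (stop-gradient on the target view).
    pos = -(u1 * u2.detach()).sum(dim=1).mean()
    # Negative term: 0.5 * lam * u1^T C u1, whose gradient w.r.t. u1 is lam * C @ u1,
    # pushing each embedding away from directions occupied by other samples.
    neg = 0.5 * lam * ((u1 @ corr) * u1).sum(dim=1).mean()
    return pos + neg
```

In this sketch the gradient with respect to u1 is -u2 + lam * corr @ u1, so the only extra state is a D x D correlation buffer (e.g., corr = torch.zeros(D, D) kept across iterations), which is consistent with the abstract's claim that no memory bank or predictor network is required.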


