EqCo: Equivalent Rules for Self-supervised Contrastive Learning

10/05/2020
by Benjin Zhu, et al.

In this paper, we propose a method, named EqCo (Equivalent Rules for Contrastive Learning), to make self-supervised learning insensitive to the number of negative samples in the contrastive learning framework. Inspired by the InfoMax principle, we point out that the margin term in the contrastive loss needs to be adaptively scaled according to the number of negative pairs in order to keep the mutual information bound and the gradient magnitude steady. EqCo bridges the performance gap across a wide range of negative sample sizes, so that, for the first time, we can perform self-supervised contrastive training using only a few negative pairs (e.g., fewer than 256 per query) on large-scale vision tasks like ImageNet, with little accuracy drop. This stands in sharp contrast to the large-batch training or memory bank mechanisms widely used in current practice. Equipped with EqCo, our simplified MoCo (SiMo) achieves accuracy comparable to MoCo v2 on ImageNet (linear evaluation protocol) while involving only 16 negative pairs per query instead of 65536, suggesting that large quantities of negative samples might not be a critical factor in contrastive learning frameworks.
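The adaptive rescaling the abstract describes can be sketched as a rescaled InfoNCE loss: the sum over negatives is multiplied by alpha/K, where K is the actual number of negatives and alpha is a fixed reference count. The function name, default values, and numpy formulation below are illustrative, not the paper's official implementation.

```python
import numpy as np

def eqco_infonce(pos_sim, neg_sims, tau=0.1, alpha=256):
    """EqCo-style rescaled InfoNCE loss (sketch).

    pos_sim:  similarity between the query and its positive key (scalar)
    neg_sims: similarities between the query and its K negative keys
    tau:      temperature
    alpha:    reference number of negatives; the negative term is
              rescaled by alpha / K so the loss (and hence the gradient
              magnitude) stays comparable across different K.
    """
    neg_sims = np.asarray(neg_sims, dtype=float)
    K = len(neg_sims)
    pos = np.exp(pos_sim / tau)
    # Rescale the negative partition term by alpha / K.
    neg = (alpha / K) * np.sum(np.exp(neg_sims / tau))
    return -np.log(pos / (pos + neg))
```

With alpha = K the expression reduces to the standard InfoNCE loss; with alpha fixed, uniformly distributed negatives give the same loss value whether K is 16 or 65536, which is the sense in which small and large negative pools become "equivalent".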


Related research

05/27/2023
Kernel-SSL: Kernel KL Divergence for Self-Supervised Learning
Contrastive learning usually compares one positive anchor sample with lo...

06/29/2021
Self-Contrastive Learning
This paper proposes a novel contrastive learning framework, coined as Se...

03/15/2022
InfoDCL: A Distantly Supervised Contrastive Learning Framework for Social Meaning
Existing supervised contrastive learning frameworks suffer from two majo...

03/30/2022
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
To avoid collapse in self-supervised learning (SSL), a contrastive loss ...

10/20/2020
BYOL works even without batch statistics
Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach ...

01/28/2023
Unbiased and Efficient Self-Supervised Incremental Contrastive Learning
Contrastive Learning (CL) has been proved to be a powerful self-supervis...

04/28/2021
A Note on Connecting Barlow Twins with Negative-Sample-Free Contrastive Learning
In this report, we relate the algorithmic design of Barlow Twins' method...
