A Simple Framework for Contrastive Learning of Visual Representations

02/13/2020
by   Ting Chen, et al.

This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
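
To make findings (2) and (3) concrete, below is a minimal PyTorch sketch of the two pieces the abstract refers to: a small nonlinear projection head between the encoder representation and the contrastive loss, and the batch-wise NT-Xent (normalized temperature-scaled cross-entropy) loss, in which the other 2N - 2 examples in a batch of N positive pairs serve as negatives. The names ProjectionHead and nt_xent_loss, the layer sizes, and the default temperature are illustrative assumptions, not the authors' reference code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Finding (2): a learnable nonlinear map g(.) from the encoder
    representation h (e.g. the 2048-d ResNet-50 pool output) to the
    lower-dimensional space where the contrastive loss is applied."""
    def __init__(self, dim_in=2048, dim_hidden=2048, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden),
            nn.ReLU(inplace=True),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, h):
        return self.net(h)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of N positive pairs (z1[i], z2[i]).
    Every other example in the batch acts as a negative, which is why
    larger batches help (finding (3))."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N x d, unit norm
    sim = (z @ z.t()) / temperature                     # scaled cosine similarities
    # Mask self-similarity so an example is never its own candidate.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))
    # The positive for row i is its augmented counterpart at i + N (mod 2N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

In use, two augmented views of the same image batch (finding (1): e.g. random cropping composed with color distortion) are passed through a shared encoder and the head, and nt_xent_loss(head(encoder(x1)), head(encoder(x2))) is minimized.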


Related Research

10/19/2020

CLAR: Contrastive Learning of Auditory Representations

Learning rich visual representations using contrastive self-supervised l...
09/15/2021

Deep Bregman Divergence for Contrastive Learning of Visual Representations

Deep Bregman divergence measures divergence of data points using neural ...
01/13/2022

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

Despite recent progress made by self-supervised methods in representatio...
01/26/2021

Revisiting Contrastive Learning for Few-Shot Classification

Instance discrimination based contrastive learning has emerged as a lead...
10/02/2020

Hard Negative Mixing for Contrastive Learning

Contrastive learning has become a key component of self-supervised learn...
06/10/2022

Federated Momentum Contrastive Clustering

We present federated momentum contrastive clustering (FedMCC), a learnin...
04/09/2021

Towards Fine-grained Visual Representations by Combining Contrastive Learning with Image Reconstruction and Attention-weighted Pooling

This paper presents Contrastive Reconstruction, ConRec - a self-supervis...

Code Repositories

simclr

SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners



SimCLR-in-TensorFlow-2

(Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2.



MoCo-Pytorch

An unofficial PyTorch implementation of "Improved Baselines with Momentum Contrastive Learning" (MoCo v2) by X. Chen et al.



ConVIRT-pytorch

Contrastive learning of representations for image and text pairs. PyTorch implementation of the ConVIRT paper.



SimCLR_pytorch

PyTorch implementation of arxiv.org/pdf/2002.05709.pdf.

