When Does Contrastive Visual Representation Learning Work?

05/12/2021
by Elijah Cole, et al.

Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.
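The contrastive pretraining studied here refers to methods in the SimCLR family, which typically optimize the NT-Xent (InfoNCE) objective: embeddings of two augmented views of the same image are pulled together while all other images in the batch act as negatives. As an illustration only (not the authors' code), a minimal NumPy sketch of that loss:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) loss for a batch of positive pairs (z1[i], z2[i]).

    z1, z2: arrays of shape (n, d) holding embeddings of two augmented
    views of the same n images. Returns the mean contrastive loss.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # The positive for row i is row i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Row-wise log-softmax via a numerically stable log-sum-exp.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss falls as positive pairs align: feeding two nearly identical views yields a lower loss than feeding unrelated embeddings, which is the gradient signal contrastive pretraining exploits.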


