Improved Baselines with Momentum Contrastive Learning

03/09/2020
by Xinlei Chen, et al.

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo—namely, using an MLP projection head and more data augmentation—we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.
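To make the two modifications concrete, here is a minimal PyTorch sketch of what they typically look like: replacing the encoder's final linear projection with a 2-layer MLP head, and extending the augmentation pipeline with randomly applied Gaussian blur. This is an illustrative sketch, not the authors' released code; the specific values (hidden width, blur sigma, crop scale, jitter strength, 128-d output) are assumed reference settings rather than details stated in this abstract.

```python
import random

import torch
import torch.nn as nn
from PIL import ImageFilter
from torchvision import models, transforms


class RandomGaussianBlur:
    """Blur a PIL image with a sigma drawn uniformly from a range (assumed 0.1-2.0)."""

    def __init__(self, sigma=(0.1, 2.0)):
        self.sigma = sigma

    def __call__(self, img):
        sigma = random.uniform(*self.sigma)
        return img.filter(ImageFilter.GaussianBlur(radius=sigma))


# "More data augmentation": the usual crop / color-jitter / grayscale / flip
# pipeline plus randomly applied Gaussian blur. Exact parameters are assumptions.
train_augmentation = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([RandomGaussianBlur()], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def build_encoder(feature_dim=128, mlp_head=True):
    """ResNet-50 encoder whose final fc layer serves as the projection head."""
    encoder = models.resnet50()
    hidden_dim = encoder.fc.in_features  # 2048 for ResNet-50
    if mlp_head:
        # MLP projection head (2-layer, as popularized by SimCLR).
        encoder.fc = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feature_dim),
        )
    else:
        # Single linear projection, as in the original MoCo baseline.
        encoder.fc = nn.Linear(hidden_dim, feature_dim)
    return encoder


if __name__ == "__main__":
    encoder = build_encoder()
    out = encoder(torch.randn(2, 3, 224, 224))
    print(out.shape)  # torch.Size([2, 128])
```

In the MoCo framework, both the query and momentum (key) encoders would be built this way; the rest of the training loop (queue, momentum update, contrastive loss) is unchanged by these two modifications.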

Related research

04/23/2021
DeepfakeUCL: Deepfake Detection via Unsupervised Contrastive Learning
Face deepfake detection has seen impressive results recently. Nearly all...

11/13/2019
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representati...

10/22/2020
Momentum Contrast Speaker Representation Learning
Unsupervised representation learning has shown remarkable achievement by...

05/15/2023
Improved baselines for vision-language pre-training
Contrastive learning has emerged as an efficient framework to learn mult...

10/16/2020
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding
Data augmentation has been demonstrated as an effective strategy for imp...

12/21/2021
Max-Margin Contrastive Learning
Standard contrastive learning approaches usually require a large number ...

04/05/2023
Adaptive Data Augmentation for Contrastive Learning
In computer vision, contrastive learning is the most advanced unsupervis...
