UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning

03/19/2021
by Zhigang Dai, et al.

Momentum Contrast (MoCo) has achieved great success in unsupervised visual representation learning. However, many existing datasets are fully or partially labeled. To make full use of these label annotations, we propose Unified Momentum Contrast (UniMoCo), which extends MoCo to support training with arbitrary ratios of labeled and unlabeled data. Compared with MoCo, UniMoCo introduces two modifications: (1) instead of the single positive pair in MoCo, we maintain multiple positive pairs on-the-fly by comparing the query label against a label queue; (2) we propose a Unified Contrastive (UniCon) loss that supports an arbitrary number of positives and negatives from a unified pair-wise optimization perspective. UniCon is more reasonable and more powerful than the supervised contrastive loss in both theory and practice. In our experiments, we pre-train multiple UniMoCo models with different ratios of ImageNet labels and evaluate them on various downstream tasks. The results show that UniMoCo generalizes well to unsupervised, semi-supervised and supervised visual representation learning.
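
The two modifications admit a compact implementation. Below is a minimal PyTorch sketch (not the authors' code): the positive set is built on-the-fly by comparing each query's label against a label queue kept alongside the MoCo feature queue, and the loss is written in a pair-wise log-sum-exp form that handles an arbitrary number of positives and negatives. The function name, the -1 convention for unlabeled samples, and the exact loss expression are assumptions for illustration; when a query is unlabeled, only its augmented-view key is positive and the loss reduces to the standard single-positive contrastive case.

```python
import torch
import torch.nn.functional as F

def unicon_loss(query, key_pos, queue_feats, queue_labels, query_labels, t=0.07):
    """Sketch of a unified contrastive loss over a MoCo-style feature/label queue.

    query:        (B, D) L2-normalized query features
    key_pos:      (B, D) L2-normalized momentum-encoder keys (augmented views)
    queue_feats:  (K, D) queued key features
    queue_labels: (K,)   labels of queued keys, -1 for unlabeled (assumed convention)
    query_labels: (B,)   labels of the queries, -1 for unlabeled (assumed convention)
    """
    # Pairwise logits: the augmented-view key plus every key in the queue.
    l_aug = (query * key_pos).sum(dim=1, keepdim=True) / t   # (B, 1), always a positive
    l_queue = query @ queue_feats.t() / t                    # (B, K)
    logits = torch.cat([l_aug, l_queue], dim=1)              # (B, 1+K)

    # Positive mask built from the label queue: a queued key is a positive iff the
    # query is labeled and shares its label; the augmented view is always positive.
    same_label = (query_labels.unsqueeze(1) == queue_labels.unsqueeze(0)) \
                 & (query_labels.unsqueeze(1) >= 0)          # (B, K)
    pos_mask = torch.cat([torch.ones_like(l_aug, dtype=torch.bool), same_label], dim=1)

    # Assumed pair-wise form: L = log(1 + sum_n sum_p exp(l_n - l_p)), computed
    # stably as softplus(logsumexp over negatives + logsumexp over -positives).
    neg_inf = torch.finfo(logits.dtype).min
    lse_neg = torch.logsumexp(logits.masked_fill(pos_mask, neg_inf), dim=1)
    lse_pos = torch.logsumexp((-logits).masked_fill(~pos_mask, neg_inf), dim=1)
    return F.softplus(lse_neg + lse_pos).mean()
```

Because the positive mask is recomputed from the label queue at every step, the same loss covers the fully labeled, partially labeled and fully unlabeled regimes without any change to the training loop.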

