
Masked Contrastive Representation Learning

11/11/2022
by Yuchong Yao, et al.
The University of Melbourne

Masked image modelling (e.g., Masked AutoEncoder) and contrastive learning (e.g., Momentum Contrast) have shown impressive performance on unsupervised visual representation learning. This work presents Masked Contrastive Representation Learning (MACRL) for self-supervised visual pre-training. In particular, MACRL leverages the effectiveness of both masked image modelling and contrastive learning. We adopt an asymmetric setting for the siamese network (i.e., an encoder-decoder structure in both branches), where one branch applies a higher mask ratio and stronger data augmentation, while the other adopts weaker data corruptions. We optimize a contrastive learning objective on the features produced by the encoders of the two branches. Furthermore, we minimize an L_1 reconstruction loss on the decoders' outputs. In our experiments, MACRL achieves superior results on various vision benchmarks, including CIFAR-10, CIFAR-100, Tiny-ImageNet, and two other ImageNet subsets. Our framework provides unified insights for self-supervised visual pre-training and future research.
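
To make the combined objective concrete, the following is a minimal PyTorch-style sketch of a MACRL-like training step, assuming simplified MLP encoder/decoder branches and random patch masking. The module names, mask ratios, temperature, and loss weight are illustrative assumptions, not the authors' exact configuration.

# Sketch of a MACRL-style objective: asymmetric masking on two siamese
# branches, InfoNCE on pooled encoder features, L_1 loss on decoder outputs.
# All hyperparameters below are placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_mask(patches, mask_ratio):
    # Zero out a random subset of patch tokens; patches has shape (B, N, D).
    B, N, _ = patches.shape
    keep = torch.rand(B, N, device=patches.device) > mask_ratio
    return patches * keep.unsqueeze(-1)

class Branch(nn.Module):
    # One siamese branch: an encoder for features, a decoder for reconstruction.
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, patches):
        z = self.encoder(patches)       # features for the contrastive objective
        recon = self.decoder(z)         # per-patch reconstruction for the L_1 objective
        return z.mean(dim=1), recon     # pooled feature, reconstruction

def info_nce(q, k, temperature=0.2):
    # Contrastive loss between pooled features of the two branches.
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

def macrl_step(branch_a, branch_b, strong_view, weak_view, lam=1.0):
    # Asymmetric corruption: higher mask ratio on the strongly augmented view.
    masked_a = random_mask(strong_view, mask_ratio=0.75)
    masked_b = random_mask(weak_view, mask_ratio=0.25)
    feat_a, recon_a = branch_a(masked_a)
    feat_b, recon_b = branch_b(masked_b)
    contrastive = info_nce(feat_a, feat_b)
    reconstruction = F.l1_loss(recon_a, strong_view) + F.l1_loss(recon_b, weak_view)
    return contrastive + lam * reconstruction

In the actual framework the branches would typically be ViT-style encoder-decoder networks operating on image patches, and the abstract does not specify details such as momentum updates, stop-gradients, or the relative loss weighting, so those choices above are placeholders only.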

Code Repositories

MACRL

Masked Contrastive Representation Learning

