Mutual Contrastive Learning for Visual Representation Learning

04/26/2021
by Chuanguang Yang, et al.

We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of models. Benefiting from MCL, each model can learn extra contrastive knowledge from the others, leading to more meaningful feature representations for visual recognition tasks. We emphasize that MCL is conceptually simple yet empirically powerful. It is a generic framework that can be applied to both supervised and self-supervised representation learning. Experimental results on supervised and self-supervised image classification, transfer learning, and few-shot learning show that MCL leads to consistent performance gains, demonstrating that it guides the network to generate better feature representations.
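To make the core idea concrete, the sketch below pairs a standard InfoNCE contrastive loss with a symmetric KL term that lets each of two models mimic the other's contrastive distribution over a shared pool of negative keys. This is a minimal illustration only: the function names, the temperature value, and the equal weighting of the loss terms are assumptions for the example, not the paper's exact formulation or released implementation.

```python
# Minimal sketch of mutual transfer of contrastive distributions between
# two models. Embeddings are L2-normalized so dot products act as cosine
# similarities (a common convention, assumed here).
import torch
import torch.nn.functional as F

def contrastive_distribution(anchor, positive, negatives, temperature=0.1):
    """Softmax distribution of each anchor over its positive key (index 0) and shared negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
    neg_logits = anchor @ negatives.t() / temperature                         # (B, K)
    return F.softmax(torch.cat([pos_logit, neg_logits], dim=-1), dim=-1)      # (B, 1 + K)

def mcl_loss(z1, z1_pos, z2, z2_pos, negatives, temperature=0.1):
    """InfoNCE for each model plus mutual KL alignment of their contrastive distributions."""
    p1 = contrastive_distribution(z1, z1_pos, negatives, temperature)
    p2 = contrastive_distribution(z2, z2_pos, negatives, temperature)
    # Standard contrastive (InfoNCE) term: maximize probability of the positive at index 0.
    info_nce = -(p1[:, 0].log().mean() + p2[:, 0].log().mean())
    # Mutual transfer: each model matches the other's (detached) contrastive distribution.
    kl_12 = F.kl_div(p1.log(), p2.detach(), reduction="batchmean")
    kl_21 = F.kl_div(p2.log(), p1.detach(), reduction="batchmean")
    return info_nce + kl_12 + kl_21

# Toy usage: 8 anchors per model, 64 shared negative keys, 128-dim embeddings.
z1, z1_pos = torch.randn(8, 128), torch.randn(8, 128)
z2, z2_pos = torch.randn(8, 128), torch.randn(8, 128)
negatives = torch.randn(64, 128)
print(mcl_loss(z1, z1_pos, z2, z2_pos, negatives).item())
```

In this sketch the KL terms use detached targets, so each model is pulled toward the other's current contrastive distribution without collapsing the two into a single gradient path; how the cohort is built and how the terms are weighted are design choices left open here.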

