Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning

12/21/2022
by   Julien Denize, et al.

Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives to be contrasted with other instances, called negatives, which are treated as noise. However, many instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should capture the relations between instances, i.e., semantic similarity and dissimilarity, which contrastive learning harms by treating all negatives as noise. To address this issue, we propose Similarity Contrastive Estimation (SCE), a novel formulation of contrastive learning that uses the semantic similarity between instances. Our training objective is a soft contrastive one: it brings positives closer and estimates a continuous distribution to push away or pull in negative instances according to their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol with fewer pretraining epochs, and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for video representation pretraining and that the learned representation generalizes to downstream video tasks.
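The soft contrastive objective described above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes L2-normalized embeddings, a memory queue of other instances, and hypothetical hyperparameter names (`temp`, `temp_target`, `lam`). The target distribution mixes a one-hot vector on the positive with a softened similarity distribution over the queue, so negatives that look semantically similar are pushed away less (or pulled in), rather than being treated uniformly as noise.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sce_loss(query, pos, queue, temp=0.1, temp_target=0.07, lam=0.5):
    """Soft contrastive (SCE-style) loss for one batch -- illustrative sketch.

    query: (B, D) online-branch embeddings.
    pos:   (B, D) target-branch embeddings of the positives.
    queue: (K, D) embeddings of other instances ("negatives").
    All embeddings are assumed L2-normalized; hyperparameters are assumptions.
    """
    B, K = query.shape[0], queue.shape[0]
    # Online predictions: similarity of each query to its positive (col 0)
    # and to every queue instance (cols 1..K), scaled by a temperature.
    logits = np.concatenate(
        [np.sum(query * pos, axis=1, keepdims=True),  # (B, 1)
         query @ queue.T],                            # (B, K)
        axis=1) / temp
    # Soft target: mix a one-hot on the positive with a softened similarity
    # distribution estimated from the target-branch embeddings.
    sim_target = softmax((pos @ queue.T) / temp_target, axis=1)  # (B, K)
    one_hot = np.zeros((B, 1 + K))
    one_hot[:, 0] = 1.0
    target = lam * one_hot + (1.0 - lam) * np.concatenate(
        [np.zeros((B, 1)), sim_target], axis=1)
    # Soft cross-entropy between the target distribution and the predictions.
    log_pred = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.sum(target * log_pred, axis=1))
```

With `lam=1.0` the target collapses to the one-hot positive and the loss reduces to a standard InfoNCE-style contrastive objective, which makes the "soft" relaxation explicit.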
