Conditional Negative Sampling for Contrastive Learning of Visual Representations

10/05/2020
by   Mike Wu, et al.
Recent methods for learning unsupervised visual representations, dubbed contrastive learning, optimize the noise-contrastive estimation (NCE) bound on the mutual information between two views of an image. NCE uses randomly sampled negative examples to normalize the objective. In this paper, we show that choosing difficult negatives, i.e., those more similar to the current instance, can yield stronger representations. To do this, we introduce a family of mutual information estimators that sample negatives conditionally – in a "ring" around each positive. We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE. Experimentally, we find that our approach, applied on top of existing models (IR, CMC, and MoCo), improves accuracy by 2–5% on four standard image datasets. Moreover, we find continued benefits when transferring features to a variety of new image distributions from the Meta-Dataset collection and to a variety of downstream tasks such as object detection, instance segmentation, and keypoint detection.
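To make the "ring" idea concrete, here is a minimal sketch of conditional negative sampling: negatives are drawn only from candidates whose similarity to the anchor falls inside a percentile band, rather than uniformly at random as in standard NCE. The function name and the percentile parameters are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ring_negative_sample(anchor, candidates, lower_pct=50, upper_pct=90,
                         k=8, rng=None):
    """Sample k negatives whose cosine similarity to the anchor lies in a
    percentile "ring" [lower_pct, upper_pct].

    Hypothetical sketch: the paper conditions the NCE negative distribution
    on similarity to the positive; the exact band and sampler may differ.
    """
    rng = rng or np.random.default_rng()

    # Cosine similarity between the anchor and every candidate negative.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a

    # Keep only candidates inside the similarity ring: hard enough to be
    # informative, but not so close that they are likely false negatives.
    lo, hi = np.percentile(sims, [lower_pct, upper_pct])
    ring = np.where((sims >= lo) & (sims <= hi))[0]

    # Draw k negatives uniformly from within the ring.
    idx = rng.choice(ring, size=min(k, len(ring)), replace=False)
    return candidates[idx]
```

Narrowing the band toward the upper percentiles yields harder negatives (higher bias, lower variance, per the paper's analysis), while widening it toward uniform sampling recovers ordinary NCE behavior.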

