With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations

04/29/2021
by Debidatta Dwibedi, et al.

Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, we are interested in using positives from other instances in the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR), samples the nearest neighbors from the dataset in the latent space and treats them as positives. This provides more semantic variations than pre-defined transformations. We find that using the nearest neighbor as a positive in contrastive losses improves performance significantly on ImageNet classification, from 71.7% to 75.6% top-1 accuracy. On semi-supervised learning benchmarks we improve performance significantly when only 1% of ImageNet labels are available, from 53.8% to 56.5%. On transfer learning benchmarks our method outperforms state-of-the-art methods (including supervised learning with ImageNet) on 8 out of 12 downstream datasets. Furthermore, we demonstrate empirically that our method is less reliant on complex data augmentations: we see a relative reduction of only 2.1% in ImageNet top-1 accuracy when training with only random crops.
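To make the idea concrete, here is a minimal PyTorch-style sketch of a nearest-neighbor contrastive loss of the kind the abstract describes. It is not the authors' released code; the function name, the fixed support-set tensor `queue`, and the default temperature are assumptions for illustration, and the paper's full recipe (symmetrized loss, prediction head on one branch, queue updates every batch) is omitted.

```python
import torch
import torch.nn.functional as F

def nnclr_style_loss(z1, z2, queue, temperature=0.1):
    """Sketch of a nearest-neighbor contrastive (InfoNCE-style) loss.

    z1, z2:  (B, D) embeddings of two augmented views of the same batch.
    queue:   (Q, D) support set of embeddings from previous batches,
             used only to look up nearest-neighbor positives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)

    # Nearest neighbor of each view-1 embedding in the support set
    # (cosine similarity); the positive now comes from another instance.
    nn_idx = (z1 @ queue.T).argmax(dim=1)
    nn = queue[nn_idx]                                   # (B, D)

    # InfoNCE: the neighbor of view 1 should match view 2 of the same
    # image, contrasted against view 2 of every other image in the batch.
    logits = (nn @ z2.T) / temperature                   # (B, B)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

The only change relative to a standard two-view contrastive loss is the nearest-neighbor lookup: instead of contrasting `z1` directly against `z2`, the query is replaced by its closest embedding in the support set, which injects positives with more semantic variation than augmentations alone.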


Related research:
- pNNCLR: Stochastic Pseudo Neighborhoods for Contrastive Learning based Unsupervised Representation Learning Problems (08/14/2023)
- Nearest-Neighbor Inter-Intra Contrastive Learning from Unlabeled Videos (03/13/2023)
- Soft Neighbors are Positive Supporters in Contrastive Visual Representation Learning (03/30/2023)
- Weakly Supervised Contrastive Learning (10/10/2021)
- Adaptive Similarity Bootstrapping for Self-Distillation (03/23/2023)
- Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget (04/20/2023)
- Unsupervised visualization of image datasets using contrastive learning (10/18/2022)
