Crafting Better Contrastive Views for Siamese Representation Learning

02/07/2022
by   Xiangyu Peng, et al.

Recent self-supervised contrastive learning methods greatly benefit from the Siamese structure, which aims to minimize distances between positive pairs. For high-performance Siamese representation learning, one key is to design good contrastive pairs. Most previous works simply apply random sampling to make different crops of the same image, which overlooks the semantic information of an image and thus may degrade the quality of views. In this work, we propose ContrastiveCrop, which effectively generates better crops for Siamese representation learning. First, a semantic-aware object localization strategy is proposed within the training process in a fully unsupervised manner. This guides us to generate contrastive views that avoid most false positives (i.e., object vs. background). Moreover, we empirically find that views with similar appearances are trivial for Siamese model training. Thus, a center-suppressed sampling is further designed to enlarge the variance of crops. Remarkably, our method takes careful consideration of positive pairs for contrastive learning with negligible extra training overhead. As a plug-and-play and framework-agnostic module, ContrastiveCrop consistently improves SimCLR, MoCo, BYOL, and SimSiam by 0.4% ~ 2.0% classification accuracy on CIFAR-10, CIFAR-100, Tiny ImageNet, and STL-10. Superior results are also achieved on downstream detection and segmentation tasks when pre-trained on ImageNet-1K.

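The center-suppressed sampling described in the abstract can be pictured with a small sketch: crop centers are drawn from a U-shaped Beta distribution inside a localized object box, so the two views of a pair tend to come from different parts of the object instead of overlapping heavily at its center. Everything below is an illustrative assumption rather than the authors' released code; the bounding box is taken as given from the unsupervised localization step, and the function name, Beta parameterization, and default values are hypothetical.

```python
import torch

def center_suppressed_crop(box, crop_h, crop_w, alpha=0.6):
    # box: (x1, y1, x2, y2) region proposed by an unsupervised localization
    # step (assumed available); crop_h, crop_w: crop size in pixels.
    # With alpha < 1, Beta(alpha, alpha) is U-shaped, so sampled centers
    # concentrate near the box borders rather than its middle; this
    # suppresses near-identical central crops and enlarges view variance.
    x1, y1, x2, y2 = box
    beta = torch.distributions.Beta(alpha, alpha)
    u, v = beta.sample().item(), beta.sample().item()
    cx = x1 + u * (x2 - x1)  # crop center, pushed toward box edges
    cy = y1 + v * (y2 - y1)
    top = int(round(cy - crop_h / 2))
    left = int(round(cx - crop_w / 2))
    # A real pipeline would clamp (top, left) to the image bounds and feed
    # them to a crop-and-resize transform for each of the two views.
    return top, left


# Example: two views sampled inside the same (hypothetical) object box.
box = (40.0, 30.0, 200.0, 180.0)
view_a = center_suppressed_crop(box, crop_h=96, crop_w=96)
view_b = center_suppressed_crop(box, crop_h=96, crop_w=96)
print(view_a, view_b)
```
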
Related research

05/15/2023 · Learning Better Contrastive View from Radiologist's Gaze
Recent self-supervised contrastive learning methods greatly benefit from...

07/22/2023 · Hallucination Improves the Performance of Unsupervised Visual Representation Learning
Contrastive learning models based on Siamese structure have demonstrated...

04/30/2018 · Sampling strategies in Siamese Networks for unsupervised speech representation learning
Recent studies have investigated siamese network architectures for learn...

10/14/2020 · Viewmaker Networks: Learning Views for Unsupervised Representation Learning
Many recent methods for unsupervised representation learning involve tra...

09/29/2022 · Understanding Collapse in Non-Contrastive Siamese Representation Learning
Contrastive methods have led a recent surge in the performance of self-s...

06/07/2023 · ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function
Self-supervised contrastive learning (CL) has achieved state-of-the-art ...

07/17/2022 · Fast-MoCo: Boost Momentum-based Contrastive Learning with Combinatorial Patches
Contrastive-based self-supervised learning methods achieved great succes...
