Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations

06/10/2021 ∙ by Wouter Van Gansbeke, et al.

Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection. However, current methods are still primarily applied to curated datasets like ImageNet. In this paper, we first study how dataset biases affect existing methods. Our results show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets. Second, given the generality of the approach, we try to realize further gains with minor modifications. We show that learning additional invariances (through multi-scale cropping, stronger augmentations, and nearest neighbors) improves the representations. Finally, we observe that MoCo learns spatially structured representations when trained with a multi-crop strategy. These representations can be used for semantic segment retrieval and video instance segmentation without finetuning, and the results are on par with specialized models. We hope this work will serve as a useful study for other researchers. The code and models will be made publicly available.
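The abstract refers to a MoCo-style contrastive objective combined with a multi-crop strategy. As a rough illustration of how these pieces fit together, below is a minimal PyTorch sketch of an InfoNCE loss applied over multiple crops; the toy encoders, queue size, number of crops, and temperature are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k, queue, temperature=0.2):
    """Contrast each query q against its positive key k and a queue of negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)  # (N, 1) positive logits
    l_neg = torch.einsum("nc,kc->nk", q, queue)           # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)     # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy setup: a query encoder, a momentum (key) encoder, and a feature queue.
dim, queue_len, batch = 128, 4096, 32
encoder_q = torch.nn.Linear(3 * 96 * 96, dim)  # stand-in for a ResNet backbone
encoder_k = torch.nn.Linear(3 * 96 * 96, dim)
encoder_k.load_state_dict(encoder_q.state_dict())  # key encoder starts as a copy
queue = F.normalize(torch.randn(queue_len, dim), dim=1)

# Multi-crop: one global view feeds the key encoder, several smaller views feed
# the query encoder. Random tensors stand in for augmented image crops here.
global_view = torch.randn(batch, 3 * 96 * 96)
small_views = [torch.randn(batch, 3 * 96 * 96) for _ in range(4)]

with torch.no_grad():
    k = encoder_k(global_view)  # keys are detached; MoCo updates this encoder by momentum

loss = sum(info_nce(encoder_q(v), k, queue) for v in small_views) / len(small_views)
loss.backward()
```

In a real pipeline the small views would be lower-resolution random-resized crops with stronger augmentations, the key encoder would be updated as an exponential moving average of the query encoder, and the queue would be refreshed with the newest keys after each step.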






Code Repositories


SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]



Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [2021]
