Contrastive Learning with Stronger Augmentations

04/15/2021
by Xiao Wang, et al.

Representation learning has advanced significantly with the development of contrastive learning methods. Most of those methods benefit from various data augmentations that are carefully designed to preserve image identities, so that images transformed from the same instance can still be retrieved. However, those carefully designed transformations limit further exploration of the novel patterns exposed by other transformations. Meanwhile, as found in our experiments, strong augmentations distort image structures, making retrieval difficult. Thus, we propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches. Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries from a pool of instances. Experiments on the ImageNet dataset and downstream datasets show that the information from the strongly augmented images can significantly boost performance. For example, CLSA achieves a top-1 accuracy of 76.2% with a standard ResNet-50 architecture and a single-layer classifier fine-tuned, which is almost the same level as the 76.5% of supervised results. The code and pre-trained models are available at https://github.com/maple-research-lab/CLSA.
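To make the core idea concrete, the sketch below pairs a standard InfoNCE term on the weakly augmented view with a distributional divergence term in which the weak view's similarity distribution over the representation bank serves as a soft target for the strong view. This is a minimal PyTorch sketch under stated assumptions, not the repository's implementation: the function name clsa_losses, the tensor shapes, the detach on the target distribution, and the temperature default are illustrative choices.

import torch
import torch.nn.functional as F

def clsa_losses(q_weak, q_strong, k_pos, queue, T=0.2):
    """Illustrative sketch of CLSA's two loss terms (names are assumptions).

    q_weak:   (N, D) L2-normalized embeddings of weakly augmented queries
    q_strong: (N, D) L2-normalized embeddings of strongly augmented queries
    k_pos:    (N, D) positive keys from the momentum encoder
    queue:    (D, K) memory bank of negative keys
    """
    # Standard InfoNCE term on the weakly augmented view (MoCo-style).
    l_pos = torch.einsum("nd,nd->n", q_weak, k_pos).unsqueeze(-1)   # (N, 1)
    l_neg = torch.einsum("nd,dk->nk", q_weak, queue)                # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    loss_nce = F.cross_entropy(logits, labels)

    # Distributional divergence term: the weak view's similarity distribution
    # over the bank supervises the strong view's distribution. The target is
    # treated as fixed (detached) here; that detail is an assumption.
    p_weak = F.softmax(logits, dim=1).detach()
    s_pos = torch.einsum("nd,nd->n", q_strong, k_pos).unsqueeze(-1)
    s_neg = torch.einsum("nd,dk->nk", q_strong, queue)
    log_p_strong = F.log_softmax(torch.cat([s_pos, s_neg], dim=1) / T, dim=1)
    loss_ddm = -(p_weak * log_p_strong).sum(dim=1).mean()

    return loss_nce, loss_ddm

In training, the two terms would be summed into a single objective; the sketch shows a single stronger crop, whereas the paper also considers multiple stronger augmentations.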


