What Should Not Be Contrastive in Contrastive Learning

08/13/2020
by Tete Xiao, et al.

Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations. However, these methods implicitly assume a particular set of representational invariances (e.g., invariance to color), and can perform poorly when a downstream task violates this assumption (e.g., distinguishing red vs. yellow cars). We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances. Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation. We use a multi-head network with a shared backbone which captures information across each augmentation and alone outperforms all baselines on downstream tasks. We further find that the concatenation of the invariant and varying spaces performs best across all tasks we investigate, including coarse-grained, fine-grained, and few-shot downstream classification tasks, and various data corruptions.
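The abstract's architecture — a shared backbone feeding one all-invariant head plus one head per augmentation, with the final representation formed by concatenating all spaces — can be sketched minimally in NumPy. This is an illustrative sketch, not the paper's implementation: the augmentation names, dimensions, and the `embed`/`representation` helpers are assumptions, and real training would use a deep encoder and a contrastive loss per space.

```python
import numpy as np

# Hedged sketch of the multi-head design described above: a shared backbone
# feeds an "all"-invariant head plus one head per augmentation type.
# Weights are random placeholders; names and sizes are illustrative.
rng = np.random.default_rng(0)

AUGMENTATIONS = ["color", "rotation", "crop"]  # example augmentation types
IN_DIM, HID_DIM, EMB_DIM = 32, 64, 16

# Shared backbone and per-space projection heads.
W_backbone = rng.standard_normal((IN_DIM, HID_DIM)) * 0.1
heads = {name: rng.standard_normal((HID_DIM, EMB_DIM)) * 0.1
         for name in ["all"] + AUGMENTATIONS}

def embed(x):
    """Map a batch of inputs to one L2-normalized embedding per space."""
    h = np.maximum(x @ W_backbone, 0.0)  # shared features (ReLU)
    out = {}
    for name, W in heads.items():
        z = h @ W
        out[name] = z / np.linalg.norm(z, axis=1, keepdims=True)
    return out

def representation(x):
    """Concatenate the invariant ("all") and varying (per-augmentation) spaces."""
    spaces = embed(x)
    return np.concatenate([spaces[k] for k in ["all"] + AUGMENTATIONS], axis=1)

x = rng.standard_normal((4, IN_DIM))
print(representation(x).shape)  # -> (4, 64): 4 spaces of 16 dims each
```

In a full training setup, each per-augmentation space would receive its own contrastive objective so that it stays invariant to every augmentation except the one it is assigned, which is what lets the concatenated representation retain augmentation-varying information.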

Related research

- 02/05/2023 · CIPER: Combining Invariant and Equivariant Representations Using Contrastive and Predictive Learning
- 06/01/2022 · Rethinking the Augmentation Module in Contrastive Learning: Learning Hierarchical Augmentation Invariance with Expanded Views
- 04/07/2021 · Contrastive Learning of Global and Local Audio-Visual Representations
- 06/09/2021 · CLCC: Contrastive Learning for Color Constancy
- 06/08/2023 · Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Alignment
- 01/29/2023 · The Influences of Color and Shape Features in Visual Contrastive Learning
- 06/16/2022 · Let Invariant Rationale Discovery Inspire Graph Contrastive Learning