Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning

06/24/2023
by Sharut Gupta et al.

Self-supervised learning converts raw perceptual data such as images into a compact space where simple Euclidean distances measure meaningful variations in the data. In this paper, we extend this formulation by adding further geometric structure to the embedding space: we enforce that transformations of the input space correspond to simple (i.e., linear) transformations of the embedding space. Specifically, in the contrastive learning setting, we introduce an equivariance objective and theoretically prove that its minima force augmentations of the input space to correspond to rotations of the spherical embedding space. We show that merely combining our equivariance loss with a non-collapse term yields non-trivial representations, without requiring invariance to data augmentations. Optimal performance is achieved by also encouraging approximate invariance, where input augmentations correspond to small rotations. Our method, CARE (Contrastive Augmentation-induced Rotational Equivariance), improves performance on downstream tasks and ensures that the embedding space is sensitive to important variations in the data (e.g., color) that standard contrastive methods fail to capture. Code is available at https://github.com/Sharut/CARE.
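The abstract names two ingredients: an equivariance objective whose minima force input augmentations to act as rotations of the spherical embedding space, and a non-collapse term. Below is a minimal sketch of how such an objective could look, not the authors' released implementation (see the linked repository for that). The encoder `f`, the callable `augment`, the loss weight `lam`, and the uniformity-style non-collapse term are all assumptions; the equivariance term here simply exploits the fact that rotations preserve inner products, so applying one shared augmentation to two different images should leave their embedding similarity unchanged.

```python
# Minimal sketch of an augmentation-equivariance objective on the unit
# sphere plus a non-collapse (uniformity) term. All names are hypothetical;
# the official code is at https://github.com/Sharut/CARE.
import torch
import torch.nn.functional as F

def equivariance_loss(z, z_p, za, za_p):
    """Penalize changes in pairwise similarity under a shared augmentation.

    Rotations preserve inner products, so if one shared augmentation acts
    as one shared rotation of the sphere, <z, z'> must equal <za, za'>.
    """
    return ((z * z_p).sum(-1) - (za * za_p).sum(-1)).pow(2).mean()

def uniformity_loss(z, t=2.0):
    """Non-collapse term: spread embeddings over the hypersphere
    (a Wang & Isola, 2020 style uniformity loss; an assumption here)."""
    return torch.pdist(z).pow(2).mul(-t).exp().mean().log()

def care_style_loss(f, x, x_prime, augment, lam=1.0):
    # f: encoder; x, x_prime: two independent image batches; augment: one
    # augmentation assumed to apply identical parameters to both batches
    # (e.g., parameters sampled once before this call).
    z = F.normalize(f(x), dim=-1)
    z_p = F.normalize(f(x_prime), dim=-1)
    za = F.normalize(f(augment(x)), dim=-1)
    za_p = F.normalize(f(augment(x_prime)), dim=-1)
    non_collapse = uniformity_loss(torch.cat([z, za]))
    return non_collapse + lam * equivariance_loss(z, z_p, za, za_p)
```

Per the abstract, optimal performance additionally encourages approximate invariance (augmentations as small rotations), e.g. via an extra InfoNCE-style term between each image and its augmented view; that term, like the weights above, is an assumption rather than a detail taken from the paper.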

