Manifold Contrastive Learning with Variational Lie Group Operators

06/23/2023
by Kion Fallah et al.

Self-supervised learning of deep neural networks has become a prevalent paradigm for learning representations that transfer to a variety of downstream tasks. Similar to proposed models of the ventral stream of biological vision, these networks are observed to separate category manifolds in the representations of the penultimate layer. Although this observation matches the manifold hypothesis of representation learning, current self-supervised approaches are limited in their ability to explicitly model this manifold. Indeed, current approaches often apply augmentations only from a pre-specified set to form "positive pairs" during learning. In this work, we propose a contrastive learning approach that directly models the latent manifold using Lie group operators parameterized by coefficients with a sparsity-promoting prior. A variational distribution over these coefficients provides a generative model of the manifold, whose samples provide feature augmentations applicable both during contrastive training and in downstream tasks. Additionally, the learned coefficient distributions quantify which identity-preserving transformations are most likely at each point on the manifold. We demonstrate benefits on self-supervised benchmarks for image datasets, as well as on a downstream semi-supervised task. In the former case, we show that the proposed method effectively applies manifold feature augmentations and improves learning both with and without a projection head. In the latter case, we show that feature augmentations sampled from the learned Lie group operators improve classification performance when few labels are available.
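To make the mechanism concrete, below is a minimal PyTorch sketch of the kind of manifold feature augmentation the abstract describes: sparse coefficients c are sampled from a variational distribution and combined with learned Lie algebra generators A_m, and the matrix exponential exp(sum_m c_m A_m) transports a feature vector along the latent manifold. The class name (LieGroupAugmenter), the Laplace reparameterization standing in for the sparsity-promoting prior, and the globally shared variational parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): augment a feature vector z with the
# group action exp(sum_m c_m * A_m) z, where the generators A_m are learned and
# the coefficients c_m come from a sparsity-promoting variational distribution.

import torch


class LieGroupAugmenter(torch.nn.Module):
    def __init__(self, feature_dim: int, num_operators: int):
        super().__init__()
        # Learned Lie algebra generators A_m: one d x d matrix per operator.
        self.generators = torch.nn.Parameter(
            0.01 * torch.randn(num_operators, feature_dim, feature_dim)
        )
        # Variational parameters (location, log-scale) of a factorized Laplace
        # distribution over the coefficients; in practice these could instead
        # be predicted per sample by an encoder head.
        self.loc = torch.nn.Parameter(torch.zeros(num_operators))
        self.log_scale = torch.nn.Parameter(torch.full((num_operators,), -2.0))

    def sample_coefficients(self, batch_size: int) -> torch.Tensor:
        # Reparameterized Laplace sample: peaked at zero and heavy-tailed, so
        # most operators are (nearly) inactive for any given sample.
        q = torch.distributions.Laplace(self.loc, self.log_scale.exp())
        return q.rsample((batch_size,))  # shape: (batch, num_operators)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, feature_dim). Build exp(sum_m c_m A_m) per sample and
        # apply it to z, moving along the manifold while keeping identity.
        c = self.sample_coefficients(z.shape[0])
        A = torch.einsum("bm,mij->bij", c, self.generators)
        return torch.einsum("bij,bj->bi", torch.matrix_exp(A), z)


# Usage: augment penultimate-layer features to create extra positive pairs.
aug = LieGroupAugmenter(feature_dim=128, num_operators=16)
z = torch.randn(32, 128)
z_aug = aug(z)  # same identity, transformed along the learned manifold
```

In a contrastive objective, z and z_aug would be treated as an additional positive pair; at inference time, the same sampling procedure could supply feature augmentations for semi-supervised training, in the spirit of the downstream task described above.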

Related research

Self-Supervised Learning for Group Equivariant Neural Networks (03/08/2023)
This paper proposes a method to construct pretext tasks for self-supervi...

AETv2: AutoEncoding Transformations for Self-Supervised Representation Learning by Minimizing Geodesic Distances in Lie Groups (11/16/2019)
Self-supervised learning by predicting transformations has demonstrated ...

Learning Identity-Preserving Transformations on Data Manifolds (06/22/2021)
Many machine learning techniques incorporate identity-preserving transfo...

Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations (03/06/2023)
Self-supervised Learning (SSL) provides a strategy for constructing usef...

Joint Estimation of Image Representations and their Lie Invariants (12/05/2020)
Images encode both the state of the world and its content. The former is...

Hyperspherically Regularized Networks for BYOL Improves Feature Uniformity and Separability (04/29/2021)
Bootstrap Your Own Latent (BYOL) introduced an approach to self-supervis...

Robust Self-Supervised Learning with Lie Groups (10/24/2022)
Deep learning has led to remarkable advances in computer vision. Even so...
