AETv2: AutoEncoding Transformations for Self-Supervised Representation Learning by Minimizing Geodesic Distances in Lie Groups

11/16/2019
by Feng Lin, et al.

Self-supervised learning by predicting transformations has demonstrated outstanding performance in both unsupervised and (semi-)supervised tasks. Among the state-of-the-art methods is AutoEncoding Transformations (AET), which decodes transformations from the learned representations of original and transformed images. Both deterministic and probabilistic AETs rely on the Euclidean distance to measure the deviation of estimated transformations from their groundtruth counterparts. However, this assumption is questionable, as a group of transformations often resides on a curved manifold rather than in a flat Euclidean space. For this reason, we should use the geodesic to characterize how an image transforms along the manifold of a transformation group, and adopt its length to measure the deviation between transformations. In particular, we propose to autoencode a Lie group of homography transformations PG(2) to learn image representations. To this end, we approximate the intractable Riemannian logarithm by projecting PG(2) onto a subgroup of rotation transformations SO(3), which admits a closed-form expression for geodesic distances. Experiments demonstrate that the proposed AETv2 model outperforms the previous version as well as the other state-of-the-art self-supervised models in multiple tasks.
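The closed-form geodesic distance on SO(3) mentioned above can be illustrated concretely. The sketch below (an illustration under standard Lie-group conventions, not the paper's implementation) computes the geodesic distance between two rotation matrices as the norm of the Riemannian logarithm of their relative rotation, which for SO(3) reduces to the rotation angle recovered from the trace:

```python
import numpy as np

def geodesic_distance_so3(R1, R2):
    """Geodesic distance between two rotations in SO(3).

    The distance is the norm of the Riemannian logarithm of the
    relative rotation R = R1^T R2, which equals its rotation angle:
    theta = arccos((trace(R) - 1) / 2).
    """
    R = R1.T @ R2
    # Clip guards against floating-point values slightly outside [-1, 1].
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

def rot_z(theta):
    """Rotation by angle theta about the z-axis (helper for the example)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```

For example, `geodesic_distance_so3(rot_z(0.3), rot_z(1.0))` returns 0.7, the angle separating the two rotations along the manifold, whereas a Euclidean (Frobenius) distance between the matrices would not measure this arc length directly.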

Related Research

06/23/2023: Manifold Contrastive Learning with Variational Lie Group Operators
Self-supervised learning of deep neural networks has become a prevalent ...

10/24/2022: Robust Self-Supervised Learning with Lie Groups
Deep learning has led to remarkable advances in computer vision. Even so...

12/05/2020: Joint Estimation of Image Representations and their Lie Invariants
Images encode both the state of the world and its content. The former is...

01/07/2010: An Unsupervised Algorithm For Learning Lie Group Transformations
We present several theoretical contributions which allow Lie groups to b...

02/17/2022: Survey on Self-supervised Representation Learning Using Image Transformations
Deep neural networks need huge amount of training data, while in real wo...

03/21/2022: Disentangling Patterns and Transformations from One Sequence of Images with Shape-invariant Lie Group Transformer
An effective way to model the complex real world is to view the world as...

06/22/2021: Learning Identity-Preserving Transformations on Data Manifolds
Many machine learning techniques incorporate identity-preserving transfo...
