Learning Identity-Preserving Transformations on Data Manifolds

06/22/2021
by   Marissa Connor, et al.

Many machine learning techniques incorporate identity-preserving transformations into their models to generalize their performance to previously unseen data. These transformations are typically selected from a set of functions that are known to maintain the identity of an input when applied (e.g., rotation, translation, flipping, and scaling). However, there are many natural variations that cannot be labeled for supervision or defined through examination of the data. As suggested by the manifold hypothesis, many of these natural variations live on or near a low-dimensional, nonlinear manifold. Several techniques represent manifold variations through a set of learned Lie group operators that define directions of motion on the manifold. However, these approaches are limited because they require transformation labels when training their models, and they lack a method for determining which regions of the manifold are appropriate for applying each specific operator. We address these limitations by introducing a learning strategy that does not require transformation labels and by developing a method that learns the local regions where each operator is likely to be used while preserving the identity of inputs. Experiments on MNIST and Fashion MNIST highlight our model's ability to learn identity-preserving transformations on multi-class datasets. Additionally, we train on CelebA to showcase our model's ability to learn semantically meaningful transformations on complex datasets in an unsupervised manner.
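To make the idea of Lie group operators concrete, here is a minimal sketch of how a learned operator can transport a point along a manifold direction. This is an illustration, not the paper's implementation: the generator `A`, the coefficient `t`, and the `transport` helper are all hypothetical names, and the skew-symmetric choice of `A` is an assumption made so that the resulting matrix exponential is orthogonal (and therefore norm-preserving).

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)
dim = 4

# Hypothetical learned Lie group generator. Choosing A skew-symmetric
# (A = -A^T) makes expm(t * A) an orthogonal matrix, so applying it
# preserves the norm of the input -- a simple stand-in for an
# identity-preserving transformation.
A = rng.standard_normal((dim, dim))
A = A - A.T

def transport(x, A, t):
    """Move x along the manifold direction defined by generator A by amount t."""
    return expm(t * A) @ x

x = rng.standard_normal(dim)
x_t = transport(x, A, 0.5)

# The transformation changes x but preserves its norm.
print(np.allclose(np.linalg.norm(x_t), np.linalg.norm(x)))  # True
```

Varying the scalar coefficient `t` traces out a smooth one-parameter path on the manifold; a model with several learned generators would combine such paths to represent different natural variations.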


Related research

- An Unsupervised Algorithm For Learning Lie Group Transformations (01/07/2010)
  We present several theoretical contributions which allow Lie groups to b...
- Manifold Contrastive Learning with Variational Lie Group Operators (06/23/2023)
  Self-supervised learning of deep neural networks has become a prevalent ...
- Learning Smooth Pattern Transformation Manifolds (12/23/2011)
  Manifold models provide low-dimensional representations that are useful ...
- Learning Internal Representations of 3D Transformations from 2D Projected Inputs (03/31/2023)
  When interacting in a three dimensional world, humans must estimate 3D s...
- AETv2: AutoEncoding Transformations for Self-Supervised Representation Learning by Minimizing Geodesic Distances in Lie Groups (11/16/2019)
  Self-supervised learning by predicting transformations has demonstrated ...
- Representing Input Transformations by Low-Dimensional Parameter Subspaces (05/22/2023)
  Deep models lack robustness to simple input transformations such as rota...
- GAN Steerability without optimization (12/09/2020)
  Recent research has shown remarkable success in revealing "steering" dir...
