Toroidal AutoEncoder

03/28/2019
by Maciej Mikulski, et al.

Enforcing distributions of latent variables in neural networks is an active area of research. It is vital in all kinds of generative models, where we want to be able to interpolate between points in the latent space or sample from it. Modern generative AutoEncoders (AE) such as WAE, SWAE, and CWAE add a regularizer to the standard (deterministic) AE that enforces a Gaussian distribution in the latent space. Enforcing different distributions, especially topologically nontrivial ones, might open up interesting new possibilities, but this direction seems largely unexplored so far. This article proposes a new approach to enforcing a uniform distribution on a d-dimensional torus. We introduce a circular spring loss, which forces the minibatch points to be equally spaced and to satisfy cyclic boundary conditions. As an example application, we propose multiple-path morphing. With a uniform distribution on the latent space of angles, the minimal-distance geodesic between two points becomes a straight line; the torus topology, however, lets us choose such lines in alternative ways, passing through different edges of [-π,π]^d. Further applications to explore include learning real-life, topologically nontrivial feature spaces such as rotations: training on relative angles to automatically recognize the 2D rotation of an object in an image, or even 3D rotations by additionally using spherical features, in which case morphing should be close to rotating the object.
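To make the two ideas concrete, the NumPy sketch below illustrates one way a circular spring regularizer and multiple-path morphing on the torus could look. This is not the paper's implementation: the exact loss formulation is not given in the abstract, and the names circular_spring_loss and toroidal_interpolate, as well as the per-dimension gap penalty, are illustrative assumptions.

```python
# Minimal sketch of a "circular spring" style regularizer on a d-dimensional
# toroidal latent space and multi-path interpolation through different edges
# of [-pi, pi]^d. Names and the exact penalty are assumptions for illustration.

import numpy as np


def circular_spring_loss(z):
    """Encourage minibatch angles to be equally spaced on each circle.

    z : array of shape (batch, d), latent angles in [-pi, pi].
    For each latent dimension, sort the batch angles and penalize the
    deviation of consecutive gaps (including the wrap-around gap, i.e. the
    cyclic boundary condition) from the ideal spacing 2*pi / batch.
    """
    batch, d = z.shape
    target_gap = 2.0 * np.pi / batch
    loss = 0.0
    for k in range(d):
        angles = np.sort(z[:, k])
        gaps = np.diff(angles)
        wrap_gap = 2.0 * np.pi - (angles[-1] - angles[0])  # cyclic closure
        all_gaps = np.concatenate([gaps, [wrap_gap]])
        loss += np.mean((all_gaps - target_gap) ** 2)
    return loss / d


def toroidal_interpolate(z0, z1, winding, steps=10):
    """Interpolate between two latent points along alternative torus paths.

    `winding` holds one integer per dimension and selects the homotopy class
    of the path: 0 follows the direct segment in the angle chart, while other
    integers add full turns, i.e. the path passes through the +/-pi edge of
    that dimension. Different windings give multiple-path morphing.
    """
    delta = (z1 - z0) + 2.0 * np.pi * np.asarray(winding)
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    path = z0 + ts * delta
    return (path + np.pi) % (2.0 * np.pi) - np.pi  # wrap back into [-pi, pi)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.uniform(-np.pi, np.pi, size=(64, 2))
    print("spring loss:", circular_spring_loss(z))
    # Morph through the edge of the first angle instead of across the chart.
    path = toroidal_interpolate(np.array([-3.0, 0.5]), np.array([3.0, -0.5]),
                                winding=[-1, 0])
    print(path[:3])
```

In practice such a regularizer would be added to the reconstruction loss of a deterministic AE whose encoder outputs angles; the decoded points along paths with different winding numbers would then realize the alternative morphings described above.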
