Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations

10/02/2020
by Sanjukta Krishnagopal, et al.

While variational autoencoders have been successful generative models for a variety of tasks, conventional Gaussian or Gaussian-mixture priors are limited in their ability to capture topological or geometric properties of the data in the latent representation. In this work, we introduce an Encoded Prior Sliced Wasserstein AutoEncoder (EPSWAE), wherein an additional prior-encoder network learns an unconstrained prior to match the encoded data manifold. The autoencoder and prior-encoder networks are iteratively trained using the Sliced Wasserstein Distance (SWD), which efficiently measures the distance between two arbitrary sampleable distributions without being constrained to a specific form, as in the KL divergence, and without requiring expensive adversarial training. Additionally, we enhance the conventional SWD by introducing nonlinear shearing, i.e., averaging over random nonlinear transformations, to better capture differences between two distributions. The prior is further encouraged to encode the data manifold through a structural consistency term that encourages isometry between feature space and latent space. Lastly, interpolation along geodesics on the latent-space representation of the data manifold generates samples that lie on the manifold and is hence advantageous compared with standard Euclidean interpolation. To this end, we introduce a graph-based algorithm for identifying network-geodesics in latent space from samples of the prior that maximize the density of samples along the path while minimizing total energy. We apply our framework to 3D-spiral, MNIST, and CelebA datasets, and show that its latent representations and interpolations are comparable to the state of the art on equivalent architectures.
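The sliced Wasserstein distance described above admits a simple Monte Carlo estimator: project both sample sets onto random one-dimensional directions, sort the projections, and average the resulting 1D transport costs. The sketch below illustrates this in NumPy, including an optional "nonlinear shearing" step; note that the function name sliced_wasserstein, the tanh random-feature map standing in for the random nonlinear transformations, and all parameter defaults are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def sliced_wasserstein(x, y, n_projections=50, nonlinear_shear=True, rng=None):
    # Monte Carlo estimate of the sliced Wasserstein-2 distance between two
    # equal-sized point clouds x, y of shape (n_samples, dim).
    # With nonlinear_shear=True, each projection is preceded by a random
    # tanh feature map, an assumed stand-in for the paper's nonlinear
    # shearing (averaging over random nonlinear transformations).
    rng = np.random.default_rng() if rng is None else rng
    dim = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        if nonlinear_shear:
            W = rng.standard_normal((dim, dim))
            b = rng.standard_normal(dim)
            xs, ys = np.tanh(x @ W + b), np.tanh(y @ W + b)
        else:
            xs, ys = x, y
        # Random unit direction; in 1D the Wasserstein-2 distance reduces to
        # the mean squared difference between the sorted projections.
        theta = rng.standard_normal(dim)
        theta /= np.linalg.norm(theta)
        px, py = np.sort(xs @ theta), np.sort(ys @ theta)
        total += np.mean((px - py) ** 2)
    return total / n_projections

In an EPSWAE-style training loop, such a distance would be minimized between samples drawn from the encoder's aggregate posterior and samples from the learned prior, alongside the reconstruction and structural consistency terms.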


