Embedding-reparameterization procedure for manifold-valued latent variables in generative models

12/06/2018
by   Eugene Golikov, et al.

The conventional prior for a Variational Auto-Encoder (VAE) is a Gaussian distribution. Recent works have demonstrated that the choice of prior distribution affects the learning capacity of VAE models. We propose a general technique (the embedding-reparameterization procedure, or ER) for introducing arbitrary manifold-valued latent variables into a VAE model. We compare our technique with a conventional VAE on a toy benchmark problem. This is work in progress.
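The abstract does not spell out the ER procedure itself, but one common way to obtain manifold-valued latent samples while keeping the usual Gaussian reparameterization trick is to reparameterize in an ambient Euclidean embedding space and then map the sample onto the manifold. The sketch below is an illustrative assumption, not the paper's actual method: it takes the unit circle S^1 embedded in R^2 as the manifold, with radial projection as the manifold mapping.

```python
import numpy as np

def reparameterize_on_circle(mu, log_var, rng):
    """Gaussian reparameterization in the ambient space R^2,
    followed by radial projection onto the unit circle S^1.

    mu, log_var: arrays of shape (batch, 2). In a real VAE these
    would be encoder outputs, and the whole map stays differentiable
    with respect to mu and log_var.
    """
    eps = rng.standard_normal(mu.shape)
    z_ambient = mu + np.exp(0.5 * log_var) * eps
    # Project the ambient sample onto the manifold (unit norm).
    return z_ambient / np.linalg.norm(z_ambient, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
mu = np.array([[2.0, 0.0], [0.0, 3.0]])
log_var = np.zeros_like(mu)
z = reparameterize_on_circle(mu, log_var, rng)
# Every sample lies on S^1, i.e. has unit norm (up to floating point error).
print(np.linalg.norm(z, axis=-1))
```

The same pattern generalizes to other embedded manifolds by swapping the projection for the appropriate map from the ambient space onto the manifold (e.g. normalization for spheres of any dimension).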


