# Embedding-reparameterization procedure for manifold-valued latent variables in generative models

The conventional prior for a Variational Auto-Encoder (VAE) is a Gaussian distribution. Recent works have demonstrated that the choice of prior distribution affects the learning capacity of VAE models. We propose a general technique (embedding-reparameterization procedure, or ER) for introducing arbitrary manifold-valued latent variables into a VAE model. We compare our technique with a conventional VAE on a toy benchmark problem. This is work in progress.


## 1 Introduction

Variational Auto-Encoders (VAE) [vae] and Generative Adversarial Networks (GAN) [gan-orig] perform well at modelling real-world data such as images. The key idea of both frameworks is to map a simple, lower-dimensional distribution (typically Gaussian) to a high-dimensional observation space by a complex non-linear function (typically a neural network). Most research effort has concentrated on enhancing training procedures and neural architectures, giving rise to a variety of elegant extensions of VAEs and GANs [gan-overview].

We consider the prior distribution that is mapped to the data distribution as one of the design choices made when building a generative model. Its importance is highlighted in a number of works [elbo-surgery; s-vae18; homeo-vae; sphere-nlp]. Although [s-vae18] provides an extensive overview of the use of unit-normalized latent variables (points lying on a hypersphere), this is clearly just one of the possible design choices for the prior distribution in a generative model.

Recent works [s-vae18; homeo-vae; sphere-nlp] argued that the manifold hypothesis for data [belkin] provides evidence in favor of using priors more complicated than Gaussian, for which the topology of the latent space matches that of the data. The above-mentioned works derived analytic formulas for the reparameterization of probability densities on manifolds (a hypersphere in [s-vae18] and a Lie group in [homeo-vae]).

A somewhat less rigorous argument in favor of manifold-valued latent variables is that the generative process for data can be represented as having two sources of variation (see Figure 1): one is uniform sampling from a group of transformations that we take to be a compact symmetry group (for example, the group of rotations), and the other is everything else. This favors choosing a topology of the latent space that matches the "real" generative process: take the uniform distribution on some compact symmetry group as the prior distribution for the latent variables.

Once a universal procedure for fast prototyping of VAEs with different manifold-valued variables is available, such a VAE can be used to estimate the likelihood integral (for example, using the IWAE estimate [iwae]) and thus to draw conclusions about latent symmetries present in the data. This was one of the key motivations for the current work.

All of the above brings into focus the case of continuously differentiable symmetry groups (Lie groups), which is a special case of manifold-valued latent variables.

## 2 Manifold-valued latent variables

Let us make the following preliminary assumption:

Data are generated as in Figure 1, with a Lie group embedded in the latent space, and the mapping from the latent space to the data space is continuous.

When using images as a test bed, this implies that images generated by "close" symmetry elements (say, two similar rotation angles) are also close in pixel space. It justifies using additional tricks such as a continuity loss [homeo-vae] for training a VAE with manifold-valued latent variables.

### 2.1 Construction of VAE

Recall the optimization problem for the VAE [vae]:

 L(ϕ,ψ)=Ex∼D[Ez∼qϕ(z|x)[logpψ(x|z)]−KL(qϕ(z|x)∥p(z))]→maxϕ,ψ,

where D denotes the data distribution, qϕ(z|x) is the posterior distribution on the latent space, p(z) is the corresponding prior, and pψ(x|z) is the likelihood of a data point x given z. In order to construct a VAE with manifold-valued latent variables, we need the following:

1. An encoder that produces the posterior distribution from a parametric family of distributions on a manifold.

2. An ability to sample from this posterior distribution: z∼qϕ(z|x).

3. An ability to compute KL-divergence between this posterior and a given prior.
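For the conventional Gaussian case these ingredients are standard; requirements 2 and 3 can be sketched as follows (a minimal NumPy illustration with hypothetical names, assuming a fully-factorized Gaussian posterior and a standard normal prior):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def reparameterize(mu, log_var, rng):
    # Differentiable sampling: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps
```

For a manifold-valued posterior neither of these comes for free, which is what the rest of this section addresses.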

Recent works [s-vae18; homeo-vae] proposed approaches to working with manifold-valued latent variables that are similar in spirit to ours: they derive a reparameterization of a probability density defined on a smooth manifold and use it in a VAE. The problem is that such a derivation appears to be complicated and needs to be done separately for every manifold of interest.

Our approach is the following. First of all, we introduce a hidden latent space Zhid, such that dim Zhid = dim M = n, where M is our manifold lying in a latent space Z of dimension N. Let p(zhid) be a prior distribution on Zhid.

Suppose then that we have an embedding f: Zhid → Z, so that f(Zhid) = M. Being an embedding requires f to be a diffeomorphism with its image; in particular, f should be a differentiable injective map. We also pose an additional constraint on f: it should map the prior on Zhid to the prior on the manifold M; in other words, if zhid∼p(zhid), then f(zhid)∼pM(z).

Using this embedding f, we can construct a VAE with manifold-valued latent variables as depicted in the right part of Figure 2. In this case the posterior distribution on Zhid together with the embedding f induces a posterior distribution on M. We then have to compute the KL-divergence between this induced posterior and the prior on the manifold. Despite the fact that in this case the probability mass is concentrated on the manifold, and hence the probability density on Z is degenerate, we can define the manifold probability densities qM(z|x) and pM(z) (see Appendix 5.1 for details). Moreover, the corresponding KL-divergence is equal to the KL-divergence between the distributions defined on Zhid (Appendix 5.3):

 KL(qM(z|x)∥pM(z))=KL(q(zhid|x)∥p(zhid))

Hence the final optimization problem for the model in the right part of Figure 2 becomes the following:

 L(ϕ,ψ)=Ex∼D[Ezhid∼qϕ(zhid|x)[logpψ(x|f(zhid))]−KL(qϕ(zhid|x)∥p(zhid))]→maxϕ,ψ,

where ϕ are the parameters of the VAE encoder, which encodes an object x into the hidden space Zhid, and ψ are the parameters of the VAE decoder, which maps the manifold M to the data manifold in feature space; D is our data distribution.

Thereby, working with probability distributions induced on the manifold of interest is easy: both terms of the VAE loss (reconstruction error and KL-divergence) are readily calculated in the original hidden space Zhid, which is further mapped onto the manifold.
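As a concrete illustration for the toy case considered later, here is a minimal sketch (NumPy; all names are ours, and the exact circle projection stands in for a trained embedding f) of how the reconstruction term passes through the embedding while the KL term stays in the hidden space:

```python
import numpy as np

def embed_to_circle(z_hid):
    # Illustrative embedding f: maps the hidden space (a segment) onto the unit circle S^1
    theta = 2.0 * np.pi * z_hid
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

def er_vae_loss(x, z_hid_sample, decoder, log_q, log_p):
    # Reconstruction passes through the embedding f; the KL term stays in the hidden space
    z = embed_to_circle(z_hid_sample)      # manifold-valued latent variable
    recon = -np.sum((x - decoder(z))**2)   # Gaussian log-likelihood up to a constant
    kl = log_q - log_p                     # single-sample estimate of the KL term
    return -(recon - kl)                   # negative ELBO to be minimized
```

Here `decoder`, `log_q`, and `log_p` are placeholders for the VAE decoder and the hidden-space log-densities of the posterior and prior.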

### 2.2 Learning manifold embedding

To apply the procedure described above, we have to construct an embedding f. In order to do this, we propose the following procedure:

1. Sample data from pM(z) (the prior distribution on M).

2. Train a Wasserstein Auto-Encoder (WAE) [wae] on these data, with M lying in the feature space and Zhid, equipped with the prior p(zhid), as the latent space: see the left part of Figure 2.

3. Use the decoder of this trained WAE as our embedding function f.

Our motivation is the following: since the dimension of the latent space Zhid and the dimension of the manifold M are the same, the reconstruction term in the WAE objective constrains its decoder to be an injective map. Since the decoder is represented by a neural network, it is also differentiable. The WAE objective also forces the decoder to map the prior distribution on the latent space (in our case, p(zhid)) to the distribution of the data in the feature space (in our case, pM(z)). Hence the WAE decoder is an ideal candidate for the embedding f.
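The key term of the WAE objective here is the divergence matching the aggregated posterior to the prior p(zhid); in the WAE-MMD variant used in our experiments it is the maximum mean discrepancy, which can be sketched as follows (NumPy; the RBF kernel and bandwidth are our illustrative choices):

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between two sample sets
    d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy MMD^2(x, y)
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())
```

Minimizing this term over encoder outputs pushes the aggregated posterior toward the prior, which is what makes the decoder map prior samples to the data distribution.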

At first glance the described model leaves open questions quite similar to those of a vanilla VAE: we have "shifted" the complex task of learning a mapping between spaces of different topology (latent space and data space) from the VAE decoder to a sub-module of the same VAE, pretrained as a WAE. Nevertheless, the procedure ensures better control over the mapping to the manifold, and one can develop corresponding metrics to monitor the quality of this mapping.

## 3 Introducing symmetries of latent manifold into encoder

Recall that in our scheme the encoder together with the embedding f induces a family of posterior distributions on M; let us call this family Q.

A natural requirement on Q is that it have the same symmetries as M. Suppose we have a symmetry group G of M acting on it, i.e.

 ∀z∈M,∀g∈Ggz∈M.

For example, if M is the (n−1)-dimensional sphere in ℝ^n, G is the group of rotations SO(n). We require G to also be a symmetry of Q:

 ∀q∈Q,∀g∈G∃q′∈Q:∀z∈Mq′(gz)=q(z).

This means that if a symmetry of M acts on samples from a distribution q∈Q, we should obtain samples from another distribution q′ of the same family Q. Note that we did not pose this requirement while training f; hence it will not generally be satisfied. Therefore we have to symmetrize Q explicitly.

In order to do this we introduce a group action encoder, see Figure 3. This group action encoder produces an element g of the symmetry group G of M, which further acts on a sample z∈M. This effectively enriches the posterior family Q with the group-transformed distributions.
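In the toy setting of Section 4, M is a circle and G = SO(2), so the action of the predicted group element reduces to a rotation; a minimal sketch (NumPy, our names):

```python
import numpy as np

def rotate(z, angle):
    # Apply g in SO(2) (the symmetry group of S^1) to a point z on the circle.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])  # rotation matrix for the predicted angle
    return z @ R.T
```

Here `angle` stands for the output of the group action encoder, applied to the manifold-valued sample before decoding.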

This procedure has a close connection with the homeomorphic VAE [homeo-vae]. Suppose our manifold M is a compact Lie group. Then it is homeomorphic to its own symmetry group: M ≅ G. In this case our group action encoder is equivalent to the group encoder of [homeo-vae].

## 4 Experiments and conclusions

We followed the same experimental setup as for the toy task in [s-vae18], but without noise (our code is available on GitHub: https://github.com/varenick/manifold_latent_vae). Sampling of a batch from the dataset consisted of two steps:

1. We generated uniformly distributed points on a 1-dimensional unit sphere embedded in ℝ².

2. We applied a fixed non-linear transformation implemented as a randomly initialized multilayer perceptron with one hidden layer of size 100 and a ReLU nonlinearity. The Xavier-uniform initialization scheme was applied to the hidden layer.
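The two sampling steps above can be sketched as follows (NumPy; the output dimension `D_OUT` and the output-layer initialization are our assumptions, since only the hidden layer is specified):

```python
import numpy as np

rng = np.random.default_rng(0)
D_OUT, D_HIDDEN = 10, 100  # D_OUT is an assumption; hidden size 100 is from the setup

# Fixed random MLP: Xavier-uniform hidden layer, as described above.
limit = np.sqrt(6.0 / (2 + D_HIDDEN))
W1 = rng.uniform(-limit, limit, size=(2, D_HIDDEN))
W2 = rng.standard_normal((D_HIDDEN, D_OUT))

def sample_batch(batch_size):
    # Step 1: uniform points on the unit circle S^1 embedded in R^2.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=batch_size)
    z = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    # Step 2: fixed non-linear transformation (one-hidden-layer ReLU MLP) into feature space.
    return np.maximum(z @ W1, 0.0) @ W2
```

The weights are drawn once and kept fixed, so every batch comes from the same deterministic transformation of the circle.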

All models are VAEs with a posterior distribution q(zhid|x) (Beta on a segment), a prior distribution p(zhid) (uniform on a segment) and a likelihood p(x|zhid) (Gaussian on the feature space). As for the reparameterization function f, it was either a WAE-MMD decoder or the exact mapping from the segment onto a 1-dimensional circle ("Projection") applied in the first layer of the decoder.

The dimensions of latent variables were either 1 or 2.

When the group action encoder is used, it produces an angle (an element of SO(2)), which is further used to rotate the sample z.

The results are presented in Table 1. All decoder structures that include the manifold mapping show better results than a vanilla VAE with a 1-dimensional Gaussian latent space.

#### Acknowledgments

This work was supported by National Technology Initiative and PAO Sberbank project ID 0000000007417F630002.

## 5 Appendix

### 5.1 Probability density functions with manifold support

Suppose we have a probability distribution on Z with density p(z) and a diffeomorphism f: Z → X, where dim X = dim Z = n. Then f induces a probability distribution on X with the following density:

 p(x)=p(f−1(x))|detJf(f−1(x))|−1=p(f−1(x))|detJf−1(x)|.
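As a sanity check of this change-of-variables formula, consider the 1-D map f(z) = exp(z) with a standard normal p(z); the induced density must coincide with the known log-normal density:

```python
import numpy as np

def p_z(z):
    # Standard normal density p(z)
    return np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi)

def p_x(x):
    # Induced density via p(x) = p(f^{-1}(x)) |det J_f(f^{-1}(x))|^{-1} for f(z) = exp(z)
    z = np.log(x)   # f^{-1}(x)
    jac = x         # J_f(f^{-1}(x)) = exp(z) = x
    return p_z(z) / np.abs(jac)

def lognormal_pdf(x):
    # Known log-normal density, for comparison
    return np.exp(-np.log(x)**2 / 2.0) / (x * np.sqrt(2.0 * np.pi))
```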

Suppose now that f: Zhid → Z with n = dim Zhid < dim Z = N, and f is a smooth embedding (which requires f to be a diffeomorphism between Zhid and its image M). It follows that f induces a degenerate probability distribution on Z, since all the probability mass in Z is concentrated on the manifold M. The corresponding probability measure is trivial:

 P(f(A))=P(A)

for any event A on Zhid. Although we cannot define a valid probability density on Z, we can define a manifold probability density pM on M as the ratio of the probability of a small region of M to its n-dimensional volume VolM; let us define this volume. Let Ω be an open subset of Zhid. Then its image f(Ω) under the embedding is an open subset of the manifold M (open in the topology of M). If Zhid is a Euclidean space, then the "volume" of Ω is given simply as:

 Vol(Ω)=∫Ωdz1…dzn.

Since M is embedded into Z, and Z is a Euclidean space, we can measure the n-dimensional "volume" of f(Ω). It is given as:

 VolM(f(Ω))=∫Ω√|detG(z)|dz1…dzn,

where G(z) is the metric tensor on M, induced by the scalar product on Z and the embedding f:

 Gij(z)=⟨df(z)dzi,df(z)dzj⟩.

Returning to our formula for the probability density on M, we now obtain:

 pM(x)=p(f−1(x))|detG(f−1(x))|−1/2.
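These formulas can be verified numerically for the circle embedding f(z) = (r cos z, r sin z) on [0, 2π): the induced metric gives √|det G(z)| = r, so VolM should equal the circumference 2πr (a NumPy sketch using finite differences for the Jacobian):

```python
import numpy as np

r = 3.0  # radius of the circle; an arbitrary choice for this check

def f(z):
    # Embedding of the segment [0, 2*pi) into R^2 as a circle of radius r
    return np.stack([r * np.cos(z), r * np.sin(z)], axis=-1)

def sqrt_det_G(z, eps=1e-6):
    # Metric tensor via central finite differences: G(z) = <df/dz, df/dz> (1x1 here)
    df = (f(z + eps) - f(z - eps)) / (2.0 * eps)
    return np.sqrt(np.sum(df**2, axis=-1))

# Riemann sum of sqrt(|det G|) over [0, 2*pi): should approximate 2*pi*r
zs = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
volume = np.mean(sqrt_det_G(zs)) * 2.0 * np.pi
```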

### 5.2 Calculation of KL divergence in the case of normalizing flow

 KL(qM(z|x)∥pM(z))=Ez∼qM(z|x)(logqM(z|x)−logpM(z))
 =Ezhid∼q(zhid|x)(logq(zhid|x)+log|detJf(zhid)|−1−logp(zhid)−log|detJf(zhid)|−1)
 =Ezhid∼q(zhid|x)(logq(zhid|x)−logp(zhid))
 =KL(q(zhid|x)∥p(zhid)),

where q(zhid|x) is the posterior distribution (i.e. fully-factorized Gaussian or Beta) on the latent variables of the WAE, which we use for the manifold embedding; p(zhid) is the corresponding prior (i.e. standard Gaussian or uniform); f is the decoder of the WAE, which we use to transform the latent space of the WAE into the manifold M; and Jf is the Jacobian of this transformation. As we see, the log-determinants of the Jacobians cancel out, and we are left with the KL-divergence on the latent space of the WAE.

### 5.3 Calculation of KL divergence in case of embedding map

 KL(qM(z|x)∥pM(z))=Ez∼qM(z|x)(logqM(z|x)−logpM(z))
 =Ezhid∼q(zhid|x)(logq(zhid|x)+log|detG(zhid)|−1/2−logp(zhid)−log|detG(zhid)|−1/2)
 =Ezhid∼q(zhid|x)(logq(zhid|x)−logp(zhid))
 =KL(q(zhid|x)∥p(zhid)),

where G denotes the metric tensor of the embedding f. As in Appendix 5.2, the corresponding terms cancel out.