On Latent Distributions Without Finite Mean in Generative Models

06/05/2018
by Damian Leśniak, et al.

We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomenon that arises when decoding linear interpolations between two random latent vectors: regions of the latent space close to the origin are sampled, causing a distribution mismatch. We show that, due to the Central Limit Theorem, this region is almost never sampled during the training process. As a result, linear interpolations may generate unrealistic data, and their usage as a tool to check the quality of a trained model is questionable. We propose to use the multidimensional Cauchy distribution as the latent prior. The Cauchy distribution does not satisfy the assumptions of the CLT and has a number of properties that allow it to work well in conjunction with linear interpolations. We also provide two general methods of creating non-linear interpolations that are easily applicable to a large family of common latent distributions. Finally, we empirically analyze the quality of data generated from low-probability-mass regions for the DCGAN model on the CelebA dataset.
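
The following is a minimal numerical sketch (not the authors' code) of the distribution-mismatch argument summarized above; the dimension, sample count, and random seed are arbitrary illustrative choices. It contrasts a Gaussian prior, where midpoints of linear interpolations have much smaller norms than typical prior samples, with a Cauchy prior, where a convex combination of two i.i.d. standard Cauchy vectors is again standard Cauchy, so linear interpolation leaves the latent distribution unchanged.

import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 10_000  # latent dimension and number of sampled pairs (illustrative)

# Gaussian prior: by concentration of the norm, samples lie near a sphere
# of radius ~sqrt(d), while the midpoint of a linear interpolation has
# norm ~sqrt(d/2), i.e. it falls in a region the prior almost never produces.
z1, z2 = rng.standard_normal((n, d)), rng.standard_normal((n, d))
mid_gauss = 0.5 * z1 + 0.5 * z2
print("Gaussian endpoint mean norm:", np.linalg.norm(z1, axis=1).mean())
print("Gaussian midpoint mean norm:", np.linalg.norm(mid_gauss, axis=1).mean())

# Cauchy prior: if c1, c2 are i.i.d. standard Cauchy, then
# (1 - t) * c1 + t * c2 is again standard Cauchy for t in [0, 1],
# so the midpoint is distributed exactly like a prior sample.
c1, c2 = rng.standard_cauchy((n, d)), rng.standard_cauchy((n, d))
mid_cauchy = 0.5 * c1 + 0.5 * c2
print("Cauchy endpoint median |coordinate|:", np.median(np.abs(c1)))
print("Cauchy midpoint median |coordinate|:", np.median(np.abs(mid_cauchy)))

Both Cauchy medians come out near 1 (the median of |X| for a standard Cauchy variable), whereas the Gaussian midpoint norm is roughly sqrt(2) times smaller than the endpoint norm, which is the mismatch the abstract refers to.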
