
On Latent Distributions Without Finite Mean in Generative Models

by   Damian Leśniak, et al.

We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena that arise while decoding linear interpolations between two random latent vectors: such interpolations pass through regions of the latent space close to the origin, which causes a distribution mismatch. We show that, due to the Central Limit Theorem, this region is almost never sampled during the training process. As a result, linear interpolations may generate unrealistic data, and their usage as a tool to check the quality of a trained model is questionable. We propose using the multidimensional Cauchy distribution as the latent prior. The Cauchy distribution does not satisfy the assumptions of the CLT and has a number of properties that allow it to work well in conjunction with linear interpolations. We also provide two general methods for creating non-linear interpolations that are easily applicable to a large family of common latent distributions. Finally, we empirically analyze the quality of data generated from low-probability-mass regions for the DCGAN model on the CelebA dataset.
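The core phenomenon can be illustrated numerically. The sketch below (a minimal NumPy simulation, not code from the paper) shows that for a high-dimensional standard Gaussian prior the norms of samples concentrate around sqrt(d), while midpoints of linear interpolations concentrate around sqrt(d/2), so interpolations visit a region the model never sees in training. For an i.i.d. Cauchy prior, which is 1-stable, the midpoint of two samples follows the same distribution as the prior itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100     # latent dimension
n = 10_000  # number of sample pairs

# Gaussian prior: midpoints of linear interpolations have systematically
# smaller norm than prior samples (distribution mismatch near the origin).
z1 = rng.standard_normal((n, d))
z2 = rng.standard_normal((n, d))
mid = 0.5 * (z1 + z2)
print(np.linalg.norm(z1, axis=1).mean())   # concentrates near sqrt(d)   ~ 10
print(np.linalg.norm(mid, axis=1).mean())  # concentrates near sqrt(d/2) ~ 7.07

# Cauchy prior: the Cauchy distribution is 1-stable, so (c1 + c2) / 2 is
# again standard Cauchy -- interpolation midpoints match the prior exactly.
c1 = rng.standard_cauchy((n, d))
c2 = rng.standard_cauchy((n, d))
cmid = 0.5 * (c1 + c2)
# Compare component-wise medians of |.| (the Cauchy has no finite mean,
# so robust statistics are needed); both are ~1 for the standard Cauchy.
print(np.median(np.abs(c1)))
print(np.median(np.abs(cmid)))
```

The gap between the two Gaussian norms (a factor of sqrt(2)) is exactly the mismatch the abstract describes, and it widens in relative terms as the norm distribution tightens with growing dimension.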



