On Latent Distributions Without Finite Mean in Generative Models

06/05/2018
by   Damian Leśniak, et al.

We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors: regions of latent space in close proximity to the origin are sampled, causing a distribution mismatch. We show that, due to the Central Limit Theorem, this region is almost never sampled during the training process. As a result, linear interpolations may generate unrealistic data, and their usage as a tool to check the quality of a trained model is questionable. We propose to use the multidimensional Cauchy distribution as the latent prior. The Cauchy distribution does not satisfy the assumptions of the CLT and has a number of properties that allow it to work well in conjunction with linear interpolations. We also provide two general methods of creating non-linear interpolations that are easily applicable to a large family of common latent distributions. Finally, we empirically analyze the quality of data generated from low-probability-mass regions for the DCGAN model on the CelebA dataset.


1 Introduction

1.1 Motivation and related work

Generative latent variable models have grown to be a very popular research topic, with Variational Auto-Encoders (VAEs) Kingma and Welling (2013) and Generative Adversarial Networks (GANs) Goodfellow et al. (2014) gaining a lot of research interest in the last few years. VAEs use a stochastic encoder network to embed input data in a typically lower-dimensional latent space, using a conditional probability distribution q(z | x) over possible latent space codes z. A stochastic decoder network is then used to reconstruct the original sample. GANs, on the other hand, use a generator network G that creates data samples G(z) from noise samples z drawn from a fixed prior distribution, and jointly train a discriminator network to distinguish between real and "fake" (i.e. generated) data.

Both of those model families use a specific prior distribution on the latent space. In those models the latent codes aim to "explain" the underlying features of the real data distribution without explicit access to it. One would expect a well-trained probabilistic model to encode the properties of the data. Typical priors for those latent codes are the multidimensional standard normal distribution N(0, I) or the uniform distribution on a hypercube such as [-1, 1]^D.

A linear interpolation between two latent vectors x^(1) and x^(2) is formally defined as a function

    f_L(x^(1), x^(2), λ) = (1 - λ) x^(1) + λ x^(2),    λ ∈ [0, 1],

which may be understood as a traversal along the shortest path between these two endpoints. We are interested in decoding data for several values of λ and inspecting how smooth the transition between the decoded data points is. Linear interpolations were utilized in previous work on generative models, mainly to show that the learned models do not overfit Kingma and Welling (2013); Goodfellow et al. (2014); Dumoulin et al. (2016) and that the latent space is able to capture the semantic content of the data Radford et al. (2015); Donahue et al. (2016). Linear interpolations can also be thought of as a special case of vector algebra in the code space, similar to the work done on word embeddings Mikolov et al. (2013).

While considered useful, linear interpolations used in conjunction with the most popular latent distributions are prone to traverse low-probability-mass regions. In high dimensions the norms of vectors drawn from the latent distribution concentrate around a certain value, so latent vectors are found near the surface of a sphere, which results in the latent space distribution resembling a soap bubble (fer). This is explained using the Central Limit Theorem (CLT), as we show in Section 2. Linear interpolations pass through the inside of this sphere with a probability high enough to drastically change the distribution of interpolated points in comparison to the prior distribution. This was reported by Kilcher et al. (2017) to result in flawed data generation.

Several approaches to counteract this phenomenon have been proposed: White (2016) recommended using spherical interpolations to avoid traversing unlikely regions; Agustsson et al. (2017) suggested normalizing the norms of the points along the interpolation to match the prior distribution; Kilcher et al. (2017) proposed using a modified prior distribution which saturates the origin of the latent space. Arvanitidis et al. (2017) give an interesting discussion of latent space traversal using the theory of Riemannian manifolds.

1.2 Main contributions

Firstly, we propose to use the Cauchy distribution as the prior in generative models. This results in points along linear interpolations being distributed identically to those sampled from the prior. This is possible because Cauchy distributed noise does not satisfy the assumptions of the CLT.

Furthermore, we present two general ways of defining non-linear interpolations for a given latent distribution. Similarly, we are able to force points along interpolations to be distributed according to the prior.

Lastly, we show that the DCGAN Radford et al. (2015) model trained on the CelebA Liu et al. (2015) dataset is able to generate sensible images from the region near the supposedly "empty" origin of the latent space. This is contrary to what has been reported so far, and we investigate this result further by evaluating models trained with specific pathological distributions.

1.3 Notations and mathematical conventions

The normal distribution with mean μ and variance σ² is denoted by N(μ, σ²), the uniform distribution on the interval [a, b] is denoted by U(a, b), and the Cauchy distribution with location x₀ and scale γ is denoted by C(x₀, γ). If not stated otherwise, the normal distribution has mean zero and variance one, the uniform distribution is defined on the interval [-1, 1], and the Cauchy distribution has location zero and scale one.

The dimension of the latent space is denoted by D.

Multidimensional random variables are written with capital letters, e.g. Z = (Z_1, …, Z_D). Lower indices denote coordinates of multidimensional random variables, e.g. Z_d. Upper indices denote independent samples from the same distribution, e.g. Z^(1), Z^(2). If not stated otherwise, D-dimensional distributions are defined as products of D independent, identically distributed one-dimensional distributions.

The norm used in this work is always the Euclidean norm.

2 The Cauchy distribution

Let us assume that we want to train a generative model which has a D-dimensional latent space and a fixed latent probability distribution defined by a random vector Z = (Z_1, …, Z_D). The coordinates Z_d, d = 1, …, D, are independent and identically distributed one-dimensional marginals; we use a single coordinate, say Z_1, as a representative of their common distribution.

For example, if Z_1 ∼ U(-1, 1), then Z is distributed uniformly on the hypercube [-1, 1]^D; if Z_1 ∼ N(0, 1), then Z is distributed according to the D-dimensional normal distribution with mean zero and identity covariance matrix.

In the aforementioned cases we observe the so-called soap bubble phenomenon: the values sampled from Z are concentrated close to a sphere of a fixed radius, contrary to low-dimensional intuition.

Observation 2.1.

Let us assume that Z_1² has finite mean m and finite variance s². Then ‖Z‖ approximately follows the normal distribution with mean √(mD) and variance s² / (4m).

Sketch of proof.

Recall that ‖Z‖² = Z_1² + … + Z_D². If Z_1, …, Z_D are independent and identically distributed, then Z_1², …, Z_D² are also independent and identically distributed. Using the central limit theorem we know that for large D

    (Z_1² + … + Z_D² - mD) / √D  ≈  N(0, s²),

from which it follows that

    Z_1² + … + Z_D²  ≈  N(mD, s²D),

and thus we can approximate the squared norm of Z as

    ‖Z‖²  ≈  N(mD, s²D).

Due to the nature of convergence in distribution, dividing or multiplying both sides by factors that tend to infinity does not break the approximation.

The final step is to take the square root of both random variables. In the proximity of mD the square root behaves approximately like an affine map with slope 1 / (2√(mD)). Additionally, N(mD, s²D) has width proportional to √D, which is negligible compared to mD, so we may apply this affine transformation to the normal distribution to approximate the square root for large D, which in the end gives us

    ‖Z‖  ≈  N(√(mD), s² / (4m)).  ∎

An application of this observation to the two most common latent space distributions:

  • if Z_1 ∼ U(-1, 1), then Z_1² has moments m = E[Z_1²] = 1/3 and s² = Var[Z_1²] = 4/45, thus ‖Z‖ ≈ N(√(D/3), 1/15),

  • if Z_1 ∼ N(0, 1), then Z_1² has moments m = 1 and s² = 2, thus ‖Z‖ ≈ N(√D, 1/2).

It is worth noting that the variance of the norm does not depend on D, which means that the distribution does not converge to the uniform distribution on the sphere of radius √(mD). Another fact worth noting is that the D-dimensional normal distribution with identity covariance matrix is isotropic, hence in this case the distribution resembles the uniform distribution on a sphere. On the other hand, the uniform distribution on the hypercube is also concentrated in close proximity to the surface of a sphere, but has regions of higher density corresponding to the directions defined by the hypercube's vertices.
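As a quick numerical illustration of Observation 2.1 (a minimal numpy sketch, not part of the original experiments), one can compare the empirical mean and standard deviation of the norms with the predicted values:

    import numpy as np

    D, n = 100, 100_000
    rng = np.random.default_rng(0)

    # Normal prior: Observation 2.1 predicts ||Z|| approx. N(sqrt(D), 1/2).
    norms = np.linalg.norm(rng.standard_normal((n, D)), axis=1)
    print("normal :", norms.mean(), norms.std(), "predicted:", np.sqrt(D), np.sqrt(1 / 2))

    # Uniform prior on [-1, 1]^D: predicted ||Z|| approx. N(sqrt(D/3), 1/15).
    norms = np.linalg.norm(rng.uniform(-1.0, 1.0, (n, D)), axis=1)
    print("uniform:", norms.mean(), norms.std(), "predicted:", np.sqrt(D / 3), np.sqrt(1 / 15))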

Now let us assume that we want to randomly draw two latent samples and interpolate linearly between them. We denote the two independent draws by Z^(1) and Z^(2) and examine the distribution of the random variable M = (Z^(1) + Z^(2)) / 2, i.e. the distribution of the middle points of linear interpolations between two vectors drawn independently from the distribution of Z. If the generative model was trained on noise sampled from Z and the distribution of M differs from that of Z, then data decoded from samples of M might be unrealistic, as such samples were never seen during training. One way to prevent this issue is to find a Z such that M is distributed identically to Z.

Observation 2.2.

If Z has a finite mean and M = (Z^(1) + Z^(2)) / 2 is distributed identically to Z, then Z must be concentrated at a single point.

Sketch of proof.

Using induction on n we can show that for every n the average of 2^n independent samples from the distribution of Z is distributed identically to Z. On the other hand, if E[Z] = μ, then by the law of large numbers the distribution of this average tends to the point mass at μ. Thus Z must be concentrated at μ. ∎

There have been attempts to find a prior Z with finite mean such that the distribution of M is at least similar to that of Z (Kilcher et al., 2017), where similarity was measured with the Kullback-Leibler divergence between the distributions. We extend this idea by using a specific distribution that has no finite mean, namely the multidimensional Cauchy distribution.

Let us start with a short review of useful properties of the Cauchy distribution in the one-dimensional case. Let C ∼ C(0, 1). Then:

  1. The probability density function of C is equal to f(x) = 1 / (π (1 + x²)).

  2. All moments of C of order greater than or equal to one are undefined. In particular, the location parameter should not be confused with the mean.

  3. If C^(1) and C^(2) are independent and distributed identically to C, then (C^(1) + C^(2)) / 2 is distributed identically to C. Furthermore, if λ ∈ [0, 1], then λ C^(1) + (1 - λ) C^(2) is also distributed identically to C.

  4. If C^(1), …, C^(n) are independent and distributed identically to C, and α_1, …, α_n ≥ 0 with α_1 + … + α_n = 1, then α_1 C^(1) + … + α_n C^(n) is distributed identically to C.

These are well-known facts about the Cauchy distribution, and proving them is a common exercise in statistics textbooks. However, to the best of our knowledge, the Cauchy distribution has never been used in the context of generative models. With this in mind, the most important take-away is the following observation:

Observation 2.3.

If Z is distributed according to the D-dimensional Cauchy distribution, then a linear interpolation between any number of latent points does not change the distribution.

Sketch of proof.

Let Z^(1), …, Z^(n) be independent and distributed according to the D-dimensional Cauchy distribution, and let α_1, …, α_n ≥ 0 with α_1 + … + α_n = 1 be fixed. By property 4 above, for every coordinate d = 1, …, D the combination α_1 Z_d^(1) + … + α_n Z_d^(n) is distributed identically to Z_d, and the coordinates remain independent. Thus α_1 Z^(1) + … + α_n Z^(n) is distributed identically to Z. ∎
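This invariance is easy to check numerically. The following scipy sketch (an illustration, not one of the paper's experiments) compares midpoints of pairs of samples against the original one-dimensional distribution with a Kolmogorov-Smirnov test; the Cauchy midpoints pass, while the normal midpoints do not:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200_000

    # Midpoints of pairs of independent standard Cauchy samples stay Cauchy-distributed.
    mid_cauchy = 0.5 * (rng.standard_cauchy(n) + rng.standard_cauchy(n))
    print("Cauchy midpoints vs C(0, 1):", stats.kstest(mid_cauchy, "cauchy"))

    # Midpoints of standard normal samples shrink towards the origin (variance 1/2).
    mid_normal = 0.5 * (rng.standard_normal(n) + rng.standard_normal(n))
    print("Normal midpoints vs N(0, 1):", stats.kstest(mid_normal, "norm"))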

We observed that the normal and uniform distributions are concentrated around a sphere with radius proportional to √D. The multidimensional Cauchy distribution, on the other hand, fills the latent space. It should be noted that for the D-dimensional Cauchy distribution the region near the origin of the latent space still carries little probability mass, similarly to the normal and uniform distributions.

Figure 1 shows a comparison between approximations of the density functions of ‖Z‖ for multidimensional normal, uniform and Cauchy distributions, and for the distribution proposed by Kilcher et al. (2017).

Figure 1: Comparison of approximate distribution of Euclidean norms for samples with increasing dimensionality from different probability distributions.
Figure 2: Comparison of samples from DCGAN trained on the Cauchy distribution and one trained on the distribution proposed by Kilcher et al. (2017).

The one-dimensional Cauchy distribution has heavy tails, hence we can expect that one of the coordinates will usually be substantially larger (in absolute value) than the others. This could potentially have a negative impact on the training of the GAN model, but we did not observe such difficulties. However, there is an obvious trade-off in using a distribution with heavy tails, as there will always be a number of samples with a very large norm. For those samples the generator will not be able to create sensible data points. A particular consequence of choosing a Cauchy-distributed prior in GANs is the fact that during inference there will always be a number of "failed" generated data points due to latent vectors being sampled from the tails. Some of these faulty examples are presented in appendix B. Figure 2 shows a set of samples from the DCGAN model trained on the CelebA dataset using the Cauchy distribution and the distribution from Kilcher et al. (2017), and Figure 3 shows linear interpolations on those two models.

Figure 3: Comparison of linear interpolations from DCGAN trained on the Cauchy distribution and one trained on the distribution proposed by Kilcher et al. (2017).

3 Interpolations

In this section we review current work on interpolations in high-dimensional latent spaces of generative models. We present two methods that perform well with noise priors that have finite first moments, i.e. a finite mean. Again, we define a linear interpolation between two points x^(1) and x^(2) as a function

    f_L(x^(1), x^(2), λ) = (1 - λ) x^(1) + λ x^(2),    λ ∈ [0, 1].

In some cases we will use the term interpolation for the image of the function, as opposed to the function itself. We list four properties an interpolation f can have that we believe are important in the context of generative models:

Property 1. The interpolation should be continuous with respect to x^(1) and x^(2).

Property 2. For every pair of endpoints x^(1), x^(2) the interpolation should represent the shortest path between the two endpoints.

Property 3. If two points y^(1) and y^(2) lie on the interpolation between x^(1) and x^(2), then the whole interpolation from y^(1) to y^(2) should be included in the interpolation between x^(1) and x^(2).

Property 4. If Z defines a distribution on the D-dimensional latent space and Z^(1), Z^(2) are independent and distributed identically to Z, then for every λ ∈ [0, 1] the random variable f(Z^(1), Z^(2), λ) should be distributed identically to Z.

The first property enforces that an interpolation should not make any jumps and that interpolations between pairs of similar endpoints should also be similar to each other. The second one is purposefully ambiguous: in the absence of any additional information about the latent space it feels natural to use the Euclidean metric and assume that only the linear interpolation has this property. There has been some work on equipping the latent space with a stochastic Riemannian metric Arvanitidis et al. (2017) that additionally depends on the generator function; with such a metric the shortest path can be defined using geodesics. The third property is closely associated with the second one and codifies common-sense intuition about shortest paths. The fourth property is in our minds the most important desideratum for an interpolation, similarly to what Kilcher et al. (2017) stated. To understand these properties better, we will now analyze the following interpolations.

3.1 Linear interpolation

The linear interpolation is defined as

    f_L(x^(1), x^(2), λ) = (1 - λ) x^(1) + λ x^(2).

It obviously has properties 1-3. Satisfying property 4 is impossible for the most commonly used probability distributions, as they have finite mean, which was shown in Observation 2.2.
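For reference, the interpolation schemes discussed in this section each fit in a few lines of numpy; the sketches below are illustrative rather than taken from the paper's code. The linear one:

    import numpy as np

    def linear_interpolation(x1: np.ndarray, x2: np.ndarray, lam: float) -> np.ndarray:
        """Point on the straight segment between x1 and x2, for lam in [0, 1]."""
        return (1.0 - lam) * x1 + lam * x2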

3.2 Spherical linear interpolation

As in Shoemake (1985); White (2016), the spherical linear interpolation is defined as

    f_S(x^(1), x^(2), λ) = sin((1 - λ) Ω) / sin(Ω) · x^(1) + sin(λ Ω) / sin(Ω) · x^(2),

where Ω is the angle between the vectors x^(1) and x^(2).

This interpolation is continuous nearly everywhere (with the exception of antiparallel endpoint vectors) and satisfies property 3. It satisfies property 2 in the following sense: if the vectors x^(1) and x^(2) have the same length r, then the interpolation corresponds to a geodesic on the sphere of radius r. Furthermore:

Observation 3.1.

Property 4 is satisfied if Z has the uniform distribution on the zero-centered sphere of radius r.

Sketch of proof.

Let λ ∈ [0, 1] and let Z be distributed uniformly on the zero-centered sphere. The distribution of the pair (Z^(1), Z^(2)) is the product of two uniform distributions on the sphere, and is thus invariant to all isometries of the sphere. Then f_S(Z^(1), Z^(2), λ) must also be invariant to all isometries of the sphere, and the only probability distribution having this property is the uniform distribution. ∎
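A corresponding sketch (the antiparallel case, where the formula is undefined, is left unhandled, as noted above):

    import numpy as np

    def spherical_interpolation(x1: np.ndarray, x2: np.ndarray, lam: float) -> np.ndarray:
        """Spherical linear interpolation (slerp); undefined for antiparallel x1, x2."""
        u1, u2 = x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2)
        omega = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))  # angle between x1 and x2
        return (np.sin((1.0 - lam) * omega) * x1 + np.sin(lam * omega) * x2) / np.sin(omega)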

3.3 Normalized interpolation

Introduced in Agustsson et al. (2017), the normalized interpolation (originally referred to as distribution matched) is defined as

    f_N(x^(1), x^(2), λ) = ((1 - λ) x^(1) + λ x^(2)) / √((1 - λ)² + λ²).

It satisfies property 1, but neither property 2 nor 3, which can be easily shown in the extreme case of λ = 1/2. As for property 4:

Observation 3.2.

The normalized interpolation satisfies property 4 if Z ∼ N(0, I).

Sketch of proof.

Let λ ∈ [0, 1]. The random variables Z^(1) and Z^(2) are both distributed according to N(0, I). Then, using elementary properties of the normal distribution,

    (1 - λ) Z^(1) + λ Z^(2)  ∼  N(0, ((1 - λ)² + λ²) I),

so dividing by √((1 - λ)² + λ²) yields a random variable distributed according to N(0, I). ∎

If the vectors x^(1) and x^(2) are orthogonal and have equal length, then this interpolation is equal to the spherical linear interpolation from the previous section.
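A sketch of this scheme, assuming the Gaussian-matched form given above:

    import numpy as np

    def normalized_interpolation(x1: np.ndarray, x2: np.ndarray, lam: float) -> np.ndarray:
        """Normalized ("distribution matched") interpolation for an N(0, I) prior."""
        return ((1.0 - lam) * x1 + lam * x2) / np.sqrt((1.0 - lam) ** 2 + lam ** 2)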

3.4 Cauchy-linear interpolation.

Here we present a general way of designing interpolations that satisfy properties 1, 3, and 4. Let:

  • R^D be the D-dimensional latent space,

  • Z define the probability distribution on the latent space,

  • C be distributed according to the D-dimensional Cauchy distribution on R^D,

  • S be a subset of R^D such that all mass of Z is concentrated on this set,

  • g: S → R^D be a bijection such that g(Z) is distributed identically to C.

Then for x^(1), x^(2) ∈ S we define the Cauchy-linear interpolation as

    f_CL(x^(1), x^(2), λ) = g^(-1)((1 - λ) g(x^(1)) + λ g(x^(2))).

In other words, for endpoints x^(1), x^(2):

  1. Transform x^(1) and x^(2) using g.

  2. Linearly interpolate between the transformed points to get (1 - λ) g(x^(1)) + λ g(x^(2)) for all λ.

  3. Transform the result back to the original space using g^(-1).

With some additional assumptions we can define g as C_CDF^(-1) ∘ F, where C_CDF^(-1) is the inverse of the cumulative distribution function (CDF) of the Cauchy distribution and F is the CDF of the original distribution of Z. If additionally Z is distributed identically to a product of independent one-dimensional distributions, then we can use this formula coordinate-wise.
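A coordinate-wise sketch for a standard normal prior, using scipy's distribution functions (the normal prior here is only an example of such an F):

    import numpy as np
    from scipy.stats import cauchy, norm

    def cauchy_linear(x1: np.ndarray, x2: np.ndarray, lam: float) -> np.ndarray:
        """Cauchy-linear interpolation for a coordinate-wise N(0, 1) prior."""
        def g(x):      # g = C_CDF^{-1} o F, applied coordinate-wise
            return cauchy.ppf(norm.cdf(x))
        def g_inv(y):  # g^{-1} = F^{-1} o C_CDF
            return norm.ppf(cauchy.cdf(y))
        return g_inv((1.0 - lam) * g(x1) + lam * g(x2))

By construction, if x1 and x2 are drawn from N(0, I), the interpolated point is again N(0, I)-distributed for every value of lam.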

Observation 3.3.

With the above assumptions the Cauchy-linear interpolation satisfies property 4.

Sketch of proof.

Let λ ∈ [0, 1]. First observe that g(Z^(1)) and g(Z^(2)) are independent and distributed identically to the D-dimensional Cauchy distribution. Likewise, by Observation 2.3, (1 - λ) g(Z^(1)) + λ g(Z^(2)) follows the D-dimensional Cauchy distribution. By the assumption on g, applying g^(-1) gives a random variable distributed identically to Z. ∎

3.5 Spherical Cauchy-linear interpolation.

We might want to enforce the interpolation to have some other desired properties, for example to behave exactly like the spherical linear interpolation whenever the endpoints have equal norms. For that purpose we require additional assumptions. Let:

  • Z be distributed isotropically,

  • C be distributed according to the one-dimensional Cauchy distribution,

  • h be a bijection such that h(‖Z‖) is distributed identically to C.

Then we can modify the spherical linear interpolation formula to define what we call the spherical Cauchy-linear interpolation:

    f_SCL(x^(1), x^(2), λ) = h^(-1)((1 - λ) h(‖x^(1)‖) + λ h(‖x^(2)‖)) · ( sin((1 - λ) Ω) / sin(Ω) · x^(1) / ‖x^(1)‖ + sin(λ Ω) / sin(Ω) · x^(2) / ‖x^(2)‖ ),

where Ω is the angle between the vectors x^(1) and x^(2). In other words:

  1. Interpolate the directions of latent vectors using spherical linear interpolation.

  2. Interpolate the norms using the Cauchy-linear interpolation from the previous section.

Again, with some additional assumptions we can define h as C_CDF^(-1) ∘ F_‖Z‖, where F_‖Z‖ is the CDF of ‖Z‖. For example, let Z have the D-dimensional normal distribution with zero mean and identity covariance matrix. Then ‖Z‖² ∼ χ²(D), i.e. ‖Z‖ follows the chi distribution with D degrees of freedom. Thus we set h = C_CDF^(-1) ∘ F_χ(D), with F_χ(D) the CDF of the chi distribution with D degrees of freedom.
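A sketch for this N(0, I) case, combining the slerp direction with a Cauchy-linear interpolation of the norms through the chi distribution (illustrative only):

    import numpy as np
    from scipy.stats import cauchy, chi

    def spherical_cauchy_linear(x1: np.ndarray, x2: np.ndarray, lam: float) -> np.ndarray:
        """Spherical Cauchy-linear interpolation for an N(0, I_D) prior."""
        D = x1.shape[0]
        def h(r):      # h = C_CDF^{-1} o F_chi(D)
            return cauchy.ppf(chi.cdf(r, df=D))
        def h_inv(c):  # h^{-1}
            return chi.ppf(cauchy.cdf(c), df=D)

        # Direction: slerp on the unit sphere.
        u1, u2 = x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2)
        omega = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))
        direction = (np.sin((1.0 - lam) * omega) * u1 + np.sin(lam * omega) * u2) / np.sin(omega)

        # Norm: one-dimensional Cauchy-linear interpolation of the endpoint norms.
        norm = h_inv((1.0 - lam) * h(np.linalg.norm(x1)) + lam * h(np.linalg.norm(x2)))
        return norm * direction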

Observation 3.4.

With the assumptions as above, the spherical Cauchy-linear interpolation satisfies property 4.

Sketch of proof.

We will use the fact that two isotropic probability distributions are equal if the distributions of their Euclidean norms are equal. Write Z^(i) = R^(i) U^(i), where R^(i) = ‖Z^(i)‖ and U^(i) = Z^(i) / ‖Z^(i)‖. The following holds:

  • All of the following random variables are independent: R^(1), R^(2), U^(1), U^(2).

  • R^(1) and R^(2) are both distributed identically to ‖Z‖.

  • U^(1) and U^(2) are both distributed uniformly on the sphere of radius 1.

Let λ ∈ [0, 1]. Note that

(1)    f_S(U^(1), U^(2), λ)

is uniformly distributed on the sphere of radius 1 (by Observation 3.1), which is a property of the spherical linear interpolation. The norm of f_SCL(Z^(1), Z^(2), λ) is distributed according to h^(-1)((1 - λ) h(R^(1)) + λ h(R^(2))), which is independent of (1). Thus we have shown that f_SCL(Z^(1), Z^(2), λ) is isotropic.

For the equality of norm distributions we use the defining property of the Cauchy-linear interpolation: h^(-1)((1 - λ) h(R^(1)) + λ h(R^(2))) is distributed identically to ‖Z‖. Thus the norm of f_SCL(Z^(1), Z^(2), λ) is distributed identically to ‖Z‖. ∎

Figure 4 shows a comparison of Cauchy-linear and spherical Cauchy-linear interpolations in the 2D plane for data points sampled from different distributions. Figure 5 shows the smoothness of the Cauchy-linear interpolation and a comparison between all the aforementioned interpolations. We also compare the data samples decoded from the interpolations by the DCGAN model trained on the CelebA dataset; results are shown in Figure 6.

(a) Uniform
(b) Cauchy
(c) Normal
(d) Normal - Spherical
Figure 4: Visual comparison of Cauchy-linear interpolation (a, b, and c) for points sampled from different distributions and spherical Cauchy-linear (d).
(a)
(b)
Figure 5: Visual representation of interpolation property 1 shown for Cauchy-linear interpolation (a) and comparison between all considered interpolation methods for two points sampled from a normal distribution (b) on a 2D plane.

We briefly list the conclusions of this chapter. Firstly, linear interpolation is the best choice if one does not care about the fourth property, i.e. interpolation samples being distributed identically to the endpoints. Secondly, for every continuous distribution satisfying some additional assumptions we can define an interpolation that has a consistent distribution between endpoints and mid-samples, but it will not satisfy property 2, i.e. it will not follow the shortest path in Euclidean space. Lastly, there exist distributions for which linear interpolation satisfies the fourth property, but those distributions cannot have finite mean.

To combine the conclusions of the last two chapters: in our opinion there is a clear direction in which one should search for prior distributions for generative models, namely distributions for which the linear interpolation satisfies all four properties listed above. On the other hand, we have shown in this chapter that if one would rather stick to the more popular prior distributions, it is fairly simple to define a non-linear interpolation with consistent distributions between endpoints and midpoints.

Figure 6: Different interpolations on latent space of a GAN model trained on standard Normal distribution.

4 Filling the Void

In this section we investigate the claim that generative models will generate unrealistic or faulty data in close proximity to the origin of the latent space Kilcher et al. (2017). We tried different experimental settings and were largely unsuccessful in replicating this phenomenon. The results of our experiments are summarized in Figure 7. Even with a higher latent space dimensionality, the DCGAN model trained on the CelebA dataset was able to generate face-like images, although with a high amount of noise. We investigated this result further and empirically concluded that the effect of filling the origin of the latent space emerges during the late epochs of training. Figure 8 shows linear interpolations through the origin of the latent space throughout the training process.

Figure 7: Linear interpolations through origin of latent space for different experimental setups: a) uniform noise distribution on , b) uniform noise distribution on a sphere , c) fully connected layers, d) normal noise distribution with latent space dimensionality , e) Normal noise distribution with latent space dimensionality
Figure 8: Emergence of sensible samples decoded near the origin of latent space throughout the training process demonstrated in interpolations between opposite points from latent space.

Data samples generated from latent vectors located strictly inside the high-probability-mass sphere may not be distributed identically to the samples used in training, but judging from the decoded results they seem to lie on the data manifold. On the other hand, we observed that data generated using latent vectors with a very large norm, i.e. far outside the sphere, are unrealistic. This might be due to the architecture, specifically the exclusive use of the ReLU activation function. Because of that, input vectors with large norms result in abnormally high activations just before the final saturating nonlinearity (usually a tanh or sigmoid function), which in turn makes the decoded images highly color-saturated. It seems unsurprising that the largest norm of latent samples that still decodes sensibly is related to the norms of latent vectors seen during training.
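A toy numpy illustration of this mechanism (not the DCGAN architecture, just a bias-free random ReLU stack): pre-activations scale linearly with the norm of the input, so a saturating output nonlinearity such as tanh clips them for large latent norms.

    import numpy as np

    rng = np.random.default_rng(0)
    D, H = 100, 256
    W1 = rng.normal(0.0, 0.05, (H, D))  # toy "generator" weights
    W2 = rng.normal(0.0, 0.05, (1, H))

    def pre_tanh(z):
        """One ReLU hidden layer, then the pre-activation of a tanh output unit."""
        return W2 @ np.maximum(W1 @ z, 0.0)

    z = rng.standard_normal(D)
    for scale in (1, 10, 100):
        a = pre_tanh(scale * z)[0]
        print(f"latent norm x{scale:>3}: pre-tanh {a:9.3f}, tanh output {np.tanh(a): .4f}")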

The only explanation we see for the fact that the model is able to generate sensible data from out-of-distribution samples is a very strong model prior, both in the architecture and in the training algorithm. We decided to empirically test the strength of this prior in the experiments described below.

(a) Samples from train and test distributions.
(b) Linear interpolations between random points from train distribution.
Figure 9: Images generated from a DCGAN model trained on 100-dimensional normal distribution. Train samples (from the distribution used in training) and test samples (same as train distribution but with lower variance) (a) and interpolations between random points from the training distribution (b).

We trained a DCGAN model on the CelebA dataset using a set of different noise distributions, all of which should suffer from the aforementioned empty region at the origin of the latent space. Afterwards, using those models, we generated images decoded from out-of-distribution samples. We would not expect to obtain sensible images, as those latent samples should never have been seen during training. We visualize samples decoded from inside the high-probability-mass sphere and linear interpolations traversing through it. We test a few different prior distributions: the normal distribution, the uniform distribution on a hypercube, the uniform distribution on a sphere, and a discrete uniform distribution. Experiment results are shown in Figure 9, with more in appendix D.

(a) Samples from train and test distributions.
(b) Linear interpolations between random points from train distribution.
Figure 10: Images generated from a DCGAN model trained on a sparse 100-dimensional normal distribution. Train samples (from the distribution used in training) and test samples (samples from a dense normal distribution with adjusted norm) (a) and interpolations between random points from the training distribution (b).
Figure 11: Visualization of the sparse training distribution in the 3-dimensional case, brown points being sampled from the train distribution, green from the test distribution.

It might be that the origin of the latent space is surrounded by the prior distribution, which may be the reason the model can generate sensible data from it. Thus we decided to test whether the model still works if we train it on an explicitly sparse distribution. We designed a pathological prior distribution in which, after sampling from a given distribution, e.g. the normal distribution, we randomly draw a subset of coordinates and set them all to zero. Figure 11 shows samples from such a distribution in the 3-dimensional case. Again, we trained the DCGAN model and generated images using latent samples from the dense distribution, multiplying them beforehand by a constant factor to keep the norms consistent with those used in training. Results are shown in Figure 10, with more in the appendix.
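A minimal sketch of such a sparse prior; the number of zeroed coordinates is a free parameter here, since the exact count used in the experiments is not restated above:

    import numpy as np

    def sample_sparse_normal(n, dim=100, num_zeroed=50, seed=None):
        """Sample from N(0, I_dim), then zero out a random subset of coordinates per sample.

        num_zeroed is a hypothetical value used only for illustration.
        """
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n, dim))
        for row in z:
            row[rng.choice(dim, size=num_zeroed, replace=False)] = 0.0
        return z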

Lastly, we wanted to check whether the model is capable of generating sensible data from regions of latent space that are completely orthogonal to those seen during training. We created another pathological prior distribution in which, after sampling from a given distribution (e.g. the normal distribution), we set the last 50 coordinates of each point to zero. As before, we trained the DCGAN model and generated images using samples from the original distribution, but this time with the first 50 dimensions set to zero. Results are shown in Figure 12, with more in the appendix.

Figure 12: Images decoded from the DCGAN model trained on a pathological 100-dimensional prior distribution with the 50 last coordinates set to zero, for different latent samples: a) orthogonal test distribution with the 50 first coordinates set to 0, b) train distribution, c) samples from train and test added element-wise, d) samples from test multiplied by a factor of 3 and train added element-wise.

To conclude our experiments we briefly remark on the results. In the first experiment, images generated from the test distribution are clearly meaningful despite being decoded from samples from low-probability-mass regions. One thing to note is that they are clearly less diverse than those from the training distribution.

The decoded images from our second experiment are, in our opinion, similar enough between the train and test distributions to conclude that the DCGAN model does not suffer from training on a sparse latent distribution and is able to generalize without any additional mechanisms.

Our last experiment shows that while the DCGAN is still able to generate sensible images from regions orthogonal to the space seen during training, those regions still affect the generative power and may lead to unrealistic data if they increase the activations in the network enough.

We observed that problems with a hollow latent space and linear interpolations might be caused by stopping the training early or using a weak model architecture. This leads us to the conclusion that one needs to be very careful when comparing the effectiveness of different latent probability distributions and interpolation methods.

5 Summary

We investigated the properties of multidimensional probability distributions in the context of latent noise distributions of generative models. In particular, we looked for distribution-interpolation pairs for which the distributions of interpolation endpoints and midpoints are identical.

We have shown that using the D-dimensional Cauchy distribution as the latent probability distribution makes linear interpolations between any number of latent points satisfy this consistency property. We have also shown that for popular priors with finite mean, it is impossible for linear interpolations to satisfy this property. We argue that one can fairly easily find a non-linear interpolation satisfying this property for a given prior, which makes the search for such interpolations less interesting. These results are formal and hold for every generative model with a fixed distribution on the latent space. Although, as we have shown, the Cauchy distribution comes with a few useful theoretical properties, it is still perfectly fine to use the normal or uniform distribution, as long as the model is powerful enough.

We also observed empirically that DCGANs, if trained long enough, are capable of generating sensible data from out-of-distribution latent samples. We tested several pathological latent priors to give a glimpse of what the model is capable of. At this moment we are unable to explain this phenomenon and point to it as a very interesting direction for future work.

References

Appendix A Experimental setup

All experiments are run using the DCGAN model. The generator network consists of a linear layer with 8192 neurons, followed by four transposed convolution layers, each with stride 2 and 256, 128, 64, and 3 filters respectively. Except for the output layer, where the tanh activation function is used, all layers use ReLU. The discriminator's architecture mirrors that of the generator, with the single exception of using leaky ReLU instead of vanilla ReLU for all but the last layer. No batch normalization is used in either network. We use the Adam optimizer with a fixed learning rate and momentum. A batch size of 64 is used throughout all experiments. If not explicitly stated otherwise, the latent space dimension is 100 and the noise is sampled from a multidimensional normal distribution N(0, I). For the CelebA dataset we resize the input images to a fixed resolution. The code to reproduce all our experiments is available at: coming soon!
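For concreteness, a rough PyTorch sketch consistent with this description is given below. The kernel size (not specified above) is assumed to be 5, the 8192-unit layer is assumed to be reshaped to 4x4x512, and the padding is chosen so that each transposed convolution doubles the spatial resolution; this is an illustration, not the authors' code.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Sketch of the described generator: an 8192-unit linear layer (reshaped here to
        4x4x512) followed by four stride-2 transposed convolutions with 256, 128, 64, 3 filters."""

        def __init__(self, latent_dim: int = 100, kernel_size: int = 5):  # kernel size assumed
            super().__init__()
            self.fc = nn.Linear(latent_dim, 8192)
            pad, out_pad = kernel_size // 2, 1  # doubles the spatial resolution at stride 2

            def up(c_in, c_out):
                return nn.ConvTranspose2d(c_in, c_out, kernel_size, stride=2,
                                          padding=pad, output_padding=out_pad)

            self.deconv = nn.Sequential(
                up(512, 256), nn.ReLU(),
                up(256, 128), nn.ReLU(),
                up(128, 64), nn.ReLU(),
                up(64, 3), nn.Tanh(),  # tanh on the output layer only
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            x = torch.relu(self.fc(z)).view(-1, 512, 4, 4)
            return self.deconv(x)

    # Example: decode a batch of latent vectors sampled from N(0, I).
    g = Generator()
    images = g(torch.randn(8, 100))  # shape (8, 3, 64, 64) under the assumptions above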

Appendix B Cauchy distribution - samples and interpolations

Figure 13: Generated images from samples from Cauchy distribution with occasional "failed" images from tails of the distribution.
Figure 14: Generated images from samples from Cauchy distribution with different latent space dimensionality.
Figure 15: Linear interpolations between random points on a GAN trained on Cauchy distribution.
Figure 16: Linear interpolations between opposite points on a GAN trained on Cauchy distribution.
Figure 17: Linear interpolations between hand-picked points from tails of the Cauchy distribution.

Appendix C More Cauchy-linear and spherical Cauchy-linear interpolations

Figure 18: Cauchy-linear interpolations between opposite points on a GAN trained on Normal distribution.
Figure 19: Cauchy-linear interpolations between random points on a GAN trained on Normal distribution.
Figure 20: Spherical Cauchy-linear interpolations between random points on a GAN trained on Normal distribution.

Appendix D More experiments with hollow latent space

(a) Samples from train and test distributions.
(b) Linear interpolations between random points from train distribution.
Figure 21: Images generated from a DCGAN model trained on 100-dimensional uniform distribution on hypercube . Train samples (from the distribution used in training) and test samples (same as train distribution but multiplied by ) (a) and interpolations between random points from the training distribution (b).
(a) Samples from train and test distributions.
(b) Linear interpolations between random points from train distribution.
Figure 22: Images generated from a DCGAN model trained on 100-dimensional uniform distribution on sphere . Train samples (from the distribution used in training) and test samples (same as train distribution but multiplied by ) (a) and interpolations between random points from the training distribution (b).
(a) Samples from train and test distributions.
(b) Linear interpolations between random points from train distribution.
Figure 23: Images generated from a DCGAN model trained on 100-dimensional discrete distribution . Train samples (from the distribution used in training) and test samples (same as train distribution but multiplied by ) (a) and interpolations between random points from the training distribution (b).