1.1 Motivation and related work
Generative latent variable models have become a very popular research topic, with Variational Auto-Encoders (VAEs) Kingma and Welling (2013) and Generative Adversarial Networks (GANs) Goodfellow et al. (2014) gaining substantial research interest in the last few years. VAEs use a stochastic encoder
network to embed input data in a typically lower-dimensional space, using a conditional probability distribution $q(\mathbf{z} \mid \mathbf{x})$ over possible latent space codes $\mathbf{z}$. A stochastic decoder network is then used to reconstruct the original sample. GANs, on the other hand, use a generator network that creates data samples $G(\mathbf{z})$ from noise samples $\mathbf{z} \sim P$, where $P$ is a fixed prior distribution, and jointly train a discriminator network to distinguish between real and "fake" (i.e. generated) data.
Both of these model families place a specific prior distribution on the latent space. In these models the latent codes aim to "explain" the underlying features of the real data distribution without explicit access to it. One would expect a well-trained probabilistic model to encode the properties of the data. Typical priors for these latent codes are the multidimensional standard normal distribution or the uniform distribution on a hypercube.
A linear interpolation between two latent vectors $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}$ is formally defined as a function $f_L(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = (1 - \lambda)\mathbf{x}^{(1)} + \lambda \mathbf{x}^{(2)}$ for $\lambda \in [0, 1]$,
which may be understood as a traversal along the shortest path between these two endpoints. We are interested in decoding data for several values of $\lambda$ and inspecting how smooth the transition between the decoded data points is. Linear interpolations were utilized in previous work on generative models, mainly to show that the learned models do not overfit Kingma and Welling (2013); Goodfellow et al. (2014); Dumoulin et al. (2016) and that the latent space is able to capture the semantic content of the data Radford et al. (2015); Donahue et al. (2016). Linear interpolations can also be thought of as a special case of vector algebra in the code space, similarly to the work done on word embeddings Mikolov et al. (2013).
While considered useful, linear interpolations used in conjunction with the most popular latent distributions are prone to traverse low-probability-mass regions. In high dimensions the norms of vectors drawn from the latent distribution are concentrated around a certain value, so latent vectors are found near the surface of a sphere, which results in the latent distribution resembling a soap bubble (see the "Gaussian distributions are soap bubbles" blog post). This is explained using the Central Limit Theorem (CLT), as we show in Section 2. Linear interpolations pass through the inside of the sphere with high enough probability to drastically change the distribution of interpolated points in comparison to the prior distribution. This was reported by Kilcher et al. (2017) to result in flawed data generation.
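This norm concentration, and the way linear midpoints escape it, can be illustrated numerically. The following sketch is ours (not from the paper) and assumes a standard normal prior with illustrative sizes $D = 100$ and $n = 20000$; prior norms concentrate near $\sqrt{D}$, while midpoints of linear interpolations concentrate near $\sqrt{D/2}$, i.e. inside the "bubble":

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 100, 20_000

# Two independent batches of latent vectors from the standard normal prior.
z1 = rng.standard_normal((n, D))
z2 = rng.standard_normal((n, D))

# Norms of prior samples concentrate near sqrt(D) = 10 ("soap bubble").
prior_norms = np.linalg.norm(z1, axis=1)

# Midpoints of linear interpolations have norms concentrated near
# sqrt(D / 2) instead, i.e. they fall inside the bubble.
mid_norms = np.linalg.norm((z1 + z2) / 2, axis=1)

print(prior_norms.mean())  # close to 10
print(mid_norms.mean())    # close to sqrt(50), about 7.07
```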
Some approaches to counteract this phenomenon have been proposed: White (2016) recommended using spherical interpolations to avoid traversing unlikely regions; Agustsson et al. (2017) suggested normalizing the norms of the points along the interpolation to match the prior distribution; Kilcher et al. (2017) proposed using a modified prior distribution that saturates the origin of the latent space. Arvanitidis et al. (2017) give an interesting discussion of latent space traversal using the theory of Riemannian geometry.
1.2 Main contributions
Firstly, we propose to use the Cauchy distribution as the prior in generative models. This results in points along linear interpolations being distributed identically to those sampled from the prior. This is possible because Cauchy distributed noise does not satisfy the assumptions of the CLT.
Furthermore, we present two general ways of defining non-linear interpolations for a given latent distribution. In this way we are likewise able to force points along interpolations to be distributed according to the prior.
Lastly, we show that the DCGAN Radford et al. (2015) model trained on the CelebA Liu et al. (2015) dataset is able to generate sensible images from the region near the supposedly "empty" origin of the latent space. This is contrary to what has been reported so far, and we further investigate this result empirically by evaluating models trained with specific pathological distributions.
1.3 Notations and mathematical conventions
The normal distribution with mean $\mu$ and variance $\sigma^2$ is denoted by $\mathcal{N}(\mu, \sigma^2)$, the uniform distribution on the interval $[a, b]$ is denoted by $U(a, b)$, and the Cauchy distribution with location $\mu$ and scale $\gamma$ is denoted by $C(\mu, \gamma)$. If not stated otherwise, the normal distribution has mean zero and variance one, the uniform distribution is defined on the interval $[-1, 1]$, and the Cauchy distribution has location zero and scale one.
The dimension of the latent space is denoted by $D$.
Multidimensional random variables are written in bold, e.g. $\mathbf{Z}$. Lower indices denote coordinates of multidimensional random variables, e.g. $Z_i$. Upper indices denote independent samples from the same distribution, e.g. $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$. If not stated otherwise, $D$-dimensional distributions are defined as products of $D$ independent, identical one-dimensional distributions.
The norm used in this work is always the Euclidean norm.
2 The Cauchy distribution
Let us assume that we want to train a generative model which has a $D$-dimensional latent space and a fixed latent probability distribution defined by a random variable $\mathbf{Z} = (Z_1, \ldots, Z_D)$. The coordinates $Z_i$ are independent, and we let $Y$ denote a one-dimensional random variable distributed identically to every $Z_i$, where $i \in \{1, \ldots, D\}$.
For example, if $Y \sim U(-1, 1)$, then $\mathbf{Z}$ is distributed uniformly on the hypercube $[-1, 1]^D$; if $Y \sim \mathcal{N}(0, 1)$, then $\mathbf{Z}$ is distributed according to the $D$-dimensional normal distribution with mean zero and identity covariance matrix.
In the aforementioned cases we observe the so-called soap bubble phenomenon – the values sampled from $\mathbf{Z}$ are concentrated close to a $(D-1)$-dimensional sphere, contrary to the low-dimensional intuition.
Let us assume that $Y^2$ has finite mean $\mu$ and finite variance $v$. Then, for large $D$, $\lVert \mathbf{Z} \rVert$ approximates the normal distribution with mean $\sqrt{D\mu}$ and variance $v/(4\mu)$.
Sketch of proof.
Recall that $\lVert \mathbf{Z} \rVert^2 = \sum_{i=1}^D Z_i^2$. If $Z_1, \ldots, Z_D$ are independent and distributed identically to $Y$, then $Z_1^2, \ldots, Z_D^2$ are independent and distributed identically to $Y^2$. Using the central limit theorem we know that for large $D$
$$\frac{\sum_{i=1}^D Z_i^2 - D\mu}{\sqrt{Dv}} \approx \mathcal{N}(0, 1),$$
from which it follows that
$$\sum_{i=1}^D Z_i^2 \approx \mathcal{N}(D\mu, Dv),$$
and thus we can approximate the squared norm of $\mathbf{Z}$ as $\lVert \mathbf{Z} \rVert^2 \approx \mathcal{N}(D\mu, Dv)$.
Due to the nature of convergence in distribution, dividing or multiplying both sides by factors that tend to infinity does not break the approximation.
The final step is to take the square root of both random variables. In the proximity of $D\mu$, the square root behaves approximately like scaling by the constant $1/(2\sqrt{D\mu})$. Additionally, $\mathcal{N}(D\mu, Dv)$ has width proportional to $\sqrt{D}$, so we may apply an affine transformation to the normal distribution to approximate the square root for large $D$, which in the end gives us
$$\lVert \mathbf{Z} \rVert \approx \mathcal{N}\!\left(\sqrt{D\mu},\, \frac{v}{4\mu}\right). \qquad \square$$
An application of this observation to the two most common latent space distributions:
- if $Y \sim \mathcal{N}(0, 1)$, then $Y^2$ has moments $\mu = E[Y^2] = 1$, $v = \operatorname{Var}(Y^2) = 2$, thus $\lVert \mathbf{Z} \rVert \approx \mathcal{N}(\sqrt{D}, 1/2)$,
- if $Y \sim U(-1, 1)$, then $Y^2$ has moments $\mu = E[Y^2] = 1/3$, $v = \operatorname{Var}(Y^2) = 4/45$, thus $\lVert \mathbf{Z} \rVert \approx \mathcal{N}(\sqrt{D/3}, 1/15)$.
It is worth noting that the variance of the norm does not depend on $D$, which means that the distribution does not converge to the uniform distribution on the sphere of radius $\sqrt{D\mu}$. Another fact worth noting is that the $D$-dimensional normal distribution with identity covariance matrix is isotropic, hence this distribution resembles the uniform distribution on a sphere. On the other hand, the uniform distribution on the hypercube is also concentrated in close proximity to the surface of a sphere, but has regions of higher density in the directions defined by the hypercube's vertices.
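These approximations can be verified numerically. The sketch below is ours, with an illustrative $D = 400$: for a standard normal prior, $\lVert \mathbf{Z} \rVert$ should be close to $\mathcal{N}(\sqrt{D}, 1/2)$, and for a uniform prior on $[-1, 1]$, close to $\mathcal{N}(\sqrt{D/3}, 1/15)$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, n = 400, 20_000

# Standard normal prior: mu = E[Y^2] = 1, v = Var(Y^2) = 2,
# so ||Z|| should be close to N(sqrt(D), v / (4 mu)) = N(20, 1/2).
norms_normal = np.linalg.norm(rng.standard_normal((n, D)), axis=1)

# Uniform prior on [-1, 1]: mu = 1/3, v = 4/45,
# so ||Z|| should be close to N(sqrt(D / 3), 1/15).
norms_uniform = np.linalg.norm(rng.uniform(-1, 1, (n, D)), axis=1)

print(norms_normal.mean(), norms_normal.std())    # about 20 and 0.707
print(norms_uniform.mean(), norms_uniform.std())  # about 11.55 and 0.258
```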
Now let us assume that we want to randomly draw two latent samples and interpolate linearly between them. We denote the two independent draws by $\mathbf{Z}^{(1)}$ and $\mathbf{Z}^{(2)}$, and examine the distribution of the random variable $\mathbf{M} = (\mathbf{Z}^{(1)} + \mathbf{Z}^{(2)})/2$, i.e. the distribution of the middle points of linear interpolations between two vectors drawn independently from $\mathbf{Z}$. If the generative model was trained on noise sampled from $\mathbf{Z}$, and if the distribution of $\mathbf{M}$ differs from that of $\mathbf{Z}$, then data decoded from samples of $\mathbf{M}$ might be unrealistic, as such samples were never seen during training. One way to prevent this issue is to find $\mathbf{Z}$ such that $\mathbf{M}$ is distributed identically to $\mathbf{Z}$.
If $Y$ has a finite mean and $\mathbf{M}$ and $\mathbf{Z}$ are identically distributed, then $\mathbf{Z}$ must be concentrated at a single point.
Sketch of proof.
Using induction on $n$ we can show that for all $n$ the average of $2^n$ independent samples from $\mathbf{Z}$ is distributed identically to $\mathbf{Z}$. On the other hand, if $E[Y] = m$, then by the law of large numbers the distribution of the average tends to the point mass at $(m, \ldots, m)$. Thus $\mathbf{Z}$ must be concentrated at that point. ∎
There have been attempts to find $\mathbf{Z}$ with finite mean such that the distribution of $\mathbf{M}$ is at least similar to that of $\mathbf{Z}$ Kilcher et al. (2017), where similarity was measured with the Kullback-Leibler divergence between the two distributions. We extend this idea by using a specific distribution that has no finite mean, namely the multidimensional Cauchy distribution.
Let us start with a short review of useful properties of the Cauchy distribution in the one-dimensional case. Let $C \sim C(0, 1)$. Then:
- The probability density function of $C$ is equal to $f(x) = \frac{1}{\pi(1 + x^2)}$.
- All moments of $C$ of order greater than or equal to one are undefined. In particular, the location parameter should not be confused with the mean, which does not exist.
- If $C^{(1)}$ and $C^{(2)}$ are independent and distributed identically to $C$, then $\frac{1}{2}(C^{(1)} + C^{(2)})$ is distributed identically to $C$. Furthermore, if $\lambda \in [0, 1]$, then $(1 - \lambda)C^{(1)} + \lambda C^{(2)}$ is also distributed identically to $C$.
- If $C^{(1)}, \ldots, C^{(n)}$ are independent and distributed identically to $C$, and $\alpha_1, \ldots, \alpha_n \geq 0$ with $\sum_{i=1}^n \alpha_i = 1$, then $\sum_{i=1}^n \alpha_i C^{(i)}$ is distributed identically to $C$.
Those are well-known facts about the Cauchy distribution, and proving them is a common exercise in statistics textbooks. However, to the best of our knowledge, the Cauchy distribution has never been used in the context of generative models. With this in mind, the most important take-away is the following observation:
If $\mathbf{Z}$ is distributed according to the $D$-dimensional Cauchy distribution, then a linear interpolation between any number of latent points does not change the distribution.
Sketch of proof.
Let $\lambda_1, \ldots, \lambda_n \geq 0$ with $\sum_{j=1}^n \lambda_j = 1$, and let $\mathbf{Z}^{(1)}, \ldots, \mathbf{Z}^{(n)}$ be independent and distributed identically to $\mathbf{Z}$. For every coordinate $i$, the variables $Z_i^{(1)}, \ldots, Z_i^{(n)}$ are independent and distributed identically to $Z_i \sim C(0, 1)$, hence $\sum_{j=1}^n \lambda_j Z_i^{(j)}$ is distributed identically to $Z_i$, and thus $\sum_{j=1}^n \lambda_j \mathbf{Z}^{(j)}$ is distributed identically to $\mathbf{Z}$. ∎
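A quick numerical sanity check of this observation (our own sketch, with illustrative sizes): coordinates of points along a linear interpolation between multidimensional Cauchy samples should again have the standard Cauchy quartiles $-1$, $0$, $1$:

```python
import numpy as np

rng = np.random.default_rng(2)
D, n = 3, 200_000

# Endpoints drawn from the D-dimensional (product) Cauchy prior.
z1 = rng.standard_cauchy((n, D))
z2 = rng.standard_cauchy((n, D))

lam = 0.25
mid = (1 - lam) * z1 + lam * z2   # points along the linear interpolation

# Every coordinate of the interpolated points should again be C(0, 1):
# compare empirical quartiles with the exact C(0, 1) quartiles -1, 0, 1.
q = np.quantile(mid[:, 0], [0.25, 0.5, 0.75])
print(q)  # close to [-1, 0, 1]
```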
We observed that the normal and uniform distributions are concentrated around a sphere of radius proportional to $\sqrt{D}$. On the other hand, the multidimensional Cauchy distribution fills the latent space. It should be noted that even for the $D$-dimensional Cauchy distribution the region near the origin of the latent space is empty – similarly to the normal and uniform distributions.
The one-dimensional Cauchy distribution has heavy tails, hence we can expect that one of the coordinates will usually be sufficiently larger (in absolute value) than the others. This could potentially have a negative impact on the training of the GAN model, but we did not observe such difficulties. However, there is an obvious trade-off in using a heavy-tailed distribution, as there will always be a number of samples with very large norms, for which the generator will not be able to create sensible data points. A particular consequence of choosing a Cauchy prior in GANs is that during inference there will always be a number of "failed" generated data points, due to latent vectors being sampled from the tails. Some of those faulty examples are presented in appendix B. Figure 2 shows a set of samples from the DCGAN model trained on the CelebA dataset using the Cauchy distribution and the distribution from Kilcher et al. (2017), and Figure 3 shows linear interpolations on those two models.
3 Interpolations
In this section we discuss current work on interpolations in high-dimensional latent spaces in generative models. We present two methods that perform well with noise priors having finite first moments, i.e. a mean. Again, we define a linear interpolation between two points $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ as a function
$$f_L(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = (1 - \lambda)\mathbf{x}^{(1)} + \lambda \mathbf{x}^{(2)}, \quad \lambda \in [0, 1].$$
In some cases we will use the term interpolation for the image of the function, as opposed to the function itself. We will list four properties an interpolation can have that we believe are important in the context of generative models:
Property 1. The interpolation should be continuous with respect to $\mathbf{x}^{(1)}$, $\mathbf{x}^{(2)}$ and $\lambda$.
Property 2. For every pair of endpoints $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}$ the interpolation should represent the shortest path between them.
Property 3. If two points lie on the interpolation between $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$, then the whole interpolation between those two points should be included in the interpolation between $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$.
Property 4. If $\mathbf{Z}$ defines a distribution on the $D$-dimensional latent space and $\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}$ are independent and distributed identically to $\mathbf{Z}$, then for every $\lambda \in [0, 1]$ the random variable $f(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda)$ should be distributed identically to $\mathbf{Z}$.
The first property enforces that an interpolation should not make any jumps and that interpolations between pairs of similar endpoints should also be similar to each other. The second one is purposefully ambiguous. In the absence of any additional information about the latent space it feels natural to use the Euclidean metric and assume that only the linear interpolation has this property. There has been some work on equipping the latent space with a stochastic Riemannian metric Arvanitidis et al. (2017) that additionally depends on the generator function; with such a metric the shortest path can be defined using geodesics. The third property is closely associated with the second one and codifies common-sense intuition about shortest paths. The fourth property is, in our opinion, the most important desideratum of the linear interpolation, similarly to what Kilcher et al. (2017) stated. To understand these properties better, we will now analyze the following interpolations.
3.1 Linear interpolation
The linear interpolation is defined as
$$f_L(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = (1 - \lambda)\mathbf{x}^{(1)} + \lambda \mathbf{x}^{(2)}.$$
It obviously has properties 1–3. Satisfying property 4 is impossible for the most commonly used probability distributions, as they have finite mean, which was shown in observation 2.2.
3.2 Spherical linear interpolation
The spherical linear interpolation Shoemake (1985); White (2016) is defined as
$$f_{SL}(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = \frac{\sin((1 - \lambda)\Omega)}{\sin \Omega}\,\mathbf{x}^{(1)} + \frac{\sin(\lambda \Omega)}{\sin \Omega}\,\mathbf{x}^{(2)},$$
where $\Omega$ is the angle between vectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$.
This interpolation is continuous nearly everywhere (with the exception of antiparallel endpoint vectors) and satisfies property 3. It satisfies property 2 in the following sense: if vectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ have the same length $r$, then the interpolation corresponds to a geodesic on the sphere of radius $r$. Furthermore:
Property 4 is satisfied if $\mathbf{Z}$ has the uniform distribution on a zero-centered sphere.
Sketch of proof.
Let $\lambda \in [0, 1]$ and let $\mathbf{Z}$ be distributed uniformly on a zero-centered sphere. The distribution of pairs $(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)})$ is the product of two uniform distributions on the sphere, thus invariant to all isometries of the sphere. Then $f_{SL}(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda)$ must also be invariant to all isometries of the sphere, and the only probability distribution having this property is the uniform distribution on the sphere. ∎
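A minimal implementation of the spherical linear interpolation can look as follows (our sketch; the linear fallback for numerically (anti)parallel endpoints is an implementation choice, not part of the formula). For endpoints of equal norm, every interpolated point keeps that norm:

```python
import numpy as np

def slerp(x1, x2, lam):
    """Spherical linear interpolation between vectors x1 and x2."""
    cos_omega = np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                    # (anti)parallel endpoints: fall back
        return (1 - lam) * x1 + lam * x2
    return (np.sin((1 - lam) * omega) / so) * x1 + (np.sin(lam * omega) / so) * x2

# Two random endpoints rescaled to norm 10: the midpoint keeps norm 10.
rng = np.random.default_rng(3)
a = rng.standard_normal(100); a *= 10 / np.linalg.norm(a)
b = rng.standard_normal(100); b *= 10 / np.linalg.norm(b)
print(np.linalg.norm(slerp(a, b, 0.5)))  # 10.0 up to float error
```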
3.3 Normalized interpolation
Introduced in Agustsson et al. (2017), the normalized (originally referred to as distribution matched) interpolation is defined as
$$f_N(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = \frac{(1 - \lambda)\mathbf{x}^{(1)} + \lambda \mathbf{x}^{(2)}}{\sqrt{(1 - \lambda)^2 + \lambda^2}}.$$
It satisfies property 1, but neither property 2 nor 3, which can easily be seen in the extreme case $\mathbf{x}^{(1)} = \mathbf{x}^{(2)}$. As for property 4:
The normalized interpolation satisfies property 4 if $\mathbf{Z} \sim \mathcal{N}(0, I_D)$.
Sketch of proof.
Let $\lambda \in [0, 1]$. The random variables $(1 - \lambda)\mathbf{Z}^{(1)}$ and $\lambda \mathbf{Z}^{(2)}$ are distributed according to $\mathcal{N}(0, (1 - \lambda)^2 I_D)$ and $\mathcal{N}(0, \lambda^2 I_D)$, respectively. Then, using elementary properties of the normal distribution,
$$f_N(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda) = \frac{(1 - \lambda)\mathbf{Z}^{(1)} + \lambda \mathbf{Z}^{(2)}}{\sqrt{(1 - \lambda)^2 + \lambda^2}} \sim \mathcal{N}(0, I_D). \qquad \square$$
If vectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are orthogonal and have equal length, then this interpolation is equal to the spherical linear interpolation from the previous section.
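The normalized interpolation and the observation above can be checked directly (our own sketch with illustrative sizes): for a standard normal prior, points along the interpolation remain $\mathcal{N}(0, I_D)$, so their norms concentrate near $\sqrt{D}$ just like the prior's.

```python
import numpy as np

def normalized_interp(x1, x2, lam):
    """Distribution-matched interpolation of Agustsson et al. (2017)."""
    return ((1 - lam) * x1 + lam * x2) / np.sqrt((1 - lam) ** 2 + lam ** 2)

rng = np.random.default_rng(4)
D, n = 100, 20_000
z1 = rng.standard_normal((n, D))
z2 = rng.standard_normal((n, D))

# Interpolated points stay N(0, I): unit per-coordinate std,
# norms concentrated near sqrt(D) = 10.
mid = normalized_interp(z1, z2, 0.5)
print(mid.std())                              # close to 1
print(np.linalg.norm(mid, axis=1).mean())     # close to 10
```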
3.4 Cauchy-linear interpolation.
Here we present a general way of designing interpolations that satisfy properties 1, 3, and 4. Let:
- $\mathbb{R}^D$ be the $D$-dimensional latent space,
- $\mathbf{Z}$ define the probability distribution on the latent space,
- $\mathbf{C}$ be distributed according to the $D$-dimensional Cauchy distribution on $\mathbb{R}^D$,
- $S$ be a subset of $\mathbb{R}^D$ such that all mass of $\mathbf{Z}$ is concentrated on this set,
- $g \colon S \to \mathbb{R}^D$ be a bijection such that $g(\mathbf{Z})$ is distributed identically to $\mathbf{C}$.
Then for $\lambda \in [0, 1]$ we define the Cauchy-linear interpolation as
$$f_{CL}(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = g^{-1}\big((1 - \lambda)\,g(\mathbf{x}^{(1)}) + \lambda\,g(\mathbf{x}^{(2)})\big).$$
In other words, for endpoints $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}$:
1. Transform $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ using $g$.
2. Linearly interpolate between the transformed points to get $(1 - \lambda)g(\mathbf{x}^{(1)}) + \lambda g(\mathbf{x}^{(2)})$ for all $\lambda$.
3. Transform the result back to the original space using $g^{-1}$.
With some additional assumptions we can define $g$ as $g = F_C^{-1} \circ F_Z$, where $F_C^{-1}$ is the inverse of the cumulative distribution function (CDF) of the Cauchy distribution, and $F_Z$ is the CDF of the original distribution $\mathbf{Z}$. If additionally $\mathbf{Z}$ is distributed identically to the product of $D$ independent one-dimensional distributions, then we can use this formula coordinate-wise.
With the above assumptions the Cauchy-linear interpolation satisfies property 4.
Sketch of proof.
Let $\lambda \in [0, 1]$. First observe that $g(\mathbf{Z}^{(1)})$ and $g(\mathbf{Z}^{(2)})$ are independent and distributed identically to $\mathbf{C}$. Likewise, $(1 - \lambda)g(\mathbf{Z}^{(1)}) + \lambda g(\mathbf{Z}^{(2)})$ is distributed identically to $\mathbf{C}$. By the assumption on $g$, $f_{CL}(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda) = g^{-1}\big((1 - \lambda)g(\mathbf{Z}^{(1)}) + \lambda g(\mathbf{Z}^{(2)})\big)$ is distributed identically to $\mathbf{Z}$. ∎
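For a standard normal prior, the coordinate-wise map $g = F_C^{-1} \circ F_N$ and the resulting Cauchy-linear interpolation can be sketched as follows (our code, not the paper's; `statistics.NormalDist` from the Python standard library supplies the normal CDF and quantile function):

```python
import math
import numpy as np
from statistics import NormalDist

_nd = NormalDist()  # standard normal, provides F_N and F_N^{-1}

def g(x):
    """g = F_C^{-1} o F_N applied coordinate-wise (standard normal prior)."""
    p = np.vectorize(_nd.cdf)(x)          # normal CDF, values in (0, 1)
    return np.tan(math.pi * (p - 0.5))    # inverse Cauchy CDF

def g_inv(c):
    """g^{-1} = F_N^{-1} o F_C."""
    p = np.arctan(c) / math.pi + 0.5      # Cauchy CDF
    return np.vectorize(_nd.inv_cdf)(p)

def cauchy_linear(x1, x2, lam):
    return g_inv((1 - lam) * g(x1) + lam * g(x2))

# Points along the interpolation should again be standard normal.
rng = np.random.default_rng(5)
z1 = rng.standard_normal(20_000)
z2 = rng.standard_normal(20_000)
mid = cauchy_linear(z1, z2, 0.5)
print(mid.mean(), mid.std())  # close to the prior's 0 and 1
```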
3.5 Spherical Cauchy-linear interpolation.
We might want to enforce the interpolation to have some other desired properties, for example to behave exactly like the spherical linear interpolation whenever the endpoints have equal norms. For that purpose we require additional assumptions. Let:
- $\mathbf{Z}$ be distributed isotropically,
- $C$ be distributed according to the one-dimensional Cauchy distribution,
- $h$ be a bijection such that $h(\lVert \mathbf{Z} \rVert)$ is distributed identically to $C$.
Then we can modify the spherical linear interpolation formula to define what we call the spherical Cauchy-linear interpolation:
$$f_{SCL}(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \lambda) = h^{-1}\big((1 - \lambda)h(\lVert \mathbf{x}^{(1)} \rVert) + \lambda h(\lVert \mathbf{x}^{(2)} \rVert)\big)\left(\frac{\sin((1 - \lambda)\Omega)}{\sin \Omega}\,\frac{\mathbf{x}^{(1)}}{\lVert \mathbf{x}^{(1)} \rVert} + \frac{\sin(\lambda \Omega)}{\sin \Omega}\,\frac{\mathbf{x}^{(2)}}{\lVert \mathbf{x}^{(2)} \rVert}\right),$$
where $\Omega$ is the angle between vectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$. In other words:
1. Interpolate the directions of the latent vectors using spherical linear interpolation.
2. Interpolate the norms using the Cauchy-linear interpolation from the previous section.
Again, with some additional assumptions we can define $h$ as $h = F_C^{-1} \circ F_{\lVert \mathbf{Z} \rVert}$. For example, let $\mathbf{Z}$ have the $D$-dimensional normal distribution with zero mean and identity covariance matrix. Then $\lVert \mathbf{Z} \rVert^2$ has the chi-squared distribution with $D$ degrees of freedom, and $\lVert \mathbf{Z} \rVert$ has the chi distribution with $D$ degrees of freedom. Thus we set $h = F_C^{-1} \circ F_{\chi_D}$, with $F_{\chi_D}$ being the CDF of the chi distribution with $D$ degrees of freedom.
With the assumptions as above, the spherical Cauchy-linear interpolation satisfies property 4.
Sketch of proof.
We will use the fact that two isotropic probability distributions are equal if the distributions of their Euclidean norms are equal. The following holds:
1. All of the following random variables are independent: $\lVert \mathbf{Z}^{(1)} \rVert$, $\lVert \mathbf{Z}^{(2)} \rVert$, $\mathbf{Z}^{(1)}/\lVert \mathbf{Z}^{(1)} \rVert$, $\mathbf{Z}^{(2)}/\lVert \mathbf{Z}^{(2)} \rVert$.
2. $\lVert \mathbf{Z}^{(1)} \rVert$ and $\lVert \mathbf{Z}^{(2)} \rVert$ are both distributed identically to $\lVert \mathbf{Z} \rVert$.
3. $\mathbf{Z}^{(1)}/\lVert \mathbf{Z}^{(1)} \rVert$ and $\mathbf{Z}^{(2)}/\lVert \mathbf{Z}^{(2)} \rVert$ are both distributed uniformly on the unit sphere.
Let $\lambda \in [0, 1]$. Note that the direction
$$\frac{\sin((1 - \lambda)\Omega)}{\sin \Omega}\,\frac{\mathbf{Z}^{(1)}}{\lVert \mathbf{Z}^{(1)} \rVert} + \frac{\sin(\lambda \Omega)}{\sin \Omega}\,\frac{\mathbf{Z}^{(2)}}{\lVert \mathbf{Z}^{(2)} \rVert}$$
is uniformly distributed on the unit sphere, which is a property of the spherical linear interpolation. The norm of $f_{SCL}(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda)$ is distributed according to $h^{-1}\big((1 - \lambda)h(\lVert \mathbf{Z}^{(1)} \rVert) + \lambda h(\lVert \mathbf{Z}^{(2)} \rVert)\big)$, which is independent of the direction by (1). Thus we have shown that $f_{SCL}(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda)$ is isotropic.
For the equality of the norm distributions we use a property of the Cauchy-linear interpolation: $h^{-1}\big((1 - \lambda)h(\lVert \mathbf{Z}^{(1)} \rVert) + \lambda h(\lVert \mathbf{Z}^{(2)} \rVert)\big)$ is distributed identically to $\lVert \mathbf{Z} \rVert$. Thus the norm of $f_{SCL}(\mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \lambda)$ is distributed identically to $\lVert \mathbf{Z} \rVert$. ∎
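A sketch of the spherical Cauchy-linear interpolation for a standard normal prior follows (our code, not the paper's). As an assumption for illustration, instead of the exact chi CDF we substitute its normal approximation $\mathcal{N}(\sqrt{D}, 1/2)$ from Section 2, which is only accurate for large $D$:

```python
import math
import numpy as np
from statistics import NormalDist

D = 100
# Normal approximation of ||Z|| for Z ~ N(0, I_D) (see Section 2),
# used here in place of the exact chi CDF -- an approximation.
_norm_of_z = NormalDist(math.sqrt(D), math.sqrt(0.5))

def h(r):
    return math.tan(math.pi * (_norm_of_z.cdf(r) - 0.5))

def h_inv(c):
    return _norm_of_z.inv_cdf(math.atan(c) / math.pi + 0.5)

def slerp_unit(u1, u2, lam):
    omega = math.acos(max(-1.0, min(1.0, float(np.dot(u1, u2)))))
    so = math.sin(omega)
    if so < 1e-8:                     # (anti)parallel directions: fall back
        return (1 - lam) * u1 + lam * u2
    return (math.sin((1 - lam) * omega) / so) * u1 \
         + (math.sin(lam * omega) / so) * u2

def spherical_cauchy_linear(x1, x2, lam):
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    direction = slerp_unit(x1 / n1, x2 / n2, lam)     # slerp the directions
    radius = h_inv((1 - lam) * h(n1) + lam * h(n2))   # Cauchy-linear on norms
    return radius * direction

# Norms of interpolated points should match the prior's norm distribution.
rng = np.random.default_rng(6)
norms = [np.linalg.norm(spherical_cauchy_linear(
    rng.standard_normal(D), rng.standard_normal(D), 0.5)) for _ in range(2000)]
print(np.mean(norms))  # close to sqrt(D) = 10
```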
Figure 4 shows a comparison of the Cauchy-linear and spherical Cauchy-linear interpolations on the 2D plane for data points sampled from different distributions. Figure 5 shows the smoothness of the Cauchy-linear interpolation and a comparison between all the aforementioned interpolations. We also compare the data samples decoded from the interpolations by the DCGAN model trained on the CelebA dataset; the results are shown in Figure 6.
We briefly list the conclusions of this chapter. Firstly, the linear interpolation is the best choice if one does not care about the fourth property, i.e. interpolation samples being distributed identically to the endpoints. Secondly, for every continuous distribution satisfying some additional assumptions we can define an interpolation with a consistent distribution between endpoints and mid-samples, which however will not satisfy property 2, i.e. it will not be the shortest path in Euclidean space. Lastly, there exist distributions for which the linear interpolation satisfies the fourth property, but those distributions cannot have finite mean.
To combine the conclusions of the last two chapters: in our opinion there is a clear direction in which one could search for prior distributions for generative models, namely distributions for which the linear interpolation satisfies all four properties listed above. On the other hand, we have shown in this chapter that if one would rather stick to the more popular prior distributions, it is fairly simple to define a non-linear interpolation with consistent distributions between endpoints and midpoints.
4 Filling the Void
In this section we investigate the claim that in close proximity to the origin of the latent space generative models will produce unrealistic or faulty data Kilcher et al. (2017). We tried different experimental settings and were largely unsuccessful in replicating this phenomenon. The results of our experiments are summarized in Figure 7. Even with higher latent space dimensionality, the DCGAN model trained on the CelebA dataset was able to generate face-like images, although with a high amount of noise. We investigated this result further and empirically concluded that the effect of filling the origin of the latent space emerges during the late epochs of training. Figure 8 shows linear interpolations through the origin of the latent space throughout the training process.
Data samples generated from latent points located strictly inside the high-probability-mass sphere may not be distributed identically to the samples used in training, but judging from the decoded results they seem to lie on the data manifold. On the other hand, we observed that data generated using latent vectors with large norms, i.e. far outside the sphere, are unrealistic. This might be due to the architecture, specifically the exclusive use of the ReLU activation function. Because of that, input vectors with large norms result in abnormally high activations just before the final saturating nonlinearity (usually the tanh or sigmoid function), which in turn makes the decoded images highly color-saturated. It seems unsurprising that the exact value of the largest sensible norm of latent samples is related to the norms of latent vectors seen during training.
The only explanation we see for the model being able to generate sensible data from out-of-distribution samples is a very strong model prior, stemming from both the architecture and the training algorithm. We decided to empirically test the strength of this prior in the experiments described below.
We trained a DCGAN model on the CelebA dataset using a set of different noise distributions, all of which should suffer from the aforementioned empty region at the origin of the latent space. Afterwards, using those models, we generated images decoded from out-of-distribution samples. We would not expect to generate sensible images, as those latent samples should never have been seen during training. We visualize samples decoded from the inside of the high-probability-mass sphere and linear interpolations traversing through it. We test a few different prior distributions: the normal distribution, the uniform distribution on a hypercube, the uniform distribution on a sphere, and a discrete uniform distribution. Experiment results are shown in Figure 9, with more in appendix D.
It might be that the origin of the latent space is surrounded by the prior distribution's mass, which could explain the ability to generate sensible data from it. Thus we decided to test whether the model still works if we train it on an explicitly sparse distribution. We designed a pathological prior distribution in which, after sampling from a given distribution, e.g. the standard normal, we randomly draw a fixed number of coordinates and set them all to zero. Figure 11 shows samples from such a distribution in the 3-dimensional case. Again, we trained the DCGAN model and generated images using latent samples from the dense distribution, multiplying them beforehand by a constant to keep the norms consistent with those seen in training. Results are shown in Figure 10, with more in the appendix.
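The sparse pathological prior and the rescaled dense test distribution can be sketched as follows (our code; the number of zeroed coordinates $k = 50$ and the rescaling factor $\sqrt{(D - k)/D}$ are illustrative assumptions, since the exact values are not fixed here):

```python
import numpy as np

rng = np.random.default_rng(7)
D, k = 100, 50   # k zeroed coordinates per sample; an illustrative choice

def sparse_prior(n):
    """Sample the standard normal prior, then zero out k randomly chosen
    coordinates of every sample."""
    z = rng.standard_normal((n, D))
    for row in z:
        row[rng.choice(D, size=k, replace=False)] = 0.0
    return z

def dense_test(n):
    """Dense test samples, rescaled so their norms match the sparse training
    norms (a sparse sample has D - k nonzero coordinates, hence norm near
    sqrt(D - k))."""
    return rng.standard_normal((n, D)) * np.sqrt((D - k) / D)

print(np.linalg.norm(sparse_prior(1000), axis=1).mean())  # near sqrt(50)
print(np.linalg.norm(dense_test(1000), axis=1).mean())    # near sqrt(50)
```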
Lastly, we wanted to check whether the model is capable of generating sensible data from regions of the latent space that are completely orthogonal to those seen during training. We created another pathological prior distribution in which, after sampling from a given distribution (e.g. the standard normal), we set the last coordinates of each point to zero. As before, we trained the DCGAN model and then generated images using samples from the original distribution, this time with the first coordinates set to zero. Results are shown in Figure 12, with more in the appendix.
To conclude our experiments, we briefly remark on the results. In the first experiment, images generated from the test distribution are clearly meaningful despite being decoded from samples lying in low-probability-mass regions. One thing to note is that they are clearly less diverse than those from the training distribution.
Decoded images from our second experiment are, in our opinion, similar enough between the train and test distributions to conclude that the DCGAN model does not suffer from training on a sparse latent distribution and is able to generalize without any additional mechanisms.
Our last experiment shows that while the DCGAN is still able to generate sensible images from regions orthogonal to the space seen during training, those regions still affect the generative power and may lead to unrealistic data if they increase the activations in the network enough.
We observed that problems with a hollow latent space and linear interpolations might be caused by stopping the training early or using a weak model architecture. This leads us to the conclusion that one needs to be very careful when comparing the effectiveness of different latent probability distributions and interpolation methods.
We investigated the properties of multidimensional probability distributions in the context of latent noise distributions of generative models. In particular, we looked for distribution–interpolation pairs where the distributions of interpolation endpoints and midpoints are identical.
We have shown that using the $D$-dimensional Cauchy distribution as the latent probability distribution makes linear interpolations between any number of latent points satisfy that consistency property. We have also shown that for popular priors with finite mean it is impossible for linear interpolations to satisfy the above property. We argue that one can fairly easily find a non-linear interpolation that satisfies this property, which makes the search for such interpolations less interesting. These results are formal and hold for every generative model with a fixed distribution on the latent space. Although, as we have shown, the Cauchy distribution comes with a few useful theoretical properties, it is still perfectly fine to use a normal or uniform distribution, as long as the model is powerful enough.
We also observed empirically that DCGANs, if trained long enough, are capable of generating sensible data from out-of-distribution latent samples. We tested several pathological choices of latent priors to give a glimpse of what the model is capable of. At this moment we are unable to explain this phenomenon and point to it as a very interesting direction for future work.
-  Gaussian distributions are soap bubbles. http://www.inference.vc/high-dimensional-gaussian-distributions-are-soap-bubble/. Accessed: 2018-05-22.
- Agustsson et al.  Eirikur Agustsson, Alexander Sage, Radu Timofte, and Luc Van Gool. Optimal transport maps for distribution preserving operations on latent spaces of generative models. arXiv preprint arXiv:1711.01970, 2017.
- Arvanitidis et al.  Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent space oddity: on the curvature of deep generative models. arXiv preprint arXiv:1710.11379, 2017.
- Donahue et al.  Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
- Dumoulin et al.  Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
- Goodfellow et al.  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
- Kilcher et al.  Yannic Kilcher, Aurelien Lucchi, and Thomas Hofmann. Semantic interpolation in implicit models. arXiv preprint arXiv:1710.11381, 2017.
- Kingma and Welling  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Liu et al.  Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
- Mikolov et al.  Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
- Radford et al.  Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Shoemake  Ken Shoemake. Animating rotation with quaternion curves. In ACM SIGGRAPH computer graphics, volume 19, pages 245–254. ACM, 1985.
- White  Tom White. Sampling generative networks: Notes on a few effective techniques. arXiv preprint arXiv:1609.04468, 2016.
Appendix A Experimental setup
All experiments are run using the DCGAN model. The generator network consists of a linear layer with 8192 neurons, followed by four transposed convolution layers with strides of 2 and numbers of filters, in order of layers, of 256, 128, 64, and 3. Except for the output layer, where the tanh activation is used, all layers use the ReLU activation. The discriminator's architecture mirrors that of the generator, with the single exception of using leaky ReLU instead of vanilla ReLU for all but the last layer. No batch normalization is used in either network. The Adam optimizer is used, and a batch size of 64 is used throughout all experiments. If not explicitly stated otherwise, the latent space dimension is 100 and the noise is sampled from a multidimensional normal distribution. For the CelebA dataset we resize the input images. The code to reproduce all our experiments is available at: coming soon!