Deep generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; Rezende et al., 2014) model the data distribution of observations x through corresponding latent variables z and a stochastic generator function g as x = g(z), with z drawn from a prior p(z).
Using reasonably low-dimensional latent variables and highly flexible generator functions allows these models to efficiently represent a useful distribution over the underlying data manifold. These approaches have recently attracted a lot of attention, as deep neural networks are suitable generators which lead to the impressive performance of current variational autoencoders (VAEs) (Kingma & Welling, 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014).
Consider the left panel of Fig. 1, which shows the latent representations of digits 0 and 1 from MNIST under a VAE. Three latent points are highlighted: one point (A) far away from the class boundary, and two points (B, C) near the boundary, but on opposite sides. Points B and C near the boundary seem to be very close to each other, while the third is far away from the others. Intuitively, we would hope that points from the same class (A and B) are closer to each other than to members of other classes (C), but this is seemingly not the case. In this paper, we argue that this conclusion is incorrect and due only to a misinterpretation of the latent space; in fact, points A and B are closer to each other than to C in the latent representation. Correcting this misinterpretation not only improves our understanding of generative models, but also improves interpolations, clusterings, latent probability distributions, sampling algorithms, interpretability and more.
In general, latent space distances lack physical units (making them difficult to interpret) and are sensitive to the specifics of the underlying neural nets. It is therefore more robust to consider infinitesimal distances along the data manifold in the input space. Let z be a latent point and let Δz₁ and Δz₂ be infinitesimals; then we can compute the squared distance

‖g(z + Δz₁) − g(z + Δz₂)‖² = (Δz₁ − Δz₂)ᵀ (JᵀJ) (Δz₁ − Δz₂),    J = ∂g/∂z,
using Taylor’s Theorem. This implies that the natural distance function in the latent space changes locally, as it is governed by the local Jacobian. Mathematically, the latent space should therefore not be seen as a linear Euclidean space, but rather as a curved space. The right panel of Fig. 1 provides an example of the implications of this curvature. The figure shows synthetic data from two classes, and the corresponding latent representation of the data. The background color of the latent space corresponds to √det(JᵀJ), which can be seen as a measure of the local distortion of the latent space. We interpolate two points from the same class by walking along the connecting straight line (red); in the right panel, we show points along this straight line which have been mapped by the generator to the input space. Since the generator defines a surface in the input space, we can alternatively seek the shortest curve along this surface that connects the two points; this is perhaps the most natural choice of interpolant. We show this shortest curve in green. From the center panel it is evident that the natural interpolant is rather different from the straight line. This is due to the distortion of the latent space, which is the topic of the present paper.
In Sec. 2 we briefly present the VAE as a representative instance of generative models. In Sec. 3 we connect generative models with their underlying geometry, and in Sec. 4 we argue that a stochastic Riemannian metric is naturally induced in the latent space by the generator. This metric enables us to compute length-minimizing curves and corresponding distances. This analysis, however, reveals that the traditional variance approximations in VAEs are rather poor and misleading; we propose a solution in Sec. 4.1. In Sec. 5 we demonstrate how the resulting view of the latent space improves latent interpolations, gives rise to more meaningful latent distributions, clusterings and more. We discuss related work in Sec. 6 and conclude the paper with an outlook in Sec. 7.
2 The Variational Autoencoder Acting as the Generator
The variational autoencoder (VAE) proposed by Kingma & Welling (2014) is a simple yet powerful generative model which consists of two parts: (1) an inference network or recognition network (encoder) learns the latent representations (codes) z of the data x in the input space X; and (2) the generator (decoder) learns how to reconstruct the data from these codes in the latent space Z.
Formally, a prior distribution p(z) = N(0, I_d) is defined for the latent representations z, and there exists a mapping function μ : Z → X that generates a surface in X. Moreover, we assume that another function σ : Z → R₊ᴰ captures the error (or uncertainty) between the actual data observation x and its reconstruction as x = μ(z) + σ(z) ⊙ ε, where ε ∼ N(0, I_D) and ⊙ is the Hadamard (element-wise) product. Then the likelihood is naturally defined as p(x | z) = N(x | μ(z), I_D σ²(z)). The flexible functions μ and σ are usually deep neural networks with parameters θ.
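The stochastic generator above can be sketched in a few lines. The following is a minimal toy example, where the functions `mu` and `sigma` are hypothetical smooth stand-ins for trained decoder networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the decoder networks: a smooth mean function
# mu: R -> R^2 and an element-wise standard deviation sigma: R -> R^2_+.
def mu(z):
    return np.array([np.cos(z), np.sin(z)])

def sigma(z):
    return np.array([0.1, 0.1 + 0.05 * z**2])

def generate(z):
    # x = mu(z) + sigma(z) * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(2)
    return mu(z) + sigma(z) * eps

x = generate(0.5)  # one stochastic observation for the latent code z = 0.5
```

Repeated calls with the same code z scatter around mu(z) with spread sigma(z), which is exactly the likelihood p(x | z) above.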
However, the corresponding posterior distribution p(z | x) is unknown, as the marginal likelihood p(x) is intractable. Hence, the posterior is approximated using a variational distribution q(z | x), whose mean and variance functions are again deep neural networks with parameters φ. Since the generator (decoder) is a composition of linear maps and activation functions, its smoothness depends solely on the chosen activation functions.
The optimal parameters θ and φ are found by maximizing the evidence lower bound (ELBO) of the marginal likelihood as

log p(x) ≥ E_{q(z|x)}[log p(x | z)] − KL(q(z | x) ‖ p(z)),    (3)
where the bound follows from Jensen’s inequality. The optimization is based on variations of gradient descent using the reparametrization trick (Kingma & Welling, 2014; Rezende et al., 2014). Further improvements have been proposed that provide more flexible posterior approximations (Rezende & Mohamed, 2015; Kingma et al., 2016) or tighter lower bounds (Burda et al., 2016). In this paper, we consider the standard VAE for simplicity. The optimization problem in Eq. 3 is difficult, since poor reconstructions by the mean function can be explained away by increasing the corresponding variance. A common trick, which we also follow, is to optimize while keeping the variance constant, and only finally optimize for the variance.
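A one-sample ELBO estimate with the reparametrization trick can be made concrete as follows. This is a sketch for a single one-dimensional datapoint with fixed, hypothetical encoder/decoder outputs; in practice both terms would be averaged over a mini-batch and differentiated with respect to the network parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo(x, enc_mu, enc_logvar, dec_std=1.0):
    # Reparametrization trick: z = enc_mu + exp(enc_logvar / 2) * eps
    eps = rng.standard_normal()
    z = enc_mu + np.exp(0.5 * enc_logvar) * eps
    dec_mu = z  # identity decoder mean, purely illustrative
    # Gaussian log-likelihood log p(x | z)
    log_lik = -0.5 * (np.log(2 * np.pi * dec_std**2)
                      + (x - dec_mu)**2 / dec_std**2)
    # KL(q(z|x) || N(0, 1)) in closed form for Gaussians
    kl = 0.5 * (np.exp(enc_logvar) + enc_mu**2 - 1.0 - enc_logvar)
    return log_lik - kl

val = elbo(x=0.3, enc_mu=0.2, enc_logvar=-1.0)
```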
3 Surfaces as the Foundation of Generative Models
Mathematically, a deterministic generative model can be seen as a surface model (Gauss, 1827) if the generator is sufficiently smooth. Here, we briefly review the basic concepts of surfaces, as they form the mathematical foundation of this work.
Intuitively, a surface is a smoothly-connected set of points embedded in X. When we want to make computations on a surface, it is often convenient to parametrize the surface by a low-dimensional (latent) variable z along with an appropriate function f : Z → X. We let d denote the intrinsic dimensionality of the surface, while D is the dimensionality of the input space. If we consider a smooth (latent) curve γ : [0, 1] → Z, then it has length ∫₀¹ ‖γ̇(t)‖ dt, where γ̇(t) denotes the velocity of the curve. In practice, the low-dimensional parametrization often lacks a principled meaningful metric, so we measure lengths in input space by mapping the curve through f,

Length[f(γ)] = ∫₀¹ ‖(d/dt) f(γ(t))‖ dt = ∫₀¹ ‖J_{γ(t)} γ̇(t)‖ dt,
where the last step follows from Taylor’s Theorem. This implies that the length of a curve along the surface can be computed directly in the latent space using the (locally defined) norm
‖γ̇(t)‖² = γ̇(t)ᵀ M(γ(t)) γ̇(t),    M(z) = J_zᵀ J_z.    (5)

Here, M(z) is a symmetric positive definite matrix, with J_z the Jacobian of f at z, which acts akin to a local Mahalanobis distance measure. This gives rise to the definition of a Riemannian metric, which represents a smoothly changing inner product structure.
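Numerically, the metric can be formed directly from a Jacobian. Below is a small sketch for a hypothetical smooth embedding f (a toy paraboloid, not a trained generator), with the Jacobian approximated by central differences:

```python
import numpy as np

# Toy embedding f: R^2 -> R^3 (a paraboloid surface).
def f(z):
    return np.array([z[0], z[1], z[0]**2 + z[1]**2])

def jacobian(fun, z, h=1e-5):
    # Central-difference Jacobian, one column per latent dimension.
    z = np.asarray(z, dtype=float)
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = h
        cols.append((fun(z + dz) - fun(z - dz)) / (2 * h))
    return np.stack(cols, axis=1)  # shape D x d

def metric(z):
    J = jacobian(f, z)
    return J.T @ J  # d x d, symmetric positive definite

M = metric([1.0, 0.0])
```

For this surface the metric at (1, 0) is analytically [[5, 0], [0, 1]], so latent steps along the first axis cost more input-space distance than steps along the second.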
A Riemannian metric M is a smooth function that assigns a symmetric positive definite matrix to any point z in Z.
It should be clear that if the generator function f is sufficiently smooth, then M in Eq. 5 is a Riemannian metric.
When defining distances across a given surface, it is meaningful to seek the shortest curve connecting two points; a distance can then be defined as the length of this curve. The shortest curve connecting points z₀ and z₁ is by (trivial) definition

γ̂ = argmin_γ Length[f(γ)],    γ(0) = z₀, γ(1) = z₁.
A classic result of differential geometry (do Carmo, 1992)
is that solutions to this optimization problem satisfy the following system of ordinary differential equations (ODEs)
γ̈ = −½ M(γ)⁻¹ [ 2 (γ̇ᵀ ⊗ I_d) (∂vec[M(γ)]/∂γ) γ̇ − (∂vec[M(γ)]/∂γ)ᵀ (γ̇ ⊗ γ̇) ],    (7)

where vec(·) stacks the columns of a matrix into a vector and ⊗ is the Kronecker product. For completeness, we provide a derivation of this result in Appendix A. Shortest curves can then be computed by solving the ODEs numerically; our implementation uses bvp5c from Matlab.
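As an alternative to solving the boundary value problem (the paper uses Matlab's bvp5c; scipy's solve_bvp is an analogue), shortest curves can also be approximated by discretizing the latent curve and minimizing the sum of squared segment lengths in the input space. This sketch uses a toy parabolic-cylinder embedding (an assumption, not the paper's generator), chosen because the true geodesic between the endpoints stays on the line where the second latent coordinate is zero:

```python
import numpy as np
from scipy.optimize import minimize

# Toy embedding: a parabolic cylinder, curved along the first latent
# dimension only (a stand-in for a trained generator).
def f(z):
    return np.array([z[0], z[1], z[0]**2])

z_start, z_end = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
K = 10  # number of curve segments

def energy(flat):
    # Discrete curve energy: sum of squared segment lengths in input space.
    pts = np.vstack([z_start, flat.reshape(K - 1, 2), z_end])
    diffs = np.diff(np.array([f(p) for p in pts]), axis=0)
    return np.sum(diffs**2)

# Initial guess: a curve bulging away from the true geodesic.
t = np.linspace(0.0, 1.0, K + 1)[1:-1]
init = np.column_stack([2.0 * t - 1.0, np.sin(np.pi * t)])

res = minimize(energy, init.ravel(), method="L-BFGS-B")
geodesic = res.x.reshape(K - 1, 2)
```

Minimizing the discrete energy with fixed endpoints drives the bulging initial curve back onto the straight line in the flat direction, which is the shortest curve on this surface.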
4 The Geometry of Stochastic Generators
In the previous section, we considered deterministic generators to provide the relevant background. We now extend these results to the stochastic case; in particular we consider generators of the form

g(z) = μ(z) + σ(z) ⊙ ε,    ε ∼ N(0, I_D).    (8)

This is the generator driving VAEs and related models. For our purposes, we will call μ the mean function and σ² the variance function.
Following the discussion from the previous section, it is natural to consider the Riemannian metric in the latent space. Since the generator is now stochastic, this metric also becomes stochastic, which complicates analysis. The following results, however, simplify matters.
Theorem 1. If the stochastic generator in Eq. 8 has mean and variance functions that are at least twice differentiable, then the expected metric equals

M̄(z) = E_ε[M(z)] = J_μ(z)ᵀ J_μ(z) + J_σ(z)ᵀ J_σ(z),

where J_μ and J_σ are the Jacobian matrices of μ and σ.
Proof. See Appendix B.
Theorem 2 (Due to Tosi et al. (2014)).
The variance of the metric vanishes as the data dimension goes to infinity, i.e. Var[M(z)] → 0 for D → ∞.
Theorem 2 suggests that the (deterministic) expected metric is a good approximation to the underlying stochastic metric when the data dimension is large. We make this approximation, which allows us to apply the theory of deterministic generators.
This expected metric has a particularly appealing form, where the two terms capture the distortion of the mean and the variance functions respectively. In particular, the variance term will be large in regions of the latent space where the generator has large variance. This implies that induced distances will be large in regions of the latent space where the generator is highly uncertain, such that shortest paths will tend to avoid these regions. These paths then tend to follow the data in the latent space, cf. Fig. 3. It is worth stressing that no learning is needed to compute this metric: it consists only of terms that can be derived directly from the generator.
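The expected metric of Theorem 1 is straightforward to evaluate once the two Jacobians are available. A sketch with hypothetical mean and standard-deviation functions (central-difference Jacobians; a trained VAE would supply these networks):

```python
import numpy as np

# Hypothetical mean and standard-deviation functions of a toy
# stochastic generator R^2 -> R^3.
def mu(z):
    return np.array([z[0], z[1], z[0] * z[1]])

def sigma(z):
    return np.array([0.1, 0.1, 0.1 + z[0]**2])

def jacobian(fun, z, h=1e-5):
    # Central-difference Jacobian (D x d).
    z = np.asarray(z, dtype=float)
    return np.stack([(fun(z + h * e) - fun(z - h * e)) / (2 * h)
                     for e in np.eye(z.size)], axis=1)

def expected_metric(z):
    # M(z) = J_mu^T J_mu + J_sigma^T J_sigma
    Jm, Js = jacobian(mu, z), jacobian(sigma, z)
    return Jm.T @ Jm + Js.T @ Js

M = expected_metric([1.0, 2.0])
```

At z = (1, 2) the mean term contributes [[5, 2], [2, 2]] and the variance term [[4, 0], [0, 0]]; the variance term inflates distances along the direction in which the uncertainty grows.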
4.1 Ensuring Proper Geometry Through Meaningful Variance Functions
Theorem 1 informs us about how the geometry of the generative model depends on both the mean and the variance of the generator. Assuming successful training of the generator, we can expect to have good estimates of the geometry in regions near the data. But what happens in regions further away from the data? In general, the mean function cannot be expected to give useful extrapolations to such regions, so it is reasonable to require that the generator has high variance in regions that are not near the data. In practice, the neural net used to represent the variance function is only trained in regions where data is available, which implies that variance estimates are extrapolated to regions with no data. As neural nets tend to extrapolate poorly, practical variance estimates tend to be arbitrarily poor in regions without data.
Figure 4 illustrates this problem by showing the data, the standard deviation under the standard variance network, and the proposed solution. The first two panels show the data and its corresponding latent representations (here both input and latent dimensions are 2 to ease illustration). The third panel shows the variance function under a standard architecture: a deep multilayer perceptron with a softplus nonlinearity for the output layer. It is evident that variance estimates in regions without data are not representative of either the uncertainty or the error of the generative process; sometimes the variance is high, sometimes it is low. From a probabilistic modeling point of view, this is disheartening. An informal survey of publicly available VAE implementations also reveals that it is common to enforce a constant unit variance everywhere, which is further disheartening.
For our purposes, we need well-behaved variance functions to ensure a well-behaved geometry, but reasonable variance estimates are of general use. As a general strategy, we propose to model the inverse variance with a network that extrapolates towards zero; this at least ensures that variances are large in regions without data. Specifically, we model the precision as β(z) = 1/σ²(z), where all operations are element-wise, and represent this precision with a radial basis function (RBF) neural network (Que & Belkin, 2016). Formally, this is written

β(z) = W v(z) + ζ,    v_k(z) = exp(−λ_k ‖z − c_k‖²),    (11)
where W, c and λ are the parameters, W are the positive weights of the network (positivity ensures a positive precision), c_k and λ_k are the centers and bandwidths of the radial basis functions, and ζ is a vector of positive constants that prevents division by zero. It is easy to see that with this approach the variance of the generator increases with the distance to the centers. The right-most panel of Fig. 4 shows an estimated variance function, which indeed has the desired property that the variance is large outside the data support. Further, note the increased variance between the two clusters, which captures that even interpolating between clusters comes with a level of uncertainty. In Appendix C we also demonstrate that this simple variance model improves the marginal likelihood on held-out data.
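A minimal sketch of this precision model follows; the centers, weights and bandwidths below are hypothetical stand-ins rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical RBF parameters (stand-ins, not trained values).
centers = rng.standard_normal((4, 2))     # c_k, e.g. from k-means
lam = np.ones(4)                          # bandwidths lambda_k
W = np.abs(rng.standard_normal((3, 4)))   # positive weights (D x K)
zeta = 1e-3 * np.ones(3)                  # prevents division by zero

def precision(z):
    # beta(z) = W v(z) + zeta, with v_k(z) = exp(-lam_k ||z - c_k||^2)
    v = np.exp(-lam * np.sum((z - centers)**2, axis=1))
    return W @ v + zeta

def variance(z):
    return 1.0 / precision(z)

near = variance(centers[0])              # on top of a center
far = variance(np.array([50.0, 50.0]))   # far from all centers
```

Because the basis functions decay away from the centers, the precision falls back to ζ far from the data, so the variance saturates at 1/ζ there.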
Training the variance network amounts to fitting the RBF network. Assuming we have already trained the inference network (Sec. 2), we can encode the training data and use k-means to estimate the RBF centers. An estimate for the bandwidth of each kernel can then be computed as

λ_k = ( (a / |C_k|) Σ_{z_j ∈ C_k} ‖z_j − c_k‖₂ )⁻²,

where C_k is the set of codes assigned to center c_k,
and the hyper-parameter a controls the curvature of the Riemannian metric, i.e., how fast it changes based on the uncertainty. Since the mean function of the generator is already trained, the weights of the RBF can be found using projected gradient descent to ensure positive weights.
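The fitting of the centers and bandwidths can be sketched as follows. The toy "latent codes", the hand-rolled k-means (a library implementation would normally be used), and the bandwidth rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "latent codes": two well-separated clusters.
codes = np.vstack([rng.normal(-2.0, 0.3, (50, 2)),
                   rng.normal(2.0, 0.3, (50, 2))])

def kmeans(X, k, iters=20):
    # Minimal Lloyd's algorithm with random initial centers.
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None])**2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C, labels

centers, labels = kmeans(codes, 2)

# Bandwidths from per-cluster average distances, with curvature
# hyper-parameter a (assumed form of the rule).
a = 2.0
lam = np.array([
    (a * np.linalg.norm(codes[labels == j] - centers[j], axis=1).mean())**-2
    for j in range(2)
])
```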
One way to visualize the distortion of the latent space relative to the input space is the geometric volume measure √det(M(z)), which captures the volume of an infinitesimal area in the input space. Figure 5 shows this volume measure for both the standard variance function and our proposed RBF model. We see that the proposed model captures the trend of the data, unlike the standard model.
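The volume measure is a one-liner given the metric, sketched here for the same kind of toy embedding (a paraboloid, not a trained generator):

```python
import numpy as np

# Toy embedding f: R^2 -> R^3 (a paraboloid surface).
def f(z):
    return np.array([z[0], z[1], z[0]**2 + z[1]**2])

def jacobian(fun, z, h=1e-5):
    # Central-difference Jacobian (D x d).
    z = np.asarray(z, dtype=float)
    return np.stack([(fun(z + h * e) - fun(z - h * e)) / (2 * h)
                     for e in np.eye(z.size)], axis=1)

def volume_measure(z):
    # sqrt(det M(z)) with M(z) = J^T J
    J = jacobian(f, z)
    return np.sqrt(np.linalg.det(J.T @ J))

v0 = volume_measure([0.0, 0.0])  # flat at the origin
v1 = volume_measure([1.0, 0.0])  # distorted away from it
```

The measure equals 1 at the origin (locally flat) and √5 at (1, 0), i.e. the latent plane is stretched away from the origin.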
5 Empirical Results
We demonstrate the usefulness of the geometric view of the latent space with several experiments. Model and implementation details can be found in Appendix D. In all experiments we first train a VAE and then use the induced Riemannian metric.
5.1 Meaningful Distances
First, we seek to quantify whether the induced Riemannian distance in the latent space is more useful than the usual Euclidean distance. For this we perform basic k-means clustering under the two metrics. We construct 3 sets of MNIST digits, using 1000 random samples for each digit. We train a VAE for each set, subdivide each into 10 sub-sets, and perform k-means clustering under both distances. One example result is shown in Fig. 6. It is evident that, since the latent points roughly follow a unit Gaussian, there is little structure to be discovered by the Euclidean k-means, and consequently it performs poorly. The Riemannian clustering is remarkably accurate. Summary statistics across all subsets are provided in Table 1, which reports the established F-measure for clustering accuracy. Again, the Riemannian metric significantly improves the clustering, which implies that the underlying Riemannian distance is more useful than its Euclidean counterpart.
5.2 Interpolations
Next, we investigate whether the Riemannian metric gives more meaningful interpolations. First, we train a VAE for the digits 0 and 1 from MNIST. The upper left panel of Fig. 7 shows the latent space with the Riemannian measure as background color, together with two interpolations. Images generated along both the Riemannian and the Euclidean interpolants are shown in the bottom of Fig. 7. The Euclidean interpolations exhibit a very abrupt change when transitioning from one class to the other, whereas the Riemannian interpolant gives smoother changes in the generated images. The top-right panel of the figure shows the auto-correlation of images along the interpolants; again the change is very abrupt along the Euclidean interpolant, while the Riemannian one is significantly smoother. We also train a convolutional VAE on frames from a video. Figure 8 shows the corresponding latent space and some sample interpolations. As before, the generated images change more smoothly when we take the Riemannian metric into account.
5.3 Latent Probability Distributions
We have seen strong indications that the Riemannian metric gives a more meaningful view of the latent space, which may also improve probability distributions in the latent space. A relevant candidate distribution is the locally adaptive normal distribution (LAND) (Arvanitidis et al., 2016),
where the distance entering the density is the Riemannian extension of the Mahalanobis distance. We fit a mixture of two LANDs to the MNIST data from Sec. 5.2, alongside a mixture of Euclidean normal distributions. The first column of Fig. 9 shows the density functions of the two mixture models; only the Riemannian model reveals the underlying clusters. We then sample 40 points from each component of these generative models (center column of the figure); we do not follow the common practice of sorting samples by their likelihood, as this hides low-quality samples. We see that the Riemannian model generates high-quality samples, whereas the Euclidean model generates several samples in regions where the generator is not trained and therefore produces blurry images. Finally, the right column of Fig. 9 shows all pairwise distances between the latent points under both the Riemannian and the Euclidean distance. Again, the geometric view clearly reveals the underlying clusters.
5.4 Random Walk on the Data Manifold
Finally, we consider random walks over the data manifold, which is a common tool for exploring latent spaces. To avoid the walk drifting outside the data support, practical implementations artificially restrict the walk to stay inside a hypercube around the data. Here, we consider unrestricted Brownian motion under both the Euclidean and the Riemannian metric. We perform this random walk in the latent space of the convolutional VAE from Sec. 5.2. Figure 10 shows example walks, while Fig. 11 shows generated images. While the Euclidean random walk moves freely, the Riemannian walk stays within the support of the data. This is explained in the left panel of Fig. 10, which shows that the variance term in the Riemannian metric creates a “wall” around the data that the random walk only rarely crosses. These “walls” also force shortest paths to follow the data.
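A simple discretization of metric-aware Brownian motion draws each step with covariance proportional to M(z)⁻¹, so motion slows where the metric is large (we ignore the drift correction of exact Riemannian Brownian motion for simplicity). The conformal toy metric below, which blows up away from the origin, is an assumed stand-in for the variance "wall":

```python
import numpy as np

rng = np.random.default_rng(4)

def M(z):
    # Toy metric: grows away from the origin, mimicking high generator
    # uncertainty outside the data support.
    return (1.0 + 10.0 * float(z @ z)) * np.eye(2)

def brownian_walk(z0, steps=500, dt=1e-2):
    z = np.array(z0, dtype=float)
    path = [z.copy()]
    for _ in range(steps):
        # Step covariance dt * M(z)^{-1}: small steps where M is large.
        cov_chol = np.linalg.cholesky(np.linalg.inv(M(z)))
        z = z + np.sqrt(dt) * cov_chol @ rng.standard_normal(2)
        path.append(z.copy())
    return np.array(path)

path = brownian_walk([0.0, 0.0])
```

Under the Euclidean metric (M = I) the same walk would diffuse freely; here the shrinking step size keeps it near the origin.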
6 Related Work
Generative models. This category of unsupervised learning has attracted a lot of attention, especially due to advances in deep neural networks. We have considered VAEs (Kingma & Welling, 2014; Rezende et al., 2014), but the ideas extend to similar models, including extensions that provide more flexible approximate posteriors (Rezende & Mohamed, 2015; Kingma et al., 2016). GANs (Goodfellow et al., 2014) also fall in this category, as these models have an explicit generator. While the inference network is not a necessary component of the GAN model, it has been shown that incorporating one improves overall performance (Donahue et al., 2017; Dumoulin et al., 2017). The same considerations hold for approaches that transform the latent space through a sequence of bijective functions (Dinh et al., 2017).
Geometry in neural networks.
Bengio et al. (2013) discuss the importance of geometry in neural networks as a tool to understand local generalization. For instance, the Jacobian matrix is a measure of smoothness for a function that interpolates a surface to the given data. This idea appears in Rifai et al. (2011), where the norm of the Jacobian acts as a regularizer for a deterministic autoencoder. Recently, Kumar et al. (2017) used the Jacobian to inject invariances into a classifier.
Like the present paper, Tosi et al. (2014) derive a suitable Riemannian metric in Gaussian process (GP) latent variable models (Lawrence, 2005), but the computational complexity of GPs causes practical concerns. Unlike works that explicitly learn a Riemannian metric (Hauberg et al., 2012; Peltonen et al., 2004), our metric is fully derived from the generator and requires no extra learning once the generator is available.
7 Discussion and Further Extensions
The geometric interpretation of representation learning is that the latent space is a compressed and flattened version of the data manifold. We show that the actual geometry of the data manifold can be more complex than it first appears.
Here we have initiated the study of proper geometries for generative models. We showed that the latent space not only provides a low-dimensional representation of the data manifold, but at the same time can reveal the underlying geometrical structure. We proposed a new variance network for the generator, which provides meaningful uncertainty estimates while regularizing the geometry. This detailed understanding of the geometry provides more relevant distance measures, as demonstrated by the fact that a k-means clustering on these distances is better aligned with the ground-truth label structure than a clustering based on conventional Euclidean distances. We also found that the new distance measure produces smoother interpolations, and that when training Riemannian “LAND” mixture models based on the new geometry, the components align much better with the ground-truth group structure. Finally, inspired by the recent interest in sequence generation by random walks in latent space, we found that geometrically informed random walks stay on the manifold for much longer runs than sequences based on Euclidean random walks.
The presented analysis extends easily to more sophisticated generative models, whose latent spaces will potentially be endowed with even more flexible nonlinear structure. This directly implies particularly interesting geometrical models. An obvious question is: can the geometry of the latent space play a role while we learn the generative model? Either way, we believe that this geometric perspective provides a new way of thinking about and interpreting generative models, while at the same time encouraging the development of new nonlinear models in the representation space.
LKH is supported by Innovation Fund Denmark / the Danish Center for Big Data Analytics Driven Innovation. SH was supported by a research grant (15334) from VILLUM FONDEN. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no 757360). We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
- Arvanitidis et al. (2016) Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. A Locally Adaptive Normal Distribution. In Advances in Neural Information Processing Systems (NIPS), 2016.
- Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, August 2013.
- Burda et al. (2016) Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. In International Conference on Learning Representations (ICLR), 2016.
- Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In International Conference on Learning Representations (ICLR), 2017.
- do Carmo (1992) M.P. do Carmo. Riemannian Geometry. Mathematics (Boston, Mass.). Birkhäuser, 1992.
- Donahue et al. (2017) Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial Feature Learning. In International Conference on Learning Representations (ICLR), 2017.
- Dumoulin et al. (2017) Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially Learned Inference. In International Conference on Learning Representations (ICLR), 2017.
- Gauss (1827) Carl Friedrich Gauss. Disquisitiones generales circa superficies curvas. Commentationes Societatis Regiae Scientiarum Gottingesis Recentiores, VI:99–146, 1827.
- Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
- Hauberg et al. (2012) Søren Hauberg, Oren Freifeld, and Michael J. Black. A Geometric Take on Metric Learning. In Advances in Neural Information Processing Systems (NIPS) 25, pp. 2033–2041, 2012.
- Kingma & Welling (2014) Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
- Kingma et al. (2016) Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems (NIPS), 2016.
- Kumar et al. (2017) Abhishek Kumar, Prasanna Sattigeri, and Tom Fletcher. Improved Semi-supervised Learning with GANs using Manifold Invariances. In Advances in Neural Information Processing Systems (NIPS), 2017.
- Lawrence (2005) Neil D. Lawrence. Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models. Journal of Machine Learning Research, 6(Nov):1783–1816, 2005.
- Peltonen et al. (2004) Jaakko Peltonen, Arto Klami, and Samuel Kaski. Improved Learning of Riemannian Metrics for Exploratory Analysis. Neural Networks, 17(8):1087–1100, 2004.
- Que & Belkin (2016) Qichao Que and Mikhail Belkin. Back to the Future: Radial Basis Function Networks Revisited. In Artificial Intelligence and Statistics (AISTATS), 2016.
- Rezende & Mohamed (2015) Danilo Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
- Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
- Rifai et al. (2011) Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.
- Tosi et al. (2014) Alessandra Tosi, Søren Hauberg, Alfredo Vellido, and Neil D. Lawrence. Metrics for Probabilistic Geometries. In The Conference on Uncertainty in Artificial Intelligence (UAI), July 2014.
Appendix A The Derivation of the Geodesic Differential Equation
The shortest path between two points on a Riemannian manifold is found by optimizing the functional

γ̂ = argmin_γ ∫₀¹ √(⟨γ̇(t), M(γ(t)) γ̇(t)⟩) dt,

where γ(0) = z₀ and γ(1) = z₁. The minima of this problem can instead be found by optimizing the curve energy (do Carmo, 1992), so the functional becomes

γ̂ = argmin_γ ½ ∫₀¹ ⟨γ̇(t), M(γ(t)) γ̇(t)⟩ dt.
The inner product can be written explicitly as

⟨γ̇, M(γ) γ̇⟩ = Σ_{i=1}^d Σ_{j=1}^d γ̇_{(i)} M_{(ij)}(γ) γ̇_{(j)},

where the index in parentheses represents the corresponding element of the vector or matrix. In the derivation, ⊗ is the usual Kronecker product and vec(·) stacks the columns of a matrix into a vector. We find the minimizers through the Euler-Lagrange equation

∂L/∂γ = (d/dt) ∂L/∂γ̇,    L(γ, γ̇) = ½ ⟨γ̇, M(γ) γ̇⟩.    (16)
Since the term ∂L/∂γ̇ = M(γ) γ̇, we can write the right hand side of Eq. 16 as

(d/dt) (M(γ) γ̇) = Ṁ(γ) γ̇ + M(γ) γ̈ = (γ̇ᵀ ⊗ I_d) (∂vec[M(γ)]/∂γ) γ̇ + M(γ) γ̈.

The left hand side of Eq. 16 is equal to

∂L/∂γ = ½ (∂vec[M(γ)]/∂γ)ᵀ (γ̇ ⊗ γ̇).

The final system of second order ordinary differential equations is

γ̈ = −½ M(γ)⁻¹ [ 2 (γ̇ᵀ ⊗ I_d) (∂vec[M(γ)]/∂γ) γ̇ − (∂vec[M(γ)]/∂γ)ᵀ (γ̇ ⊗ γ̇) ].
Appendix B The Derivation of the Riemannian Metric
Proof of Theorem 1. As introduced in Eq. 8, the stochastic generator is

g(z) = μ(z) + σ(z) ⊙ ε = μ(z) + diag(ε) σ(z),    ε ∼ N(0, I_D).

Thus, we can compute the corresponding Jacobian as

J_g(z) = J_μ(z) + diag(ε) J_σ(z),

and the resulting “random” metric in the latent space is

M(z) = J_g(z)ᵀ J_g(z) = (J_μ + diag(ε) J_σ)ᵀ (J_μ + diag(ε) J_σ).

The randomness is due to the random variable ε, and thus we can compute the expectation

E_ε[M(z)] = E_ε[J_μᵀ J_μ] + E_ε[J_μᵀ diag(ε) J_σ] + E_ε[J_σᵀ diag(ε) J_μ] + E_ε[J_σᵀ diag(ε)ᵀ diag(ε) J_σ].

Using the linearity of expectation, the two cross terms vanish because E[ε] = 0. For the remaining term, the matrix E[diag(ε)ᵀ diag(ε)] = E[diag(ε²)] = I_D, since the entries of ε are independent with unit variance. So the expectation of the Riemannian metric induced in the latent space by the generator is

E_ε[M(z)] = J_μ(z)ᵀ J_μ(z) + J_σ(z)ᵀ J_σ(z),

which concludes the proof. ∎
Appendix C Influence of Variance on the Marginal Likelihood
We trained a VAE on the digits 0 and 1 of MNIST, with pixel values scaled to the interval [0, 1]. We randomly split the data into training and test sets, ensuring balanced classes. First, we trained only the encoder and the mean function of the decoder. Then, keeping these fixed, we trained two variance functions: one based on a standard deep neural network architecture, and the other using our proposed RBF model. This yields two generators with the same mean function but different variance functions. Below we present the architectures of the standard neural networks. For the RBF model we used 32 centers.
| Encoder/Decoder | Layer 1 | Layer 2 | Layer 3 |
|---|---|---|---|
| Encoder mean | 64 (softplus) | 32 (softplus) | d (linear) |
| Encoder variance | 64 (softplus) | 32 (softplus) | d (softplus) |
| Decoder mean | 32 (softplus) | 64 (softplus) | D (tanh) |
| Decoder variance | 32 (softplus) | 64 (softplus) | D (softplus) |
The numbers correspond to the layer sizes, with the activation functions in parentheses. Further, the mean and the variance functions share the weights of the first layer. The input dimension D is the number of pixels. We then computed the marginal likelihood of the test data by Monte Carlo as

p(x) ≈ (1/S) Σ_{s=1}^S p(x | z_s),    z_s ∼ p(z),
using Monte Carlo samples. The generator with the standard variance function achieved a mean log-marginal likelihood of −68.25, while our proposed model achieved −50.34 (higher is better).
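The estimator can be sketched as follows, with a hypothetical one-dimensional decoder standing in for the trained generator and a log-sum-exp for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_lik(x, z, std=0.5):
    # Gaussian log-likelihood log p(x | z) with a hypothetical decoder mean.
    mu = np.tanh(z)
    return -0.5 * (np.log(2 * np.pi * std**2) + (x - mu)**2 / std**2)

def log_marginal(x, S=10_000):
    # log p(x) ~= log (1/S) sum_s p(x | z_s), z_s ~ p(z) = N(0, 1)
    zs = rng.standard_normal(S)
    ll = log_lik(x, zs)
    m = ll.max()  # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(ll - m)))

lm = log_marginal(0.2)
```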
The reason the proposed RBF model performs better is easy to analyze. The marginal likelihood under the Monte Carlo estimate is essentially a large Gaussian mixture model with equal weights, where each mixture component is defined by the generator through the likelihood p(x | z_s). Considering the variance term, the standard neural network is trained on the given data points and the corresponding latent codes; unfortunately, its behavior is arbitrary in regions with no encoded data. Our proposed model instead assigns large variance to these regions, while in regions with latent codes its behavior is approximately the same as that of the standard network. The resulting marginal likelihoods of the two models are therefore highly similar in regions of high data density, but significantly different elsewhere: the RBF variance model ensures that mixture components in low-density regions have high variance, whereas the standard architecture assigns arbitrary variance. Consequently, the RBF-based model assigns minimal density to regions with no data, and thus attains higher marginal likelihood elsewhere.
Appendix D Implementation Details for the Experiments
The pixel values of the images are scaled to the interval [0, 1]. For the mean and variance functions we use multilayer perceptron (MLP) deep neural networks, and for the precision the proposed RBF model with 64 centers; the curvature parameter of Eq. 11 is set to 2. We used regularization on the network weights.
| Encoder/Decoder | Layer 1 | Layer 2 | Layer 3 |
|---|---|---|---|
| Encoder mean | 64 (tanh) | 32 (tanh) | d (linear) |
| Encoder variance | 64 (tanh) | 32 (tanh) | d (softplus) |
| Decoder mean | 32 (tanh) | 64 (tanh) | D (sigmoid) |
The number corresponds to the size of the layer, with the activation function in parentheses. For the encoder, the mean and the variance functions share the weights of Layer 1. The input dimension D is the number of pixels. After training, the geodesics can be computed by solving Eq. 7 numerically. The LAND mixture model is fitted as explained in (Arvanitidis et al., 2016).
Details for Experiments 5.4.
In this experiment we used a convolutional variational autoencoder. The pixel values of the images are scaled to the interval [0, 1]. For the variance we used the proposed RBF model with 64 centers, and the parameter of Eq. 11 is set to 2.
Considering the variance network during the decoding stage, the RBF generates an image which intuitively represents the total variance of each pixel of the final decoded image, but in an initial sub-sampled version. This image is then passed through a sequence of deconvolution layers, and at the end represents the variance of every pixel in each RGB channel. It is critical that the weights of these filters are clipped during training to be non-negative, to ensure positive variance.
| Encoder | Layer 1 (Conv) | Layer 2 (Conv) | Layer 3 (MLP) | Layer 4 (MLP) |
|---|---|---|---|---|
| Mean | 32, 3, 2 (tanh) | 32, 3, 2 (tanh) | 1024 (tanh) | d (linear) |
| Variance | 32, 3, 2 (tanh) | 32, 3, 2 (tanh) | 1024 (tanh) | d (softplus) |
For the convolutional and deconvolutional layers, the first number is the number of applied filters, the second is the kernel size, and the third is the stride. For the encoder, the mean and the variance functions share the convolutional layers. We used regularization on the network weights.
| Decoder | L. 1 (MLP) | L. 2 (MLP) | L. 3 (DE) | L. 4 (DE) | L. 5 (DE) | L. 6 (CO) |
|---|---|---|---|---|---|---|
| Mean | 1024 (t) | (t) | 32, 3, 2 (t) | 32, 3, 2 (t) | 3, 3, 1 (t) | 3, 3, 1 (s) |
For the decoder, the acronyms (DE) = Deconvolution and (CO) = Convolution, while (t) and (s) stand for tanh and sigmoid, respectively. The output matches the dimensionality of the images, in our case 64 × 64 × 3. For all the convolutions and deconvolutions, the padding is set to “same”. We used regularization on the network weights.
| Decoder | Layer 1 (RBF) | Layer 2 (Deconv) | Layer 3 (Conv) |
|---|---|---|---|
| Variance | 64 centers | 1, 3, 2 (linear) | 3, 3, 1 (linear) |
The Brownian motion over the Riemannian manifold in the latent space is presented in Alg. 2.