1 Introduction
Matrices of pairwise distances serve as the input for a wide range of classic machine learning algorithms, such as k-nearest neighbours, multidimensional scaling, or stationary kernels. In high-dimensional spaces, obtaining a meaningful distance is challenging for two reasons. First, choosing a metric, e.g. an instance of the Minkowski distance, comes with certain assumptions about the data; for example, distances are invariant to rotation under the L2 norm. Second, these distances become increasingly meaningless in higher dimensions
(Aggarwal et al., 2001). Numerous researchers have proposed to learn distances from data, sometimes referred to as metric learning (Xing et al., 2003; Weinberger et al., 2006; Davis et al., 2007; Kulis et al., 2013). For data distributed according to the multivariate normal, the Mahalanobis distance is a popular choice, making the distance measure effectively invariant to translation and scaling. The idea of a linear transformation of the data has been extended to also reflect supervised side information such as class labels
(Weinberger and Saul, 2009; Goldberger et al., 2004). Further work has extended this to nonlinear (Salakhutdinov and Hinton, 2007) and recurrent (Bayer et al., 2012) transformations.
Probabilistic modelling of high-dimensional data has progressed enormously. Two distinct "neural sampling" approaches are generative adversarial networks (GANs)
(Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma and Welling, 2013; Rezende et al., 2014).
This work aims to bring a set of techniques to neural sampling that makes these models powerful tools for metric learning. Pioneering work on interpolation and generation between two given points on a Riemannian manifold includes Noakes et al. (1989) and Crouch and Leite (1995). In addition, principal geodesic analysis (PGA) (Fletcher et al., 2004), which describes the variability of data on a manifold, uses geodesics in principal component analysis. Recent work of Tosi et al. (2014) proposed to treat the latent space of Gaussian process latent variable models (GP-LVMs) as a Riemannian manifold, where the distance between two data points is given as the shortest path along the data manifold.
We transfer the idea of Tosi et al. (2014) to neural samplers. We show how to represent such shortest paths in a parameterized fashion, so that calculating the distance between two data points amounts to minimizing the length of a curve. The method is evaluated on a range of high-dimensional datasets. Further, we provide evidence for the manifold hypothesis (Rifai et al., 2011). We would also like to mention Arvanitidis et al. (2017) and Shao et al. (2017), who independently worked on this topic at the same time as us.
In robotic domains, our approach can be applied to path planning based on learned skills. Demonstrations from experts enable robots to generate particular motions; at the same time, robots require natural motion exploration (Havoutis and Ramamoorthy, 2013). In our method, a Riemannian metric is used to obtain such optimal motion paths along the data manifolds.
2 Using Riemannian geometry in generative latent variable models
Latent variable models are commonly defined as
(1) \quad p(\mathbf{x}) = \int p(\mathbf{x} \mid \mathbf{z})\, p(\mathbf{z})\, \mathrm{d}\mathbf{z},
where latent variables \mathbf{z} are used to explain the data \mathbf{x}.
Assume we want to obtain a distance measure from the learned manifolds which adequately reflects the "similarity" between data points \mathbf{x}_1 and \mathbf{x}_2. If we can infer the corresponding latent variables \mathbf{z}_1 and \mathbf{z}_2, an obvious choice is the Euclidean distance in latent space, \|\mathbf{z}_1 - \mathbf{z}_2\|_2. This carries the implicit assumption that moving a certain distance in latent space moves us proportionally far in observation space. But this is a fallacy: for latent variables to adequately model the data, stark discontinuities in the likelihood are virtually always present. To see this, we note that the prior can be expressed as the posterior aggregated over the data:
p(\mathbf{z}) = \int p(\mathbf{z} \mid \mathbf{x})\, p(\mathbf{x})\, \mathrm{d}\mathbf{x}.
A direct consequence of the discontinuities is that there are no regions of low density in the latent space. Hence, separated manifolds in the observation space (the set of points from different classes) may be placed directly next to each other in latent space—a property that can only be compensated through rapid changes in the likelihood at the respective “borders”.
For the estimation of nonlinear latent variable models we use importance-weighted autoencoders (IWAEs)
(Burda et al., 2015) (see Section 2.1). Treating the latent space as a Riemannian manifold (see Section 2.2) provides tools to define distances between data points while taking into account changes in the likelihood.
2.1 Importance-weighted autoencoder
Inference and learning in models of the form given by Eq. (1), based on the maximum-likelihood principle, are intractable because of the marginalization over the latent variables. Typically, approximations are used which are based either on sampling or on variational inference. In the latter case, the intractable posterior is approximated by a distribution q(\mathbf{z} \mid \mathbf{x}). The problem of inference is then substituted by one of optimization, namely the maximization of the evidence lower bound (ELBO). Let \mathbf{x} be observable data and \mathbf{z} the corresponding latent variables. Further, let p(\mathbf{x} \mid \mathbf{z}) be a likelihood function parameterized by \theta. Then
(2) \quad \log p(\mathbf{x}) \ge \mathbb{E}_{q(\mathbf{z} \mid \mathbf{x})}\big[\log p(\mathbf{x} \mid \mathbf{z})\big] - \mathrm{KL}\big(q(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z})\big).
If we implement q(\mathbf{z} \mid \mathbf{x}) with a neural network parameterized by \phi, we obtain the variational autoencoder of Kingma and Welling (2013), which jointly optimizes the bound with respect to \theta and \phi. Since the inference and generative models are tightly coupled, an inflexible variational posterior has a direct impact on the generative model, causing both models to underuse their capacity.
In order to learn richer latent representations and achieve better generative performance, the importance-weighted autoencoder (IWAE) (Burda et al., 2015; Cremer et al., 2017) has been introduced. It treats q(\mathbf{z} \mid \mathbf{x}) as a proposal distribution and obtains a tighter lower bound via importance sampling:
(3) \quad \log p(\mathbf{x}) \ge \mathbb{E}_{\mathbf{z}_1, \dots, \mathbf{z}_K \sim q(\mathbf{z} \mid \mathbf{x})}\Big[\log \frac{1}{K} \sum_{k=1}^{K} w_k\Big],
where w_k are the importance weights:
(4) \quad w_k = \frac{p(\mathbf{x} \mid \mathbf{z}_k)\, p(\mathbf{z}_k)}{q(\mathbf{z}_k \mid \mathbf{x})}.
The IWAE is the basis of our approach, since it can yield an accurate generative model.
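To make the bound concrete, here is a minimal numerical sketch of Eq. (3) for a toy linear-Gaussian model; the model, its variances, and the proposal are illustrative assumptions, not the architectures used in this work. The importance weights of Eq. (4) are combined in log-space with a numerically stable log-sum-exp:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gaussian(x, mu, var):
    """Log-density of a diagonal Gaussian, summed over the last axis."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def iwae_bound(x, K=50):
    """Monte-Carlo estimate of the K-sample bound of Eq. (3) for a toy model:
    p(z) = N(0, I), p(x|z) = N(z, 0.1 I), q(z|x) = N(x, 0.5 I)."""
    z = x + np.sqrt(0.5) * rng.standard_normal((K, x.size))   # z_k ~ q(z|x)
    log_w = (log_gaussian(x, z, 0.1)      # log p(x | z_k)
             + log_gaussian(z, 0.0, 1.0)  # + log p(z_k)
             - log_gaussian(z, x, 0.5))   # - log q(z_k | x)  (Eq. 4, in logs)
    m = log_w.max()                       # stable log (1/K) sum_k exp(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))

x = np.array([0.3, -0.1])
b1, b50 = iwae_bound(x, K=1), iwae_bound(x, K=50)
```

The K = 1 case recovers a single-sample ELBO estimate; increasing K tightens the bound in expectation.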
2.2 Riemannian geometry
A Riemannian manifold is a differentiable manifold \mathcal{M} equipped with a metric tensor G. The metric tensor assigns to each point \mathbf{z} \in \mathcal{M} an inner product on the tangent space T_{\mathbf{z}}\mathcal{M}, defined as
(5) \quad \langle \mathbf{z}', \mathbf{z}'' \rangle_{\mathbf{z}} = \mathbf{z}'^{\top} G(\mathbf{z})\, \mathbf{z}'',
with \mathbf{z}', \mathbf{z}'' \in T_{\mathbf{z}}\mathcal{M}.
Consider a curve \gamma : [0, 1] \to \mathcal{M} in the Riemannian manifold, transformed by a continuous function f to an N_x-dimensional observation space. The length of the curve in the observation space is defined as
(6) \quad L(\gamma) = \int_0^1 \Big\| \frac{\mathrm{d}}{\mathrm{d}t} f(\gamma(t)) \Big\|_2 \, \mathrm{d}t
(7) \quad \phantom{L(\gamma)} = \int_0^1 \sqrt{\gamma'(t)^{\top} J^{\top} J\, \gamma'(t)} \, \mathrm{d}t,
where J is the Jacobian of f, so that G = J^{\top} J plays the role of the metric tensor.
3 Approximating the geodesic
In this work, we are primarily interested in lengthminimizing curves between samples of generative models. In Riemannian geometry, locally lengthminimizing curves are referred to as geodesics. We treat the latent space of generative models as a Riemannian manifold. This allows us to parametrize the curve in the latent space, while distances are measured by taking into account distortions from the generative model.
We use a neural network g to approximate the curve \gamma in the latent space, where the network's weights and biases are the free parameters.
The function f from Section 2.2 corresponds to the mean of the generative model's probability distribution, and the components of the Jacobian J are
(8) \quad J_{ij} = \frac{\partial x_i}{\partial z_j},
where x_i and z_j denote the i-th and j-th elements of the generated data point \mathbf{x} and the latent variable \mathbf{z}, respectively, with 1 \le i \le N_x and 1 \le j \le N_z.
We approximate the integral of Eq. (7) with N equidistantly spaced sampling points t_n of \gamma:
(9) \quad L(\gamma) = \int_0^1 \sqrt{\gamma'(t)^{\top} G(\gamma(t))\, \gamma'(t)}\, \mathrm{d}t
(10) \quad \phantom{L(\gamma)} \approx \frac{1}{N} \sum_{n=1}^{N} \sqrt{\gamma'(t_n)^{\top} G(\gamma(t_n))\, \gamma'(t_n)}.
The term inside the summation can be interpreted as the rate of change at point t_n induced by the generative model; we will refer to it as the velocity:
(11) \quad v(t_n) = \sqrt{\gamma'(t_n)^{\top} G(\gamma(t_n))\, \gamma'(t_n)}.
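The discretized length and velocity can be sketched numerically. The two-to-three-dimensional decoder below is an illustrative stand-in for the generative model's mean, and the Jacobian is estimated by finite differences rather than automatic differentiation:

```python
import numpy as np

def decoder(z):
    """Toy decoder mean f: R^2 -> R^3; a stand-in for the trained IWAE decoder."""
    return np.array([np.sin(z[0]), np.cos(z[1]), z[0] * z[1]])

def jacobian_fd(f, z, eps=1e-5):
    """Finite-difference estimate of the Jacobian J_ij = dx_i / dz_j (Eq. 8)."""
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f0) / eps
    return J

def curve_length(f, curve, t):
    """Discretized curve length (Eq. 10): average of velocities (Eq. 11)."""
    dgamma = np.gradient(curve, t, axis=0)   # gamma'(t_n) by finite differences
    vels = []
    for n in range(len(t)):
        J = jacobian_fd(f, curve[n])
        G = J.T @ J                          # pullback metric G = J^T J
        vels.append(np.sqrt(dgamma[n] @ G @ dgamma[n]))
    return float(np.mean(vels)), np.array(vels)

z0, z1 = np.array([0.0, 0.0]), np.array([3.0, 4.0])
t = np.linspace(0, 1, 50)
line = np.outer(1 - t, z0) + np.outer(t, z1)   # straight latent-space line
length, vel = curve_length(decoder, line, t)
```

As a sanity check, for an identity decoder the discretized length of a straight line reduces to the Euclidean distance between its endpoints.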
An approximation of the geodesic between two points in the latent space is obtained by minimizing the length in Eq. (10), where the weights and biases of the neural network are subject to optimization.
With the start and end points of the curve in the latent space given as \mathbf{z}_0 and \mathbf{z}_1, we consider the following constrained optimization problem:
(12) \quad \min L(\gamma) \quad \text{s.t.} \quad \gamma(0) = \mathbf{z}_0, \;\; \gamma(1) = \mathbf{z}_1.
3.1 Dealing with boundary constraints
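One simple way to satisfy the boundary constraints of Eq. (12) is to discretize the curve and optimize only its interior points, so that the endpoints are fixed by construction. The sketch below uses an assumed toy decoder and a finite-difference gradient instead of a neural curve parameterization:

```python
import numpy as np

def decoder(z):
    """Toy decoder mean; stands in for the trained generative model."""
    return np.array([np.tanh(3 * z[0]), np.tanh(3 * z[1]), z[0] ** 2 + z[1] ** 2])

def discrete_length(points):
    """Sum of observation-space segment lengths along the discretized curve."""
    xs = np.array([decoder(p) for p in points])
    return np.sum(np.linalg.norm(np.diff(xs, axis=0), axis=1))

def approximate_geodesic(z0, z1, n_points=16, steps=150, lr=0.02, eps=1e-4):
    """Gradient descent on the interior points only, so the boundary
    constraints gamma(0) = z0 and gamma(1) = z1 hold by construction."""
    t = np.linspace(0, 1, n_points)
    pts = np.outer(1 - t, z0) + np.outer(t, z1)   # initialize as straight line
    for _ in range(steps):
        grad = np.zeros_like(pts)
        for n in range(1, n_points - 1):          # endpoints stay fixed
            for d in range(pts.shape[1]):
                bump = np.zeros_like(pts)
                bump[n, d] = eps
                grad[n, d] = (discrete_length(pts + bump)
                              - discrete_length(pts - bump)) / (2 * eps)
        pts -= lr * grad
    return pts

z0, z1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
geodesic = approximate_geodesic(z0, z1)
```

Starting from the straight line guarantees that the optimized curve is never longer than the Euclidean interpolation in observation space.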
3.2 Smoothing the metric tensor
To ensure that the geodesic follows the data manifold, which entails that the manifold distance is smaller than the Euclidean distance, a penalization term is added to smooth the metric tensor G. This leads to the following loss function:
(15) \quad \mathcal{L}(\gamma) = L(\gamma) + \lambda\, R(G),
where \lambda acts as a regularization coefficient. This optimization step is implemented as a post-processing of the Jacobian via singular-value decomposition (SVD):
(16) \quad J = U \Sigma V^{\top},
where the columns of U are the eigenvectors of the covariance matrix J J^{\top}, and the columns of V are the eigenvectors of J^{\top} J. The diagonal entries of \Sigma contain the singular values, which carry scaling information about how a vector is stretched or shrunk when it is transformed from the row space of J to its column space. Minimizing the penalization term is equivalent to a low-rank reconstruction of J:
(17) \quad \hat{J} = \sum_{i=1}^{r} \hat{\sigma}_i\, \mathbf{u}_i \mathbf{v}_i^{\top},
where r is a predefined rank, equal to or slightly smaller than the full rank of the metric tensor, and \hat{\sigma}_i are nonlinearly rescaled singular values. The rescaling makes the smaller singular values much smaller relative to the leading ones. The smoothing therefore weakens the reconstructed off-diagonal values of the metric tensor, which correspondingly reduces the manifold distance dramatically compared to the Euclidean distance. The effect is that a higher regularization coefficient increases the difference between the Euclidean interpolation and the path along the manifold, as will be demonstrated experimentally in Section 4.3.
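A minimal sketch of the SVD-based smoothing: the Jacobian is decomposed as in Eq. (16), truncated to rank r, and its singular values are shrunk nonlinearly. The exponent-based shrinkage below is an illustrative choice, not necessarily the exact rescaling used in this work:

```python
import numpy as np

def smooth_jacobian(J, r, lam=1.0):
    """Low-rank SVD reconstruction of a decoder Jacobian: keep the r leading
    singular values and shrink them nonlinearly, so the reconstructed metric
    G = J^T J is dominated by the leading directions. The shrinkage scheme
    here is an assumption for illustration."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_hat = np.zeros_like(s)
    # power-law shrinkage: small sigmas shrink faster, the leading one is kept
    s_hat[:r] = s[:r] ** (1 + lam) / s[0] ** lam
    return U @ np.diag(s_hat) @ Vt

J = np.array([[3.0, 0.1], [0.1, 1.0], [0.2, 0.1]])
J_hat = smooth_jacobian(J, r=1, lam=1.0)
```

With r = 1, the reconstructed Jacobian keeps only the dominant direction of distortion, which is exactly the effect that shortens distances along the manifold.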
4 Experiments
We evaluate our approach by conducting a series of experiments on four different datasets: an artificial pendulum dataset, the binarized MNIST digit dataset
(Larochelle and Murray, 2011), a simulated robot arm dataset, and a human motion dataset (http://mocap.cs.cmu.edu/).
Our goal is to enable smooth interpolations between the reconstructed images of an importance-weighted autoencoder and to differentiate between classes within the latent space. To show that the paths of geodesics can differ from Euclidean interpolations, the following experiments mainly focus on comparing geodesics with Euclidean interpolations, as well as the reconstructed data generated from points along their paths.
4.1 Training
In all experiments, we chose a standard Gaussian prior, p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, \mathbf{I}). The inference model and the likelihood are represented by random variables whose parameters are functions of the respective conditions.
For the inference model we consistently used a diagonal Gaussian, i.e. q(\mathbf{z} \mid \mathbf{x}) = \mathcal{N}\big(\mathbf{z} \mid \mu_{\phi}(\mathbf{x}), \mathrm{diag}(\sigma_{\phi}^2(\mathbf{x}))\big). Depending on the experiment, the likelihood represents either a Bernoulli or a Gaussian variable. In the Gaussian case, the variance is a global variable, while the remaining parameters are functions of the latent variables, represented by neural networks parameterized by \theta (generative) and \phi (inference), respectively.
The hyperparameters of the curve model are summarized in Table 1. We used sigmoid, tanh, and softplus activation functions in the generative model (see App. B and C). See App. A for further details of the training procedure.
4.2 Visualization
There are several approaches to visualize the properties of the metric tensor, including Tissot’s indicatrix. We use the magnification factor to visualize metric tensors during the evaluation, when we have two latent dimensions. The magnification factor (Bishop et al., 1997) is defined as
(18) \quad \mathrm{MF}(\mathbf{z}) = \sqrt{\det\big(J^{\top} J\big)} = \sqrt{\det G(\mathbf{z})}.
To get an intuitive understanding of the magnification factor, it is helpful to consider the change-of-variables rule, in which infinitesimal volumes are scaled by |\det J|. This rule relates infinitesimal volumes of different equidimensional Euclidean spaces. The same rule can be applied to express the relationship between infinitesimal volumes of a Euclidean space and a Riemannian manifold, with the difference of using the MF instead of |\det J|. Hence, the magnification factor visualizes how much an infinitesimal volume changes when a point is mapped from the Riemannian manifold to the Euclidean space (Gemici et al., 2016).
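The magnification factor of Eq. (18) is straightforward to estimate numerically; the decoder below is an illustrative toy function, and the Jacobian is obtained by finite differences:

```python
import numpy as np

def decoder(z):
    """Toy decoder mean f: R^2 -> R^3, used only for illustration."""
    return np.array([np.sin(z[0]), np.cos(z[1]), 2 * z[0]])

def magnification_factor(f, z, eps=1e-5):
    """MF(z) = sqrt(det(J^T J)) with a finite-difference Jacobian (Eq. 18)."""
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f0) / eps
    return np.sqrt(np.linalg.det(J.T @ J))
```

For the identity map the magnification factor is 1, and for a map that scales a two-dimensional latent space by a factor of 2 it is 4, matching the volume-change interpretation.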
4.3 Pendulum
We created an artificial dataset of
pixel images of a pendulum with a joint angle as the only degree of freedom, and augmented it by adding 0.05 per-pixel Gaussian noise. We generated 15,000 pendulum images, with joint angles uniformly distributed in the range [0, 360). The architecture of the IWAE can be found in Table 2.
Fig. 2 illustrates the trained two-dimensional latent space of the IWAE. The grayscale in the background is proportional to the magnification factor, while the rotation angles of the pendulum are encoded by colors. The comparison of the geodesic (see Fig. 1, top row) with the Euclidean interpolation (Fig. 1, middle row) shows a much more uniform rotation of the pendulum for reconstructed images of points along the geodesic.
For this dataset, an SVD regularization with large values of the regularization coefficient was necessary for the optimization to yield a path along the data manifold. Fig. 3 illustrates the influence of the regularization on the distance metric. It is a property of this dataset, owing to the generative distribution, that small regularization values lead to shorter distances for the Euclidean interpolation than for paths along the data manifold. Fig. 4 shows the interpolations of 100 pairs of samples. We compared the geodesic with the Euclidean interpolation and an interpolation along the data manifold. The samples were randomly chosen, subject to having a difference in rotation angle in (0, 180] degrees. The distances of the geodesics and of the paths along the data manifold are linearly correlated with the angles between the two points in the observation space and agree with each other.
4.4 MNIST
To evaluate our model on a benchmark dataset, we used the fixed binarized version of the MNIST digit dataset defined by Larochelle and Murray (2011). It consists of 50,000 training and 10,000 test images of handwritten digits (0 to 9), each 28×28 pixels in size. The architecture of the IWAE is summarized in Table 3.
Fig. 6 shows the trained two-dimensional latent space of the IWAE. Distances between the selected data point and any point on an equidistance line are equal in the observation space. The courses of the equidistance lines demonstrate that treating the latent space as a Riemannian manifold makes it possible to separate classes, since the geodesic between similar data points is shorter than between dissimilar ones. This is especially useful for state-of-the-art methods that lead to very tight boundaries, as in this case: data points of different MNIST classes are almost inseparable in the latent space by their Euclidean distance. Hence, the Euclidean distance cannot reflect the true similarity of two data points.
The difference between the geodesic and the Euclidean interpolation is shown in Fig. 7. The Euclidean interpolation crosses four classes, the geodesic just two. Compared to the geodesic, the Euclidean interpolation leads to less smooth transitions in the reconstructions (see Fig. 5, top and middle row). The transition between different classes is visualized by a higher velocity in this area (see Fig. 5, bottom row).
4.5 Robot arm
We simulated a movement of a KUKA robot arm with six degrees of freedom (DOF). The end effector moved along a circle with a radius of 0.4 meters, generating a dataset with 6,284 time steps. At each time step, the joint angles were obtained by inverse kinematics. The input data consisted of six-dimensional joint angles. Gaussian noise with a standard deviation of 0.03 was added to the data. The validation dataset also comprised a complete circular movement, but with only 150 time steps. The architecture of the IWAE is shown in Table 4.
The geodesic interpolation outperforms the Euclidean interpolation for the robot arm movement, as demonstrated in Figs. 8, 9, and 10. For intuitive inspection, the results are shown in a two-dimensional end-effector Cartesian space obtained via forward kinematics (see Fig. 9).
To plan motions efficiently, prior works (Berenson et al., 2009) created constraints in the task space (e.g., constraining the end effector to move in 2D instead of 3D). Our method, however, does not explicitly require such constraints.
The approach can be applied to movements with higher-dimensional joint angles, as in the case of the full-body humanoids demonstrated in Section 4.6.
4.6 Human motion
The CMU Graphics Lab Motion Capture Database consists of a large number of human motion recordings, captured with a Vicon motion capture system. Human subjects wear 41 markers while walking, dancing, etc. The data consists of 62-dimensional feature vectors, rendered using Vicon BodyBuilder. We preprocess the 62-dimensional data into 50-dimensional vectors as described in (Chen et al., 2015). To evaluate the metric on this dataset, we used the walking movements (viz. trials 1 to 16) of subject 35, since this motion is very stable and widely used for algorithm evaluation, e.g. (Schölkopf et al., 2007; Bitzer et al., 2008; Chen et al., 2016). The total of 6,616 frames in the dataset was augmented with Gaussian noise with a standard deviation of 0.03, resulting in four times the size of the original dataset. The added noise smooths the latent space, which can be observed through the magnification factor and the interpolation reconstructions. The architecture of the IWAE can be found in Table 5.
Figs. 11 and 12 show the geodesic and the Euclidean interpolation (https://github.com/lawrennd/mocap is used to visualize the movement in the observation space). The geodesic follows the path along the data manifold and generates a natural and smooth walking movement. In contrast, the Euclidean interpolation traverses two regions of high magnification factor, which cause large jumps in the movement, while the body poses hardly change in the other regions.
5 Conclusion and future work
The distance between points in the latent space generally does not reflect the true similarity of the corresponding points in the observation space. We gave insight into these issues using techniques from Riemannian geometry, applied to probabilistic latent variable models based on neural sampling.
In our approach, the Riemannian distance has been successfully applied as an alternative metric that takes the underlying manifold into account. To produce shorter distances along the manifold than the Euclidean distance, we applied SVD to the metric tensor. As a secondary effect, the metric can be used for smoother interpolations in the latent space.
For two-dimensional latent spaces, the magnification factor serves as a powerful tool to visualize the magnitude of the generative model's distortion of infinitesimal areas in the latent space.
References
 Aggarwal et al. [2001] C. C. Aggarwal, A. Hinneburg, and D. A. Keim. On the surprising behavior of distance metrics in high dimensional spaces. In The International Conference on Database Theory (ICDT), volume 1, pages 420–434. Springer, 2001.
 Arvanitidis et al. [2017] G. Arvanitidis, L. K. Hansen, and S. Hauberg. Latent space oddity: on the curvature of deep generative models. arXiv, abs/1710.11379, 2017.
 Bayer et al. [2012] J. Bayer, C. Osendorfer, and P. van der Smagt. Learning sequence neighbourhood metrics. In Artificial Neural Networks and Machine Learning (ICANN), pages 531–538, 2012.
 Berenson et al. [2009] D. Berenson, S. S. Srinivasa, D. Ferguson, and J. J. Kuffner. Manipulation planning on constraint manifolds. In International Conference on Robotics and Automation (ICRA), pages 625–632, 2009.

Bishop et al. [1997] C. M. Bishop, M. Svensén, and C. K. Williams. Magnification factors for the SOM and GTM algorithms. In Proceedings of the Workshop on Self-Organizing Maps, 1997.
 Bitzer et al. [2008] S. Bitzer, I. Havoutis, and S. Vijayakumar. Synthesising novel movements through latent space modulation of scalable control policies. In International Conference on Simulation of Adaptive Behaviour (SAB), 2008.
 Burda et al. [2015] Y. Burda, R. B. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. CoRR, abs/1509.00519, 2015.
 Chen et al. [2015] N. Chen, J. Bayer, S. Urban, and P. van der Smagt. Efficient movement representation by embedding dynamic movement primitives in deep autoencoders. In International Conference on Humanoid Robots (HUMANOIDS), pages 434–440, 2015.
 Chen et al. [2016] N. Chen, M. Karl, and P. van der Smagt. Dynamic movement primitives in latent space of timedependent variational autoencoders. In International Conference on Humanoid Robots (HUMANOIDS), 2016.
 Cremer et al. [2017] C. Cremer, Q. Morris, and D. Duvenaud. Reinterpreting importanceweighted autoencoders. International Conference on Learning Represenations Workshop Track, 2017.
 Crouch and Leite [1995] P. Crouch and F. S. Leite. The dynamic interpolation problem: on Riemannian manifolds, Lie groups, and symmetric spaces. Journal of Dynamical and control systems, 1(2):177–202, 1995.
 Davis et al. [2007] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Informationtheoretic metric learning. In Proceedings of the 24th international conference on Machine learning (ICML), pages 209–216, 2007.
 De Casteljau [1986] P. d. F. De Casteljau. Shape mathematics and CAD, volume 2. Kogan Page, 1986.
 Fletcher et al. [2004] P. T. Fletcher, C. Lu, S. M. Pizer, and S. Joshi. Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE transactions on medical imaging, 23(8):995–1005, 2004.
 Gemici et al. [2016] M. C. Gemici, D. J. Rezende, and S. Mohamed. Normalizing flows on Riemannian manifolds. CoRR, abs/1611.02304, 2016.
 Goldberger et al. [2004] J. Goldberger, S. T. Roweis, G. E. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems (NIPS), pages 513–520, 2004.
 Goodfellow et al. [2014] I. J. Goodfellow, J. PougetAbadie, M. Mirza, B. Xu, D. WardeFarley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
 Havoutis and Ramamoorthy [2013] I. Havoutis and S. Ramamoorthy. Motion generation with geodesic paths on learnt skill manifolds. Modeling, Simulation and Optimization of Bipedal Walking, 18:43, 2013.

He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
 Karl et al. [2017] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. International Conference on Learning Representations (ICLR), 2017.
 Kingma and Ba [2014] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
 Kingma and Welling [2013] D. P. Kingma and M. Welling. Autoencoding variational Bayes. CoRR, abs/1312.6114, 2013.
 Kulis et al. [2013] B. Kulis et al. Metric learning: A survey. Foundations and Trends® in Machine Learning, 5(4):287–364, 2013.

Larochelle and Murray [2011] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 29–37, 2011.
 Noakes et al. [1989] L. Noakes, G. Heinzinger, and B. Paden. Cubic splines on curved spaces. IMA Journal of Mathematical Control and Information, 6(4):465–473, 1989.

Rezende et al. [2014] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 1278–1286, 2014.
 Rifai et al. [2011] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems (NIPS), pages 2294–2302, 2011.
 Salakhutdinov and Hinton [2007] R. Salakhutdinov and G. E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS), pages 412–419, 2007.
 Schölkopf et al. [2007] B. Schölkopf, J. Platt, and T. Hofmann. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems (NIPS), pages 1345–1352, 2007.
 Shao et al. [2017] H. Shao, A. Kumar, and P. T. Fletcher. The Riemannian geometry of deep generative models. arXiv, abs/1711.08014, 2017.
 Tosi et al. [2014] A. Tosi, S. Hauberg, A. Vellido, and N. D. Lawrence. Metrics for probabilistic geometries. Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, pages 800–808, 2014.
 Weinberger and Saul [2009] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
 Weinberger et al. [2006] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in neural information processing systems (NIPS), pages 1473–1480, 2006.
 Xing et al. [2003] E. P. Xing, M. I. Jordan, S. J. Russell, and A. Y. Ng. Distance metric learning with application to clustering with sideinformation. In Advances in neural information processing systems (NIPS), pages 521–528, 2003.
Appendix A Details of the training procedure
To avoid local minima with narrow spikes of velocity but low overall length, we validate the result during training based on the maximum velocity of Eq. (11) and the path length of Eq. (10); the acceptance threshold is a hyperparameter.
We found that training with batch gradient descent and the loss defined in Eq. (10) is prone to local minima. Therefore, we pretrain the neural network on random parametric curves. As random curves we chose Bézier curves [De Casteljau, 1986], whose control points are obtained as follows: we take points on the straight line between \mathbf{z}_0 and \mathbf{z}_1 as the centers of uniform distributions, each with its support orthogonal to that straight line. From each of these uniform distributions we draw one sample, obtaining a set of random points. Together with \mathbf{z}_0 and \mathbf{z}_1, these define the control points of the Bézier curve. For each of the random curves, we fit a separate curve model to the points of the curve and select the model with the lowest validation value as the pretrained model. Afterwards, we proceed with the optimization of the loss in Eq. (10).
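The pretraining targets can be sketched as follows. The number of control points and their centers at 1/4, 1/2, and 3/4 of the line between the endpoints are illustrative assumptions (the original values are not specified above), and a two-dimensional latent space is assumed for the orthogonal direction:

```python
import numpy as np

rng = np.random.default_rng(1)

def de_casteljau(control, t):
    """Evaluate a Bezier curve at t by repeated linear interpolation."""
    pts = control.copy()
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def random_bezier(z0, z1, n_points=20, spread=1.0):
    """Sample one random Bezier curve between z0 and z1: perturb points on the
    straight line orthogonally by uniform noise and use them, together with
    the endpoints, as control points (2-D latent space assumed)."""
    direction = (z1 - z0) / np.linalg.norm(z1 - z0)
    normal = np.array([-direction[1], direction[0]])   # orthogonal in 2-D
    # assumed centers at 1/4, 1/2, 3/4 of the connecting line
    centers = [z0 + a * (z1 - z0) for a in (0.25, 0.5, 0.75)]
    control = np.array([z0] + [c + rng.uniform(-spread, spread) * normal
                               for c in centers] + [z1])
    return np.array([de_casteljau(control, t)
                     for t in np.linspace(0, 1, n_points)])

curve = random_bezier(np.array([0.0, 0.0]), np.array([2.0, 0.0]))
```

By construction, every sampled curve starts at z0 and ends at z1, so the boundary constraints of the geodesic problem are respected during pretraining.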
Appendix B Gradients of piecewise linear activation functions
Note that calculating the gradient of the loss involves calculating gradients of the Jacobian as well. Therefore, optimization with gradient-based methods is not possible when the generative model uses piecewise linear units. This can be illustrated with the example of a neural network with one hidden layer, f(\mathbf{z}) = h_2\big(W_2\, h_1(W_1 \mathbf{z})\big), whose Jacobian is J = \mathrm{diag}\big(h_2'(\mathbf{a}_2)\big)\, W_2\, \mathrm{diag}\big(h_1'(\mathbf{a}_1)\big)\, W_1, with \mathbf{a}_1 = W_1 \mathbf{z} and \mathbf{a}_2 = W_2\, h_1(\mathbf{a}_1). Differentiating J with respect to z_k by the product rule gives
(19) \quad \frac{\partial J}{\partial z_k} = \mathrm{diag}\Big(h_2''(\mathbf{a}_2) \odot \frac{\partial \mathbf{a}_2}{\partial z_k}\Big)\, W_2\, \mathrm{diag}\big(h_1'(\mathbf{a}_1)\big)\, W_1 + \mathrm{diag}\big(h_2'(\mathbf{a}_2)\big)\, W_2\, \mathrm{diag}\Big(h_1''(\mathbf{a}_1) \odot (W_1)_{:,k}\Big)\, W_1.
Both terms in Eq. (19) contain a factor that involves twice differentiating a layer's activation function. In the case of piecewise linear units, the first derivative is piecewise constant, and hence the second differentiation yields zero.
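A quick numerical illustration of this point: away from its kink, a piecewise linear unit such as ReLU has zero second derivative, whereas a smooth unit such as tanh does not:

```python
import numpy as np

def second_derivative(h, a, eps=1e-4):
    """Central finite-difference estimate of h''(a)."""
    return (h(a + eps) - 2 * h(a) + h(a - eps)) / eps ** 2

relu = lambda a: np.maximum(a, 0.0)
tanh = np.tanh

# Away from the kink, the piecewise linear unit has zero curvature, so every
# term of the Jacobian's derivative vanishes; the smooth unit does not.
curv_relu = second_derivative(relu, 1.3)
curv_tanh = second_derivative(tanh, 1.3)
```

This is why the experiments use sigmoid, tanh, and softplus activations in the generative model.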
Appendix C Gradients of sigmoid, tanh and softplus activation functions
The Jacobian is easily obtained for sigmoid, tanh, and softplus activation functions. Taking one layer with the sigmoid activation function \sigma as an example, the Jacobian is
(20) \quad J = \mathrm{diag}\big(\sigma(W\mathbf{z}) \odot (1 - \sigma(W\mathbf{z}))\big)\, W,
where W denotes the weights; for tanh, the diagonal factor becomes 1 - \tanh^2(W\mathbf{z}).
With a softplus activation function, the Jacobian is
(21) \quad J = \mathrm{diag}\big(\sigma(W\mathbf{z})\big)\, W.
Consequently, the derivative of the Jacobian is straightforward to compute.
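As a sanity check, the analytic Jacobian of Eq. (20) for a single sigmoid layer can be compared against finite differences; the weight matrix below is randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 2))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def layer(z):
    """Single sigmoid layer x = sigma(W z)."""
    return sigmoid(W @ z)

def jacobian_analytic(z):
    """Eq. (20): J = diag(sigma(Wz) * (1 - sigma(Wz))) W."""
    s = sigmoid(W @ z)
    return (s * (1 - s))[:, None] * W

def jacobian_fd(f, z, eps=1e-6):
    """Finite-difference Jacobian for comparison."""
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f0) / eps
    return J

z = np.array([0.4, -0.7])
```

The two Jacobians agree to within finite-difference accuracy, confirming Eq. (20).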
Appendix D Experiment setups
We used the Adam optimizer [Kingma and Ba, 2014] for all experiments. FC in the tables refers to fully-connected layers. In Table 3, for the generative model architecture we used an MLP with residual connections [He et al., 2016], in addition to the input and output layers. K is the number of importance-weighted samples in Eq. (3).

Table 1: architecture and hyperparameters of the curve model.
architecture | hyperparameters
Input | learning rate =
2 tanh FC 150 units | 500 sample points
Output |
Table 2: IWAE architecture for the pendulum dataset.
recognition model | generative model | hyperparameters
Input | Input | learning rate =
2 tanh FC 512 units | 2 tanh FC 512 units | K = 50
linear FC output layer for means | softplus FC output layer for means | batch size = 20
softplus FC output layer for variances | global variable for variances |
Table 3: IWAE architecture for the MNIST dataset.
recognition model | generative model | hyperparameters
Input | Input | learning rate =
2 tanh FC 512 units | 7 residual 128 units |
linear FC output layer for means | softplus FC output layer for means | batch size = 20
softplus FC output layer for variances | global variable for variances |
Table 4: IWAE architecture for the robot arm dataset.
recognition model | generative model | hyperparameters
Input | Input | learning rate =
2 tanh FC 512 units | 2 tanh FC 512 units |
linear FC output layer for means | softplus FC output layer for means | batch size = 150
softplus FC output layer for variances | global variable for variances |
Table 5: IWAE architecture for the human motion dataset.
recognition model | generative model | hyperparameters
Input | Input | learning rate =
3 tanh FC 512 units | 3 tanh FC 512 units |
linear FC output layer for means | softplus FC output layer for means | batch size = 150
softplus FC output layer for variances | global variable for variances |