TensorFlow implementation of Hyperspherical Variational Auto-Encoders
The Variational Auto-Encoder (VAE) is one of the most used unsupervised machine learning models. But although the default choice of a Gaussian distribution for both the prior and posterior represents a mathematically convenient distribution often leading to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or S-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal VAE (N-VAE) in low dimensions on other data types.
First introduced by Kingma and Welling (2013) and Rezende et al. (2014), the Variational Auto-Encoder (VAE) is an unsupervised generative model that presents a principled fashion for performing variational inference using an auto-encoding architecture. Applying the non-centered parameterization of the variational posterior (Kingma and Welling, 2014) further simplifies sampling and yields low-variance gradient estimates for training. Although the default choice of a Gaussian prior is mathematically convenient, we can show through a simple example that in some cases it breaks the assumption of an uninformative prior, leading to unstable results. Imagine a dataset on the circle S^1 that is subsequently embedded in a higher-dimensional space by a smooth, non-linear transformation. Given two hidden units, an auto-encoder quickly discovers the latent circle, while a normal VAE becomes highly unstable. This is to be expected, as a Gaussian prior is concentrated around the origin, while the KL divergence tries to reconcile the differences between the approximate posterior and the prior. A more detailed discussion of this 'manifold mismatch' problem follows in Subsection 2.3.
The fact that some data types, like directional data, are better explained through spherical representations is long known and well documented (Mardia, 1975; Fisher et al., 1987), with examples spanning from protein structure to observed wind directions. Moreover, for many modern problems such as text analysis or image classification, data is often first normalized in a preprocessing step to focus on the directional distribution. Yet, few machine learning methods explicitly account for the intrinsically spherical nature of some data in the modeling process. In this paper, we propose to use the von Mises-Fisher
(vMF) distribution as an alternative to the Gaussian distribution. This replacement leads to a hyperspherical latent space as opposed to a hyperplanar one, where the uniform distribution on the hypersphere is conveniently recovered as a special case of the vMF. Hence this approach allows for a truly uninformative prior, and has a clear advantage in the case of data with a hyperspherical interpretation. This was previously attempted by Hasnat et al. (2017), but crucially they do not learn the concentration parameter κ around the mean μ.
In order to enable training of the concentration parameter, we extend the reparameterization trick for rejection sampling as recently outlined in Naesseth et al. (2017) to allow for additional transformations. We then combine this with the rejection sampling procedure proposed by Ulrich (1984) to efficiently reparameterize the VAE (code available at https://github.com/nicola-decao/s-vae).
We demonstrate the utility of replacing the normal distribution with the von Mises-Fisher distribution for generating latent representations by conducting a range of experiments in three distinct settings. First, we show that our S-VAEs outperform VAEs with a Gaussian variational posterior (N-VAEs) in recovering a hyperspherical latent structure. Second, we conduct a thorough comparison with N-VAEs on the MNIST dataset through an unsupervised learning task and a semi-supervised learning scenario. Finally, we show that S-VAEs can significantly improve link prediction performance on citation network datasets in combination with a Variational Graph Auto-Encoder (VGAE) (Kipf and Welling, 2016).
In the VAE setting, we have a latent variable model for the data, where z denotes the latent variables, x is a vector of observed variables, and p_θ(x, z) is a parameterized model of the joint distribution. Our objective is to optimize the log-likelihood of the data, log p_θ(x). When p_θ is parameterized by a neural network, marginalizing over the latent variables is generally intractable. One way of solving this issue is to maximize the Evidence Lower Bound (ELBO)

log p_θ(x) ≥ E_{q(z|x)}[log p_θ(x|z)] − KL(q(z|x) ‖ p(z)),
where q(z|x) is the approximate posterior distribution, belonging to a family Q. The bound is tight if q(z|x) = p(z|x), meaning q is optimized to approximate the true posterior. While in theory q should be optimized for every data point x, to make inference scalable to larger datasets the VAE setting introduces an inference network q_φ(z|x), parameterized by a neural network, that outputs a probability distribution for each data point. The final objective is therefore to maximize the ELBO over the entire dataset.
In the original VAE, both the prior and the posterior are defined as normal distributions. We can further efficiently approximate the ELBO by Monte Carlo estimates using the reparameterization trick (Kingma and Welling, 2013; Rezende et al., 2014). This is done by expressing a sample z ~ q_φ(z|x) as z = h(ε, x), where h is a reparameterization transformation and ε is a noise random variable independent of x.
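The Gaussian case of this trick can be sketched in a few lines. This is a minimal NumPy illustration (the function name and the log-variance parameterization are conventional choices, not taken from the paper):

```python
import numpy as np

def reparameterize(mu, log_var, eps):
    # z = mu + sigma * eps, with eps ~ N(0, I) drawn independently of the
    # parameters, so gradients w.r.t. mu and log_var flow through a
    # deterministic transformation of the noise.
    return mu + np.exp(0.5 * log_var) * eps
```

In a real framework, `eps` would be sampled inside the forward pass while `mu` and `log_var` are outputs of the inference network.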
In low dimensions, the Gaussian density presents a concentrated probability mass around the origin, encouraging points to cluster in the center. This is particularly problematic when the data is divided into multiple clusters. Although an ideal latent space should separate clusters for each class, the normal prior will encourage all the cluster centers towards the origin. An ideal prior would only stimulate the variance of the posterior without forcing its mean to be close to the center. A prior satisfying these properties is a uniform over the entire space. Such a uniform prior, however, is not well defined on the hyperplane.
It is a well-known phenomenon that in high dimensions the standard Gaussian distribution tends to resemble a uniform distribution on the surface of a hypersphere, with the vast majority of its mass concentrated on the hyperspherical shell. Hence it appears interesting to compare the behavior of a Gaussian approximate posterior with an approximate posterior naturally defined on the hypersphere. This is also motivated from a theoretical point of view, since the Gaussian definition is based on the L2 norm, which suffers from the curse of dimensionality.
Once we let go of the hyperplanar assumption, the possibility of a uniform prior on the hypersphere opens up. Mirroring our discussion in the previous subsection, such a prior would exhibit no pull towards the origin allowing clusters of data to evenly spread over the surface with no directional bias. Additionally, in higher dimensions, the cosine similarity is a more meaningful distance measure than the Euclidean norm.
In general, exploring VAE models that allow a mapping to distributions in a latent space not homeomorphic to R^d is of fundamental interest. Consider data lying on a small m-dimensional manifold M, embedded in a much higher dimensional space R^n. For most real data, this manifold will likely not be homeomorphic to R^d. An encoder can be considered as a smooth map f from the original space R^n to R^d. The restriction of f to M, f|_M, will also be a smooth mapping. However, since M is not homeomorphic to R^d when d = m, f|_M cannot be a homeomorphism: there exists no invertible and globally continuous mapping between the coordinates of M and those of R^d. Conversely, if d > m, then M can be smoothly embedded in R^d for sufficiently large d (by the Whitney embedding theorem, any smooth real m-dimensional manifold can be smoothly embedded in R^{2m}), such that f|_M is a homeomorphism onto its image f(M). Yet, since dim f(M) = m < d, random points taken in the latent space will most likely not lie on f(M), resulting in poorly reconstructed samples.
The VAE tries to solve this problem by forcing M to be mapped into an approximate posterior distribution that has support in the entire latent space R^d. Clearly, this approach is bound to fail, since the two spaces have fundamentally different structures. This can produce two behaviors: first, the VAE could simply smooth the original embedding, leaving most of the latent space empty and leading to bad samples. Second, if we increase the KL term, the encoder will be pushed to occupy the entire latent space, but this will create instability and discontinuity, affecting the convergence of the model. To validate this intuition we performed a small proof-of-concept experiment using the circle dataset described above, which is visualized in Figure 1. Note that, as expected, the auto-encoder in Figure 1(b) mostly recovers the original latent space of Figure 1(a), as there are no distributional restrictions. In Figure 1(c) we clearly observe for the N-VAE that points collapse around the origin due to the KL term, which is much less pronounced in Figure 1(d), where its contribution is scaled down. Lastly, the S-VAE almost perfectly recovers the original circular latent space. The observed behavior confirms our intuition.
To solve this problem the best option would be to directly specify a latent space homeomorphic to M and define distributions on it. However, for real data, discovering the structure of M will often be a difficult inference task. Nevertheless, we believe this shows that investigating VAE architectures that map to posterior distributions defined on manifolds other than Euclidean space is a topic worth exploring.
The von Mises-Fisher (vMF) distribution is often seen as the analogue of the Gaussian distribution on a hypersphere. Analogous to a Gaussian, it is parameterized by μ, indicating the mean direction, and κ ≥ 0, the concentration around μ. For the special case of κ = 0, the vMF reduces to the uniform distribution on the hypersphere. The probability density function of the vMF distribution for a random unit vector z ∈ S^{m−1} is then defined as

q(z; μ, κ) = C_m(κ) exp(κ μᵀz),    C_m(κ) = κ^{m/2−1} / ((2π)^{m/2} I_{m/2−1}(κ)),

where ‖μ‖ = 1, C_m(κ) is the normalizing constant, and I_v denotes the modified Bessel function of the first kind at order v.
As previously emphasized, one of the main advantages of using the vMF distribution as an approximate posterior is that we are able to place a uniform prior on the latent space. The KL divergence term to be optimized is

KL(vMF(μ, κ) ‖ U(S^{m−1})) = κ I_{m/2}(κ) / I_{m/2−1}(κ) + log C_m(κ) + log(2 π^{m/2} / Γ(m/2));

see Appendix B for the complete derivation. Notice that since the KL term does not depend on μ, this parameter is only optimized through the reconstruction term. The expression above cannot be handled by automatic differentiation packages because of the modified Bessel function in C_m(κ). Thus, to optimize this term we derive the gradient with respect to the concentration parameter κ,
where the modified Bessel functions can be computed without numerical instabilities using the exponentially scaled modified Bessel function.
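As a concrete illustration, the vMF-to-uniform KL (a standard closed form; see Appendix B) can be evaluated stably with SciPy's exponentially scaled Bessel function `scipy.special.ive`. The function name below is my own; the formula is the standard one:

```python
import numpy as np
from scipy.special import ive, gammaln

def vmf_uniform_kl(kappa, m):
    # KL(vMF(mu, kappa) || U(S^{m-1})); note it is independent of mu.
    # ive(v, k) = iv(v, k) * exp(-k): the exp(-k) factors cancel in the
    # Bessel ratio and are added back explicitly in the log term.
    kappa = np.asarray(kappa, dtype=np.float64)
    ratio = ive(m / 2, kappa) / ive(m / 2 - 1, kappa)
    log_cm = ((m / 2 - 1) * np.log(kappa)
              - (m / 2) * np.log(2 * np.pi)
              - (np.log(ive(m / 2 - 1, kappa)) + kappa))      # log C_m(kappa)
    log_area = np.log(2) + (m / 2) * np.log(np.pi) - gammaln(m / 2)  # log |S^{m-1}|
    return kappa * ratio + log_cm + log_area
```

The KL vanishes as κ → 0 (the vMF becomes uniform) and grows monotonically with κ.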
To sample from the vMF we follow the procedure of Ulrich (1984), outlined in Algorithm 1. We first sample from a vMF with modal vector e1 = (1, 0, ..., 0). Since the vMF density is uniform on all the (m−2)-dimensional sub-hyperspheres {z ∈ S^{m−1} : e1ᵀz = w}, the sampling technique reduces to sampling the scalar value w from the univariate marginal density g(w | κ, m) ∝ exp(κw)(1 − w²)^{(m−3)/2}, using an acceptance-rejection scheme. After obtaining a sample from the vMF with modal vector e1, an orthogonal transformation U is applied such that the transformed sample is distributed according to the vMF with modal vector μ. This can be achieved using a Householder reflection such that U e1 = μ. A more in-depth explanation of the sampling technique can be found in Appendix A.
It is worth noting that the sampling technique does not suffer from the curse of dimensionality, as the acceptance-rejection procedure is only applied to a univariate distribution. Moreover, in the case of m = 3, the density reduces to g(w | κ, 3) ∝ exp(κw), which can be sampled directly via its inverse CDF without rejection.
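For intuition, here is a minimal NumPy sketch of the m = 3 case: w is drawn by inverse-CDF (no rejection), a tangential angle is drawn uniformly, and a Householder reflection rotates the sample from e1 to μ. The function name and helper structure are my own, not the paper's implementation:

```python
import numpy as np

def sample_vmf_s2(mu, kappa, rng):
    # Inverse-CDF sample of w for g(w) ∝ exp(kappa * w) on [-1, 1] (m = 3).
    u = rng.uniform()
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # Uniform direction on the circle orthogonal to e1.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(max(0.0, 1.0 - w * w))
    z = np.array([w, s * np.cos(theta), s * np.sin(theta)])
    # Householder reflection H = I - 2 u uᵀ mapping e1 to mu.
    u_vec = np.array([1.0, 0.0, 0.0]) - mu
    norm = np.linalg.norm(u_vec)
    if norm > 1e-12:
        u_vec /= norm
        z = z - 2.0 * np.dot(u_vec, z) * u_vec
    return z
```

For large κ the samples concentrate tightly around μ, as expected.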
While the reparameterization trick is easily implemented in the normal case, it can unfortunately only be applied to a handful of distributions. However, a recent technique introduced by Naesseth et al. (2017) extends the reparameterization trick to the wide class of distributions that can be simulated using rejection sampling. Dropping the dependence on x for simplicity, assume the approximate posterior is of the form q(z; θ) and that it can be sampled by making proposals from r(z; θ). If the proposal distribution can be reparameterized, we can still perform the reparameterization trick. Let ε ~ s(ε), and let z = h(ε, θ) be a reparameterization of the proposal distribution r(z; θ). Performing the reparameterization trick for q is made possible by the fundamental lemma proven in (Naesseth et al., 2017): let f be any measurable function and π(ε; θ) the distribution of the accepted sample ε. Then:

E_{π(ε; θ)}[f(h(ε, θ))] = E_{q(z; θ)}[f(z)].
Then the gradient can be taken using the log derivative trick:
However, in the case of the vMF a further step is required. After performing the transformation h and accepting/rejecting the sample, we sample another random variable v, and then apply a transformation T(ε, v; θ), such that z = T(ε, v; θ) is distributed as the approximate posterior (in our case a vMF). Effectively this entails applying another reparameterization after the acceptance/rejection step. To still be able to perform the reparameterization, we show that Lemma 3.4 fundamentally holds in this case as well. Let f be any measurable function and π(ε; θ) the distribution of the accepted sample ε. Also let v ~ p(v), and let T(ε, v; θ) be a transformation depending on the parameters θ such that if z = T(ε, v; θ) with ε ~ π(ε; θ), then z ~ q(z; θ):
See Appendix C. ∎
The majority of VAE extensions focus on increasing the flexibility of the approximate posterior. This is usually achieved through normalizing flows (Rezende and Mohamed, 2015), a class of invertible transformations applied sequentially to an initial reparameterizable density, allowing for more complex posteriors. Normalizing flows can be considered orthogonal to our approach: while they allow for a more flexible posterior, they do not modify the standard normal prior assumption. In Gemici et al. (2016) a first attempt is made to extend normalizing flows to Riemannian manifolds. However, as the method relies on the existence of a diffeomorphism between the manifold and Euclidean space, it is unsuited for hyperspheres.
One approach to obtaining a more flexible prior is to use a simple mixture of Gaussians (MoG) prior (Dilokthanakul et al., 2016). The recently introduced VampPrior model (Tomczak and Welling, 2018) outlines several advantages over the MoG and instead learns a more flexible prior by expressing it as a mixture of approximate posteriors. A non-parametric prior is proposed by Nalisnick and Smyth (2017), utilizing a truncated stick-breaking process. In contrast to these approaches, we aim at using a non-informative prior to simplify inference.
The closest approach to ours is a VAE with a vMF distribution in the latent space, used for a sentence-generation task by Guu et al. (2018). While formally cast as a variational approach, the proposed model does not reparameterize and learn the concentration parameter κ, instead treating it as a constant that remains the same for every approximate posterior. Critically, as indicated in Equation 5, the KL divergence term only depends on κ; leaving κ constant therefore means the KL divergence term in the loss is never explicitly optimized. The method then only optimizes the reconstruction error, adding vMF noise to the encoder output in the latent space to still allow generation. Moreover, using a fixed global κ for all approximate posteriors severely limits the flexibility and expressiveness of the model.
Table 1: Summary of results (mean and standard deviation over 10 runs) of the unsupervised model on MNIST. RE and KL correspond respectively to the reconstruction and the KL part of the ELBO. Best results are highlighted only if they passed a Student's t-test.
In Liu and Zhu (2018), a general model to perform Bayesian inference on Riemannian manifolds is proposed. Following other Stein-related approaches, the method does not explicitly define a posterior density but approximates it with a number of particles. Despite its generality and flexibility, it requires the choice of a kernel on the manifold and multiple particles to obtain a good approximation of the posterior distribution. The former is not necessarily straightforward, while the latter quickly becomes computationally infeasible.
Another approach, by Nickel and Kiela (2017), capitalizes on the hierarchical structure present in some data types. By learning the embeddings for a graph in a non-Euclidean, negative-curvature hyperbolic space, they show this topology has clear advantages over embedding these objects in a Euclidean space. Although they did not use a VAE-based approach, that is, they did not build a probabilistic generative model of the data interpreting the embeddings as latent variables, this work shows the merit of explicitly adjusting the choice of latent topology to the data used.
As noted before, a distinction must be made between models dealing with the challenges of intrinsically hyperspherical data like omnidirectional video, and those attempting to exploit some latent hyperspherical manifold. A recent example of the first can be found in Cohen et al. (2018), where spherical CNNs are introduced. While flattening a spherical image produces unavoidable distortions, the newly defined convolutions take into account its geometrical properties.
The most general implementation of the second model type was proposed by Gopal and Yang (2014), who introduced a suite of models to improve clustering performance of high-dimensional data based on mixtures of vMF distributions. They showed that reducing an object representation to its directional components increases clusterability over standard methods like k-Means or Latent Dirichlet Allocation (Blei et al., 2003).
Specific applications of the vMF can further be found ranging from computer vision, where it is used to infer structure from motion (Guan and Smith, 2017) in spherical video and structure from texture (Wilson et al., 2014), to natural language processing, where it is utilized in text analysis (Banerjee et al., 2003, 2005) and topic modeling (Banerjee and Basu, 2007; Reisinger et al., 2010).
Additionally, modeling data by restricting it to a hypersphere provides some natural regularizing properties, as noted by Liu et al. (2017). Finally, Aytekin et al. (2018) show on a variety of deep auto-encoder models that adding L2 normalization to the latent space during training, i.e. forcing the latent space onto a hypersphere, improves clusterability.
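The L2-normalization idea mentioned above is a one-line operation; here is a minimal sketch (the function name is mine, not from the cited work):

```python
import numpy as np

def project_to_sphere(z, eps=1e-8):
    # Force latent codes onto the unit hypersphere via L2 normalization;
    # eps guards against division by zero for (near-)zero vectors.
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)
```

Unlike the S-VAE, this is a deterministic constraint on the latent codes rather than a probabilistic model with a hyperspherical posterior.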
In this section, we first perform a series of experiments to investigate the theoretical properties of the proposed S-VAE compared to the N-VAE. In a second experiment, we show how S-VAEs can be used in semi-supervised tasks to create a more separable latent representation to enhance classification. In the last experiment, we show that the S-VAE indeed presents a promising alternative to N-VAEs for data with a non-Euclidean latent representation of low dimensionality, on a link prediction task for three citation networks. All architecture and hyperparameter details are given in Appendix F.
In this first experiment we build on the motivation developed in Subsection 2.3 by confirming, with a synthetic data example, the difference in behavior of the S-VAE and N-VAE in recovering latent hyperspheres. We first generate samples from a mixture of three vMFs on the circle S^1, as shown in Figure 1(a), which are subsequently mapped into a higher-dimensional space by applying a noisy, non-linear transformation. After this, we in turn train an auto-encoder, an N-VAE, and an S-VAE. We further investigate the behavior of the N-VAE by training a model with a scaled-down KL divergence.
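A synthetic dataset of this kind can be sketched as follows. The exact transformation and noise levels used in the paper are not reproduced here; the ambient dimension, the tanh map, and the wrapped-normal stand-in for a concentrated vMF on the circle are all illustrative assumptions:

```python
import numpy as np

def make_circle_dataset(n, rng, ambient_dim=100):
    # Mixture of three concentrated distributions on S^1, approximated here
    # by wrapped normals around three equally spaced mean directions.
    centers = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    comp = rng.integers(0, 3, size=n)
    angles = centers[comp] + rng.normal(0.0, 0.3, size=n)
    z = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # points on S^1
    # Noisy, non-linear embedding into the higher-dimensional ambient space.
    W = rng.normal(size=(2, ambient_dim))
    x = np.tanh(z @ W) + 0.05 * rng.normal(size=(n, ambient_dim))
    return z, x
```

The models are then trained on x alone, and the recovered two-dimensional latent spaces are compared to the ground-truth circle z.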
The resulting latent spaces, displayed in Figure 1, clearly confirm the intuition built in Subsection 2.3. As expected, in Figure 1(b) the auto-encoder is perfectly capable of embedding the original underlying data structure in low dimensions. However, most parts of the latent space are not occupied by points, critically affecting the ability to generate meaningful samples.
In the N-VAE setting we observe two types of behavior, summarized by Figures 1(c) and 1(d). In the first, we observe that if the prior is too strong it will force the posterior to match the prior shape, concentrating the samples in the center. However, this prevents the N-VAE from correctly representing the true shape of the data and creates instability problems for the decoder around the origin. On the contrary, if we scale down the KL term, we observe that the samples from the approximate posterior maintain a shape that reflects the original structure smoothed with Gaussian noise. However, as the approximate posterior now differs strongly from the prior, obtaining meaningful samples from the latent space again becomes problematic.
The S-VAE, on the other hand, almost perfectly recovers the original dataset structure, while the samples from the approximate posterior closely match the prior distribution. This simple experiment confirms the intuition that having a prior matching the true latent structure of the data is crucial in constructing a correct latent representation that preserves the ability to generate meaningful samples.
To compare the behavior of the S-VAE and N-VAE on a data set that does not have a clear hyperspherical latent structure, we evaluate both models on a reconstruction task using dynamically binarized MNIST (Salakhutdinov and Murray, 2008). We analyze the ELBO, KL, negative reconstruction error, and marginal log-likelihood (LL) for both models on the test set. The LL is estimated using importance sampling with 500 sample points (Burda et al., 2016).
Results are shown in Table 1. We first note that in terms of negative reconstruction error the S-VAE outperforms the N-VAE in all dimensions. Since the S-VAE uses a uniform prior, its KL divergence increases more strongly with dimensionality, which results in a higher ELBO for the N-VAE. However, in terms of log-likelihood (LL) the S-VAE clearly has an edge in low dimensions and performs comparably to the N-VAE in higher dimensions. This empirically confirms the hypothesis of Subsection 2.2, showing the positive effect of a uniform prior in low dimensions. In the absence of any pull towards the origin, the data is able to cluster naturally, utilizing the entire latent space, as can be observed in Figure 2. Note that in Figure 2(a) all mass is concentrated around the center, since the prior mean is zero. Conversely, in Figure 2(b) all available space is evenly covered due to the uniform prior, resulting in more separable clusters for the S-VAE than for the N-VAE. However, as dimensionality increases, the Gaussian distribution starts to approximate a hypersphere, while its posterior becomes more expressive than the vMF due to its higher number of variance parameters. Simultaneously, as described in Subsection 3.5, the surface area of the hypersphere starts to collapse, limiting the available space.
In Figures 7 and 8 of Appendix G, we present randomly generated samples from the two models. Moreover, in Figure 9 of Appendix G, we show 2-dimensional manifolds for the two models. Interestingly, the manifold given by the S-VAE indeed results in a latent space where digits occupy the entire space and there is a sense of continuity from left to right.
Having observed the S-VAE's ability to increase the clusterability of data points in the latent space, we further investigate this property using a semi-supervised classification task. For this purpose we re-implemented the M1 and M1+M2 models described in Kingma et al. (2014), and evaluate the classification accuracy of the S-VAE and the N-VAE on dynamically binarized MNIST. In the M1 model, a classifier utilizes the latent features obtained using a VAE as in Experiment 5.2. The M1+M2 model is constructed by stacking the M2 model on top of M1, where M2 is the result of augmenting the VAE by introducing a partially observed class variable y, and combining the ELBO and classification objectives. This concatenated model is trained end-to-end. (It is worth noting that in the original implementation by Kingma et al. (2014) the stacked model did not converge well using end-to-end training; the extracted features of the M1 model were instead used as inputs for the M2 model.)
This last model also allows for a combination of the two topologies due to the presence of two distinct latent variables, z1 and z2. Since in the M2 latent space the class assignment is expressed by the variable y, while z2 only needs to capture the style, it naturally follows that the N-VAE is more suited for this latent variable due to its higher number of variance parameters. Hence, besides comparing the S-VAE against the N-VAE, we additionally run experiments for the M1+M2 model by modeling z1 and z2 with a vMF and a normal distribution, respectively.
As can be seen in Table 2, for M1 the S-VAE outperforms the N-VAE in all dimensions up to the highest considered. This result is amplified for a low number of observed labels. Note that for both models absolute performance drops as the dimensionality increases, since the k-NN classifier used suffers from the curse of dimensionality. Besides reconfirming the superiority of the S-VAE in low dimensions, its better performance than the N-VAE in higher dimensions was unexpected. This indicates that although the log-likelihoods might be comparable (see Table 1) for higher dimensions, the S-VAE latent space better captures the cluster structure.
In the concatenated model M1+M2, we first observe in Table 3 that either the pure S-VAE or the S+N-VAE model yields the best results, and the S-VAE almost always outperforms the N-VAE. Our hypothesis regarding the merit of an S+N-VAE model is further confirmed by its stable, strong performance across all dimensions. Furthermore, the clear edge in clusterability of the S-VAE for a low-dimensional z1, as already observed in Table 2, is again evident. As the dimensionality of z1 increases, the accuracy of the N-VAE improves, reducing the performance gap with the S-VAE. As previously noticed, the S-VAE performance drops at the highest dimensionality, with the best result being obtained at an intermediate dimension. In fact, it is worth noting that in this setting the S-VAE obtains results comparable to the original settings of Kingma et al. (2014), while needing a considerably smaller latent space. Finally, the end-to-end trained S+N-VAE model is able to reach a significantly higher classification accuracy than the 96.7% originally reported by Kingma et al. (2014).
The M1+M2 model allows for conditional generation. Similarly to Kingma et al. (2014), we set the style variable z2 to the value inferred from a test image by the inference network, and then varied the class label y. In Figure 10 of Appendix H we observe that the model is able to disentangle the style from the class.
In this experiment, we aim at demonstrating the ability of the S-VAE to learn meaningful embeddings of nodes in a graph, showing the advantages of embedding objects in a non-Euclidean space. We test the hyperspherical reparameterization on the recently introduced Variational Graph Auto-Encoder (VGAE) (Kipf and Welling, 2016), a VAE model for graph-structured data. We perform training on a link prediction task on three popular citation network datasets (Sen et al., 2008): Cora, Citeseer, and Pubmed.
Dataset statistics and further experimental details are summarized in Appendix F.3. The models are trained in an unsupervised fashion on a masked version of these datasets where some of the links have been removed. All node features are provided and efficacy is measured in terms of average precision (AP) and area under the ROC curve (AUC) on a test set of previously removed links. We use the same training, validation, and test splits as in Kipf and Welling (2016), i.e. we assign 5% of links for validation and 10% of links for testing.
In Table 4, we show that our model outperforms the N-VGAE baseline on two out of the three datasets by a significant margin. The log-probability of a link is computed as the dot product of two embeddings; on a hypersphere, this can be interpreted as the cosine similarity between vectors. Indeed, we find that the choice of a dot-product scoring function for link prediction is problematic in combination with a normal distribution on the latent space. If embeddings are close to the zero-center, noise during training can have a large destabilizing effect on the angle information between two embeddings. In practice, the model finds a solution where embeddings are "pushed" away from the zero-center, as demonstrated in Figure 3(a). This counteracts the pull towards the center arising from the standard prior and can overall lead to poor modeling performance. By constraining the embeddings to the surface of a hypersphere, this effect is mitigated, and the model can find a good separation of the latent clusters, as shown in Figure 3(b).
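The dot-product decoder described above can be sketched in a few lines; for unit-norm embeddings the logit equals the cosine similarity, so the score depends only on the angle between nodes (the function name below is mine):

```python
import numpy as np

def link_probs(Z):
    # VGAE-style decoder: p(A_ij = 1) = sigmoid(z_i . z_j).
    # For embeddings on the unit hypersphere, Z @ Z.T is the matrix of
    # cosine similarities between all pairs of nodes.
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))
```

With zero-centered Gaussian embeddings, small perturbations of near-origin vectors flip these angles drastically, which is the instability discussed above.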
On Pubmed, we observe that the S-VGAE converges to a lower score than the N-VGAE. The Pubmed dataset is significantly larger than Cora and Citeseer, and hence more complex. The N-VGAE has a larger number of variance parameters for the posterior distribution, which might play an important role in better modeling the relationships between nodes. We further hypothesize that not all graphs are necessarily better embedded in a hyperspherical space, and that this depends on fundamental topological properties of the graph. For instance, the already mentioned work of Nickel and Kiela (2017) shows that hyperbolic space is better suited for graphs with a hierarchical, tree-like structure. These considerations prefigure an interesting research direction that we will explore in future work.
With the S-VAE we set an important first step in the exploration of hyperspherical latent representations for variational auto-encoders. Through various experiments, we have shown that S-VAEs have a clear advantage over N-VAEs for data residing on a known hyperspherical manifold, and are competitive with or surpass N-VAEs for data with a non-obvious hyperspherical latent representation in lower dimensions. Specifically, we demonstrated that S-VAEs improve separability in semi-supervised classification and that they are able to improve results of state-of-the-art link prediction models on citation graphs, merely by changing the prior and posterior distributions as a simple drop-in replacement.
We believe that the presented research paves the way for various promising areas of future work, such as exploring more flexible approximate posterior distributions through normalizing flows on the hypersphere, or hierarchical mixture models combining hyperspherical and hyperplanar spaces. Further research should be done on increasing the performance of S-VAEs in higher dimensions; one possible solution could be to dynamically learn the radius of the latent hypersphere in a fully Bayesian setting.
We would like to thank Rianne van den Berg, Jonas Köhler, Pim de Haan, Taco Cohen, Marco Federici, and Max Welling for insightful discussions. T.K. is supported by SAP SE Berlin. J.M.T. was funded by the European Commission within the Marie Skłodowska-Curie Individual Fellowship (Grant No. 702666, ”Deep learning and Bayesian inference for medical imaging”).
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286.
Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In ICML, pages 872–879.
The general algorithm for sampling from a vMF has been outlined in Algorithm 1. The exact form of the distribution of the univariate distribution is:
Samples from this distribution are drawn using an acceptance/rejection algorithm. The complete procedure is described in Algorithm 2. The reflection (see Algorithm 3 for details) simply finds an orthonormal transformation that maps the modal vector $e_1$ to $\mu$. Since an orthonormal transformation preserves distances, all points on the hypersphere remain on its surface after the mapping. Notice that even the reflection can be executed in $O(m)$ by rearranging the terms.
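As an illustration, the acceptance/rejection step for $\omega$ and the Householder reflection can be sketched together as follows. This is a minimal NumPy sketch based on Wood's (1994) rejection sampler; the function name, signature, and structure are our own choices and not the paper's reference code.

```python
import numpy as np

def sample_vmf(mu, kappa, n, rng=None):
    """Draw n samples from vMF(mu, kappa) on the unit hypersphere S^{m-1}.

    The scalar component omega = mu . z is drawn by rejection sampling
    (Wood, 1994); the sample is then rotated so that the mode e_1 maps
    to mu via a Householder reflection, which preserves the sphere."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    m = mu.size
    # Envelope constants for the marginal density of omega
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (m - 1)**2)) / (m - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (m - 1) * np.log(1 - x0**2)
    samples = np.empty((n, m))
    for i in range(n):
        while True:  # accept/reject the scalar component omega
            z = rng.beta((m - 1) / 2, (m - 1) / 2)
            w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
            u = rng.uniform()
            if kappa * w + (m - 1) * np.log(1 - x0 * w) - c >= np.log(u):
                break
        # Tangential component: uniform direction on the (m-2)-sphere
        v = rng.normal(size=m - 1)
        v /= np.linalg.norm(v)
        z_e1 = np.concatenate(([w], np.sqrt(1 - w**2) * v))
        # Householder reflection mapping the mode e_1 onto mu, O(m) per sample
        e1 = np.zeros(m)
        e1[0] = 1.0
        d = e1 - mu
        norm_d = np.linalg.norm(d)
        if norm_d < 1e-12:
            samples[i] = z_e1  # mu is already e_1; no reflection needed
        else:
            d /= norm_d
            samples[i] = z_e1 - 2 * d * (d @ z_e1)
    return samples
```

For concentrated distributions (large $\kappa$) the accepted $\omega$ stays close to 1, so samples cluster tightly around $\mu$.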
The KL divergence between a von Mises-Fisher distribution $\mathrm{vMF}(\mu, \kappa)$ and a uniform distribution on the hypersphere (one divided by the surface area of $\mathcal{S}^{m-1}$) is:

$\mathrm{KL}\left(\mathrm{vMF}(\mu,\kappa)\,\|\,U(\mathcal{S}^{m-1})\right) = \kappa\,\frac{\mathcal{I}_{m/2}(\kappa)}{\mathcal{I}_{m/2-1}(\kappa)} + \log C_m(\kappa) - \log\left(\frac{\Gamma(m/2)}{2\pi^{m/2}}\right),$

where $C_m(\kappa) = \kappa^{m/2-1} / \left((2\pi)^{m/2}\,\mathcal{I}_{m/2-1}(\kappa)\right)$ is the vMF normalizing constant and $\mathcal{I}_v$ denotes the modified Bessel function of the first kind.
Notice that we can use the exponentially scaled modified Bessel function for numerical stability.
Let $f$ be any measurable function and $g(\omega \mid \theta)$ the distribution of the accepted sample. Also let $\varepsilon \sim s(\varepsilon)$, and $h(\varepsilon, \theta)$ a transformation that depends on the parameters $\theta$ such that if $\varepsilon \sim \pi(\varepsilon \mid \theta)$ with $\omega = h(\varepsilon, \theta)$, then $\omega \sim g(\omega \mid \theta)$:
Using the same argument employed by Naesseth et al. (2017), we can apply the change of variables $\omega = h(\varepsilon, \theta)$ and rewrite the expression as:
where in (*) we applied the change of variables $\omega = h(\varepsilon, \theta)$. ∎
where $g_{\mathrm{rep}}$ is the reparameterization term and $g_{\mathrm{cor}}$ the correction term. Since $h(\varepsilon, \theta)$ is invertible in $\varepsilon$, Naesseth et al. (2017) show that $g_{\mathrm{cor}}$ simplifies to:
In our specific case we want to take the gradient w.r.t. the variational parameters of the expression:
The gradient can be computed using Lemma 3.4 and the subsequent gradient derivation. As specified in Section 3.4 we optimize unbiased Monte Carlo estimates of the gradient. Therefore, having fixed one datapoint and sampled $\varepsilon$, the gradient is:
where the first term is simply the gradient of the reconstruction loss w.r.t. the latent sample, and can be easily handled by automatic differentiation packages.
Regarding the KL term, we notice that it does not depend on $\mu$; hence its gradient w.r.t. $\mu$ is zero, and all the following calculations will be only w.r.t. $\kappa$. We therefore have:
So, putting everything together we have:
The remaining term can then be computed by automatic differentiation packages.
For both the encoder and the decoder we use MLPs with 2 hidden layers of [256, 128] and [128, 256] hidden units, respectively. We trained until convergence using early stopping with a look-ahead of 50 epochs. We used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-3 and mini-batches of size 64. Additionally, we used a linear warm-up for 100 epochs (Bowman et al., 2016). The weights of the neural networks were initialized according to (Glorot and Bengio, 2010).
For M1 we reused the trained models of the previous experiment, and used k-nearest neighbors (k-NN) as a classifier. In the N-VAE case we used the Euclidean distance as a distance metric; for the S-VAE the geodesic distance was employed. The performance was evaluated for different numbers of observed labels.
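The only difference between the two evaluations is the distance metric: Euclidean for the N-VAE's flat latent space, and the great-circle (geodesic) distance $\arccos(z_1 \cdot z_2)$ for the S-VAE's unit-norm codes. A minimal NumPy sketch of such a classifier (names and the default $k$ are our own, not the paper's):

```python
import numpy as np

def knn_predict(train_z, train_y, test_z, k=5, geodesic=True):
    """k-NN classification on latent codes, as in the M1 evaluation.

    With geodesic=True, codes are assumed to lie on the unit hypersphere
    (S-VAE); otherwise plain Euclidean distance is used (N-VAE)."""
    if geodesic:
        # Great-circle distance; clip guards against rounding outside [-1, 1]
        d = np.arccos(np.clip(test_z @ train_z.T, -1.0, 1.0))
    else:
        d = np.linalg.norm(test_z[:, None, :] - train_z[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    # Majority vote among the k nearest neighbours
    return np.array([np.bincount(row).argmax() for row in votes])
```

Because the geodesic distance is a monotone function of the dot product on the unit sphere, ranking neighbours by $\arccos(z_1 \cdot z_2)$ is equivalent to ranking by cosine similarity.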
The stacked M1+M2 model uses the same architecture as outlined by Kingma et al. (2014), where the MLPs utilized in the generative and inference models are constructed using a single hidden layer, each with 500 hidden units. The dimensionalities of the two latent spaces were both varied. We used the rectified linear unit (ReLU) as an activation function. Training was continued until convergence using early stopping with a look-ahead of 50 epochs on the validation set. We used the Adam optimizer with a learning rate of 1e-3 and mini-batches of size 100. All neural network weights were initialized according to (Glorot and Bengio, 2010). The warm-up period was set to 100 epochs, and the parameter used to scale the classification loss was selected via grid search. Crucially, we train this model end-to-end instead of by parts.
We train a Variational Graph Auto-Encoder (VGAE) model, a state-of-the-art link prediction model for graphs, as proposed in Kipf and Welling (2016). For a fair comparison, we use the same architecture as in the original paper and only change the way the latent space is generated, using the vMF distribution instead of a normal distribution. All models are trained for 200 epochs on Cora and Citeseer, and 400 epochs on Pubmed, with the Adam optimizer. The optimal learning rate, dropout rate, and number of latent dimensions are determined via grid search based on validation AUC performance. For the S-VGAE, we omit one of these settings, as some of our experiments ran out of memory. The model is trained with a single hidden layer and with document features as input, as in Kipf and Welling (2016). The weights of the neural network were initialized according to (Glorot and Bengio, 2010). For testing, we report the performance of the model selected from the training epoch with the highest AUC score on the validation set. Different from Kipf and Welling (2016), we train both the N-VGAE and the S-VGAE models using negative sampling in order to speed up training, i.e., for each positive link we sample, uniformly at random, one negative link during every training epoch. All experiments are repeated 5 times, and we report mean and standard error values.
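The negative-sampling step described above can be sketched as follows. This is a minimal illustrative NumPy implementation under our own naming, not the authors' training code; it draws one uniform non-edge per positive edge and would be re-run at every epoch.

```python
import numpy as np

def sample_negative_edges(pos_edges, num_nodes, rng=None):
    """Uniform negative sampling: one non-edge per positive edge.

    `pos_edges` is an array of (i, j) node pairs for an undirected graph.
    Assumes the graph is sparse, so uniform resampling terminates quickly."""
    rng = np.random.default_rng() if rng is None else rng
    # Treat the graph as undirected: store both orientations of each edge
    existing = {tuple(e) for e in pos_edges} | {tuple(e[::-1]) for e in pos_edges}
    negatives = []
    for _ in range(len(pos_edges)):
        while True:
            i, j = rng.integers(num_nodes, size=2)
            # Reject self-loops and pairs that are actual edges
            if i != j and (i, j) not in existing:
                negatives.append((i, j))
                break
    return np.array(negatives)
```

Resampling the negatives every epoch means the model eventually sees many different non-edges, while each epoch stays balanced at one negative per positive link.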