Following the manifold hypothesis, in unsupervised generative learning we strive to recover a distribution on a (low-dimensional) latent manifold capable of explaining observed, high-dimensional data, e.g. images. One of the most popular frameworks to achieve this goal is the Variational Auto-Encoder (VAE) [13, 22], a latent variable model which combines variational inference and auto-encoding to directly optimize the parameters of some latent distribution. While originally restricted to 'flat' space using the classic Gaussian distribution, there has recently been a surge in research extending the VAE to distributions defined on manifolds with non-trivial topologies [4, 6, 16, 17, 21, 8, 7]. This is fruitful, as much data is not best represented by distributions on flat space, and forcing such a representation can lead to undesired 'manifold mismatch' behavior.
In [4], the authors propose a hyperspherical parameterization of the VAE using a von Mises-Fisher (vMF) distribution, demonstrating improved results over the especially poor pairing of the 'blob-like' Gaussian distribution with hyperspherical data. Surprisingly, they further show that these positive results extend to datasets without a clear hyperspherical interpretation, which they mostly attribute to the restricted surface area and the absence of a 'mean-biased' prior, as the uniform distribution is feasible in the compact, hyperspherical space. However, as dimensionality increases, performance begins to decrease. This could possibly be explained by taking a closer look at the vMF's functional form

$$ q(z \mid \mu, \kappa) = \mathcal{C}_m(\kappa) \exp\left(\kappa \mu^\top z\right), \qquad \mathcal{C}_m(\kappa) = \frac{\kappa^{m/2-1}}{(2\pi)^{m/2}\, I_{m/2-1}(\kappa)}, $$

where $z, \mu \in \mathcal{S}^{m-1}$, $\mathcal{C}_m(\kappa)$, a scalar, is the normalizing constant, and $I_v$ denotes the modified Bessel function of the first kind at order $v$. Note that the scalar concentration parameter $\kappa$ is fixed across all dimensions, severely limiting the distribution's expressiveness as dimensionality increases.
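As an illustration, the log-density can be evaluated stably in log-space. The following is a minimal sketch (the helper name is ours, not from any released code), using SciPy's exponentially scaled Bessel function so the normalizing constant stays finite for large $\kappa$:

```python
# Minimal sketch of the vMF log-density
#   log q(z | mu, kappa) = log C_m(kappa) + kappa * mu^T z,
# using ive(v, x) = I_v(x) * exp(-x) for numerical stability.
import numpy as np
from scipy.special import ive

def vmf_log_density(z, mu, kappa):
    m = len(mu)                                          # z, mu on S^{m-1} in R^m
    log_bessel = np.log(ive(m / 2 - 1, kappa)) + kappa   # log I_{m/2-1}(kappa)
    log_norm = (m / 2 - 1) * np.log(kappa) - (m / 2) * np.log(2 * np.pi) - log_bessel
    return log_norm + kappa * (mu @ z)
```

For $m = 3$ the normalizing constant reduces to $\kappa / (4\pi \sinh\kappa)$, which makes a convenient correctness check.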
2 Method: A Hyperspherical Product-Space
To improve on the vMF's limited per-dimension concentration flexibility we propose a simple idea: breaking up the single latent hypersphere into a concatenation of multiple independent hyperspherical distributions. Such a compositional construction increases flexibility through the addition of a new concentration parameter for each hypersphere, as well as providing the possibility of sub-structures forming. Given a hyperspherical random variable $z \in \mathcal{S}^{m-1} \subset \mathbb{R}^m$, we want to choose $d_1, \dots, d_k$ and $z_i \in \mathcal{S}^{d_i - 1}$ respectively s.t. $\sum_{i=1}^{k} d_i = m$, and $z = z_1 \oplus \dots \oplus z_k$, where $\oplus$ denotes concatenation. The probabilistic model becomes:

$$ p(x, z_1, \dots, z_k) = p(x \mid z_1, \dots, z_k)\, p(z_1, \dots, z_k) \overset{(*)}{=} p(x \mid z_1, \dots, z_k) \prod_{i=1}^{k} p(z_i), $$

which factorizes in (*) if we assume independence between the new sub-structures. Assuming conditional independence of the approximate posterior as well, i.e. $q(z \mid x) = \prod_{i=1}^{k} q(z_i \mid x)$, it can be shown (see Appendix A for the derivation) that the Kullback-Leibler divergence simplifies as

$$ \mathrm{KL}\left(q(z \mid x)\,\|\,p(z)\right) = \sum_{i=1}^{k} \mathrm{KL}\left(q(z_i \mid x)\,\|\,p(z_i)\right). $$
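This additivity is straightforward to verify numerically. The sketch below (illustrative code, not part of the original work) uses diagonal Gaussians as stand-ins for the vMF factors, since their KL is available in closed form, and compares a Monte Carlo estimate of the joint KL against the sum of per-factor terms:

```python
# Check that the KL of an independent product posterior against an
# independent prior equals the sum of per-factor KLs.
import numpy as np

def gauss_kl(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for 1-D Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

rng = np.random.default_rng(0)
mu_q = rng.normal(size=4)
var_q = rng.uniform(0.5, 2.0, size=4)

per_factor = [gauss_kl(mu_q[i], var_q[i], 0.0, 1.0) for i in range(4)]

# Monte Carlo estimate of the joint KL, E_q[ log q(z) - log p(z) ]:
z = mu_q + np.sqrt(var_q) * rng.normal(size=(200_000, 4))
log_q = -0.5 * (np.log(2 * np.pi * var_q) + (z - mu_q) ** 2 / var_q).sum(axis=1)
log_p = -0.5 * (np.log(2 * np.pi) + z ** 2).sum(axis=1)
mc_joint_kl = np.mean(log_q - log_p)   # approximately sum(per_factor)
```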
Given a single hypersphere and keeping the ambient space fixed, each additional 'break' exchanges a degree of freedom for a concentration parameter. In the base case of $\mathcal{S}^{k} \subset \mathbb{R}^{k+1}$, we can potentially support $k$ 'independent' feature dimensions, which must share a single concentration parameter $\kappa$, and hence are globally restricted in their flexibility per dimension. On the other hand, the moment we break $\mathcal{S}^{k}$ up into the Cartesian product $\mathcal{S}^{a} \times \mathcal{S}^{k-1-a}$, we 'lose' an independent dimension (or degree of freedom), but in exchange the two resulting sub-hyperspheres each share their concentration parameter over fewer dimensions, increasing flexibility (in the most extreme case, this will lead to a latent space of $(\mathcal{S}^1)^{\times n}$, which is equal to the $n$-Torus).
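The bookkeeping of this trade-off can be made explicit. The following sketch (our own illustrative helper, not the paper's code) counts degrees of freedom and concentration parameters for a given break-up of a fixed ambient space:

```python
# Bookkeeping for the fixed-ambient-space trade-off: a sub-hypersphere
# S^{d-1} occupies d ambient dimensions, contributes d - 1 degrees of
# freedom, and adds one concentration parameter kappa.
def breakup_stats(ambient_dims):
    """ambient_dims: list of d_i whose sum equals the fixed ambient size."""
    dof = sum(d - 1 for d in ambient_dims)   # total degrees of freedom
    kappas = len(ambient_dims)               # one kappa per sub-hypersphere
    return dof, kappas

print(breakup_stats([41]))       # single S^40 in R^41: 40 dof, 1 shared kappa
print(breakup_stats([3, 38]))    # one break: 39 dof, 2 kappas (a dof traded for a kappa)
print(breakup_stats([2] * 20))   # all circles (ambient R^40): 20 dof, 20 kappas, a torus
```

Each additional component costs exactly one degree of freedom and buys exactly one extra concentration parameter.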
The reason a vMF is uniquely suited for such a decomposition, as opposed to a Gaussian, is that a Gaussian with factorized variance is already equipped with a concentration parameter for each dimension. In the case of the vMF, however, which has only a single concentration parameter for all dimensions, we gain flexibility. This is an important distinction: while in both cases all dimensions are implicitly connected through the shared loss objective, in the case of the vMF this connection is amplified through the direct coupling of the shared concentration parameter.
The work closest to our model is that of [20] on structured VAEs [10, 12], which extends the work on VAEs with Gaussian mixture models of [18, 5, 11]. Partially following motivations similar to ours, the authors hypothesize and empirically show that structured compositionality encourages disentanglement. By working with Gaussian mixtures instead of single Gaussians, they circumvent the factorized single-Gaussian limitation described above. Another recent work proposing to break up a large, single latent representation into a composition of sub-structures, in the context of Bayesian optimization, is [19].
3 Experiments and Discussion
To test the ability of a hyperspherical product-space model to improve performance over its single-shell counterpart, we perform product-space interpolations, breaking up a single shell into an increasing number of independent components.
We conduct experiments on Static MNIST, Omniglot [14], and Caltech 101 Silhouettes [15], mostly following the experimental setup of [4], using a simple MLP encoder-decoder architecture with ReLU activations between layers. We train for 300 epochs using early stopping with a look-ahead of 50 epochs, and a linear warm-up scheme of 100 epochs as per [2, 23], during which the KL divergence is annealed from 0 to its full weight [9, 1]. Marginal log-likelihood is estimated using importance sampling with 500 sample points per data point [3], and we report the mean over three random seeds.
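The importance-sampling estimate takes the familiar form $\log p(x) \approx \log \frac{1}{S} \sum_{s} p(x, z_s)/q(z_s \mid x)$ with $z_s \sim q(z \mid x)$. A minimal sketch (a hypothetical helper, not the paper's code) computes it stably via log-sum-exp:

```python
# Importance-sampling estimate of the marginal log-likelihood:
#   log p(x) ~= logsumexp_s( log p(x, z_s) - log q(z_s | x) ) - log S.
import numpy as np
from scipy.special import logsumexp

def marginal_log_likelihood(log_p_xz, log_q_z):
    """log_p_xz[s] = log p(x, z_s), log_q_z[s] = log q(z_s | x), z_s ~ q."""
    log_w = log_p_xz - log_q_z              # log importance weights
    return logsumexp(log_w) - np.log(len(log_w))
```

With 500 samples per data point this follows the evaluation protocol described above; any log-probabilities fed to it would come from the trained model.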
Keeping in mind the flexibility trade-off described above, we analyze both the effect of keeping the total degrees of freedom fixed (increasing the ambient space dimensionality) and that of keeping the ambient space fixed (decreasing the degrees of freedom). We break up $\mathcal{S}^{40}$ into 2, 4, 6, 10, and 40 sub-spaces, respectively. In each break-up, we try a balanced, leveled, and unbalanced hyperspherical composition.
A summary of the best results for fixed ambient space is shown in Table 1, with a summary of the best results for fixed degrees of freedom and complete interpolations in Appendix B. Initial inspection shows that partially breaking up a single $\mathcal{S}^{40}$ hypersphere into a hyperspherical product-space indeed improves performance for all examined datasets. Diving deeper into the results, we find that both the number of breaks and the dimensional composition of these breaks strongly affect performance and learning stability.
A high number of breaks appears to negatively influence both performance and learning stability. Indeed, for most datasets the 'Torus' setting, i.e. full factorization into $\mathcal{S}^1$ components, proved too unstable to train to convergence. One explanation for this result could be that we omit the REINFORCE part of the vMF reparameterization during training (see [4], Appendix D for more details). While of very limited influence on a single hyperspherical distribution, the accumulated bias across many shells might lead to a non-trivial effect. On the other hand, adding as few as four breaks extends the model's expressivity enough to consistently outperform a single shell.
The balance of the subspace composition plays a key role as well. We find that when the subspaces are too unbalanced, e.g. $\mathcal{S}^{37}$ combined with three $\mathcal{S}^1$ shells, the network starts to 'choose' between subspace channels. Effectively, it will for example start encoding all information in the $\mathcal{S}^1$ shells and completely ignore the $\mathcal{S}^{37}$ shell, leading to an effective latent space of 3 degrees of freedom (for a more extended discussion on the interplay between balance and the KL divergence see Appendix B); see for example Fig. 2(a). On the contrary, better-balanced compositions appear capable of cleanly separating semantically meaningful features across shells, as displayed in Fig. 2(b).
4 Conclusion and Future Work
In summary, breaking up a single hypersphere into multiple components effectively increases concentration expressiveness, leading to more stable training and improved results. In future work we would like to investigate the possibility of learning an optimal break-up as opposed to fixing it a priori, as well as mixing sub-spaces with different topologies.
We would like to thank Luca Falorsi and Nicola De Cao for insightful discussions during the development of this work.
- (1) Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken ELBO. In ICML, pages 159–168, 2018.
- (2) Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Józefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10–21, 2016.
- (3) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.
- (4) Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. Hyperspherical Variational Auto-Encoders. UAI, 2018.
- (5) Nat Dilokthanakul, Pedro AM Mediano, Marta Garnelo, Matthew CH Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with Gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648, 2016.
- (6) Luca Falorsi, Pim de Haan, Tim R Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, and Taco S Cohen. Explorations in homeomorphic variational auto-encoding. ICML Workshop, 2018.
- (7) Luca Falorsi, Pim de Haan, Tim R Davidson, and Patrick Forré. Reparameterizing distributions on lie groups. AISTATS, 2019.
- (8) Mikhail Figurnov, Shakir Mohamed, and Andriy Mnih. Implicit reparameterization gradients. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, NeurIPS, pages 439–450. Curran Associates, Inc., 2018.
- (9) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR, 2017.
- (10) Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
- (11) Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In IJCAI, pages 1965–1972, 2017.
- (12) Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In NIPS, pages 2946–2954, 2016.
- (13) Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. CoRR, 2013.
- (14) Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
- (15) Benjamin Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In AISTATS, pages 509–516, 2010.
- (16) Emile Mathieu, Charline Le Lan, Chris J Maddison, Ryota Tomioka, and Yee Whye Teh. Hierarchical representations with Poincaré variational auto-encoders. NeurIPS, 2019.
- (17) Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, and Masanori Koyama. A differentiable gaussian-like distribution on hyperbolic space for gradient-based learning. arXiv preprint arXiv:1902.02992, 2019.
- (18) Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate inference for deep latent Gaussian mixtures. NIPS Workshop on Bayesian Deep Learning, volume 2, 2016.
- (19) Changyong Oh, Jakub M. Tomczak, Efstratios Gavves, and Max Welling. Combinatorial Bayesian optimization using graph representations. ICML Workshop, 2019.
- (20) Ulrich Paquet, Sumedh K Ghaisas, and Olivier Tieleman. A factorial mixture prior for compositional deep generative models. arXiv preprint arXiv:1812.07480, 2018.
- (21) Luis A. Pérez Rey, Vlado Menkovski, and Jacobus W. Portegies. Diffusion variational auto-encoders. arXiv preprint arXiv:1901.08991, 2019.
- (22) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, pages 1278–1286, 2014.
- (23) Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In NIPS, pages 3738–3746, 2016.
Appendix A Dimensionality Decomposition
Given a latent variable $z \in \mathcal{S}^{m-1} \subset \mathbb{R}^m$, we choose $d_1, \dots, d_k$ and $z_i \in \mathcal{S}^{d_i - 1}$ respectively s.t. $\sum_{i=1}^{k} d_i = m$, and $z = z_1 \oplus \dots \oplus z_k$, where $\oplus$ denotes concatenation. The probabilistic model becomes:

$$ p(x, z_1, \dots, z_k) = p(x \mid z_1, \dots, z_k)\, p(z_1, \dots, z_k) \overset{(*)}{=} p(x \mid z_1, \dots, z_k) \prod_{i=1}^{k} p(z_i), $$

which factorizes in (*) if we assume independence. Assuming conditional independence of the approximate posterior as well, i.e. $q(z \mid x) = \prod_{i=1}^{k} q(z_i \mid x)$, the Kullback-Leibler divergence simplifies as

$$
\mathrm{KL}\left(q(z \mid x)\,\|\,p(z)\right)
= \mathbb{E}_{q(z \mid x)}\left[\log q(z \mid x) - \log p(z)\right]
= \mathbb{E}_{q(z \mid x)}\left[\sum_{i=1}^{k} \log q(z_i \mid x) - \sum_{i=1}^{k} \log p(z_i)\right]
= \sum_{i=1}^{k} \mathbb{E}_{q(z_i \mid x)}\left[\log q(z_i \mid x) - \log p(z_i)\right]
= \sum_{i=1}^{k} \mathrm{KL}\left(q(z_i \mid x)\,\|\,p(z_i)\right),
$$

where the third equality uses that each term depends only on $z_i$ and the factors are independent.
Appendix B Supplementary Tables and Figures
Another way of understanding the importance of balance is by examining the form of the vMF KL divergence and its influence on the loss objective. In order to achieve high-quality reconstruction performance, it is necessary for the concentration parameter $\kappa$ to take on a high value. Given the uniform prior setting, in which the prior corresponds to $\kappa = 0$, this logically leads to an increase in the KL divergence. The crucial observation, however, is that the strength of the KL divergence also depends strongly on the dimensionality, as can be observed in Fig. 1. Hence, when learning over a product-space containing several low-dimensional components and a single high-dimensional component, if the reconstruction error can be made sufficiently low using the low-dimensional components, the optimal loss-minimization strategy is to set the concentration parameter of the largest component to 0, effectively ignoring it. A possible strategy to prevent this from happening could be to set separate KL-weighting parameters for each hyperspherical component; however, we fear that this would quickly blow up the hyperparameter search-space.
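The dimensionality dependence can be made concrete with the closed-form KL between a vMF and the uniform distribution on $\mathcal{S}^{m-1}$ (cf. [4]), $\mathrm{KL} = \kappa\, I_{m/2}(\kappa)/I_{m/2-1}(\kappa) + \log \mathcal{C}_m(\kappa) + \log A_m$, where $A_m = 2\pi^{m/2}/\Gamma(m/2)$ is the surface area. A numerical sketch (our code, not the paper's):

```python
# KL( vMF_m(mu, kappa) || U(S^{m-1}) ), computed stably with scaled Bessels.
import numpy as np
from scipy.special import ive, gammaln

def vmf_uniform_kl(m, kappa):
    ratio = ive(m / 2, kappa) / ive(m / 2 - 1, kappa)        # E[mu^T z]
    log_norm = ((m / 2 - 1) * np.log(kappa) - (m / 2) * np.log(2 * np.pi)
                - (np.log(ive(m / 2 - 1, kappa)) + kappa))    # log C_m(kappa)
    log_area = np.log(2.0) + (m / 2) * np.log(np.pi) - gammaln(m / 2)
    return kappa * ratio + log_norm + log_area

# For a fixed kappa, the achieved concentration E[mu^T z] drops sharply as m
# grows, so actually concentrating a high-dimensional shell demands a much
# larger kappa, and the KL (monotonically increasing in kappa) penalizes that.
for m in (3, 10, 40):
    print(m, ive(m / 2, 10.0) / ive(m / 2 - 1, 10.0), vmf_uniform_kl(m, 10.0))
```

For $m = 3$ the expression collapses to $\kappa(\coth\kappa - 1/\kappa) + \log(\kappa/\sinh\kappa)$, which serves as a correctness check.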
B.1 Fixed Ambient Space
B.2 Fixed Degrees of Freedom