Mixed-curvature Variational Autoencoders

by Ondrej Skopek, et al.

It has been shown that using geometric spaces with non-zero curvature instead of flat Euclidean spaces improves performance on a range of machine learning tasks for learning representations. Recent work has leveraged these geometries to learn latent variable models such as Variational Autoencoders (VAEs) in spherical and hyperbolic spaces with constant curvature. While these approaches work well on the particular kinds of data they were designed for, e.g. tree-like data for a hyperbolic VAE, no generic approach unifies all three models. We develop the Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant-curvature Riemannian manifolds, where the curvature of each component can be learned. This generalizes the Euclidean VAE to curved latent spaces: the model reduces to the Euclidean VAE as the curvatures of all latent-space components go to 0.
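The flat limit mentioned above can be made concrete with the curvature-dependent sine that underlies exponential maps and distances on constant-curvature spaces. The sketch below is illustrative and not taken from the paper; the function name `sin_k` is our own, and it simply shows that the spherical (K > 0) and hyperbolic (K < 0) branches both converge to the Euclidean identity as the curvature K goes to 0.

```python
import numpy as np

def sin_k(x, K):
    """Curvature-dependent sine.

    K > 0 (spherical):  sin(sqrt(K) * x) / sqrt(K)
    K < 0 (hyperbolic): sinh(sqrt(-K) * x) / sqrt(-K)
    K = 0 (Euclidean):  x itself
    """
    if K > 0:
        return np.sin(np.sqrt(K) * x) / np.sqrt(K)
    if K < 0:
        return np.sinh(np.sqrt(-K) * x) / np.sqrt(-K)
    return x

# All three branches agree in the flat limit K -> 0:
x = 0.5
spherical = sin_k(x, 1e-10)   # ~0.5
hyperbolic = sin_k(x, -1e-10) # ~0.5
euclidean = sin_k(x, 0.0)     # exactly 0.5
```

Because both curved branches are smooth in K and reduce to `x` at K = 0, a latent-space component with learnable curvature can interpolate continuously between hyperbolic, Euclidean, and spherical geometry during training.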




Related papers:

- Geometry of Deep Generative Models for Disentangled Representations
- Adversarial Autoencoders with Constant-Curvature Latent Manifolds
- Switch Spaces: Learning Product Spaces with Sparse Gating
- Curved Geometric Networks for Visual Anomaly Recognition
- Variational Autoencoders with Riemannian Brownian Motion Priors
- Wrapped Distributions on Homogeneous Riemannian Manifolds
- Multi-Step Prediction in Linearized Latent State Spaces for Representation Learning