Mixed-curvature Variational Autoencoders

11/19/2019
by Ondrej Skopek, et al.

It has been shown that using geometric spaces with non-zero curvature, instead of plain Euclidean spaces with zero curvature, improves performance on a range of machine learning tasks for learning representations. Recent work has leveraged these geometries to learn latent-variable models such as Variational Autoencoders (VAEs) in spherical and hyperbolic spaces with constant curvature. While these approaches work well on the particular kinds of data they were designed for, e.g. tree-like data for a hyperbolic VAE, no generic approach unifies all three geometries (Euclidean, spherical, and hyperbolic). We develop the Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant-curvature Riemannian manifolds, where the curvature of each component can be learned. This generalizes the Euclidean VAE to curved latent spaces, and the model reduces to the Euclidean VAE as the curvatures of all latent-space components go to 0.
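To illustrate how a single curvature parameter K can interpolate between hyperbolic (K < 0), Euclidean (K = 0), and spherical (K > 0) latent geometry, the sketch below shows the exponential map at the origin of a constant-curvature model and the corresponding geodesic distance. This is a plain-NumPy illustration, not the authors' implementation; the function names `exp0` and `dist` are hypothetical. As K approaches 0 the geodesic distance recovers the Euclidean one, mirroring how the mixed-curvature VAE reduces to the Euclidean VAE.

```python
import numpy as np

def exp0(v, K):
    """Exponential map at the origin of a constant-curvature space.

    v : tangent vector at the origin (an n-vector).
    K : curvature; K < 0 -> hyperboloid, K > 0 -> sphere, K == 0 -> Euclidean.
    Returns an (n+1)-vector in ambient coordinates (a leading 0 pads the
    Euclidean case so all three cases have the same shape).
    """
    if K == 0:
        return np.concatenate(([0.0], v))
    R = 1.0 / np.sqrt(abs(K))          # radius of the model space
    n = np.linalg.norm(v)
    if n == 0:
        return np.concatenate(([R], np.zeros_like(v)))  # the origin mu0
    u = v / n
    if K < 0:   # hyperboloid model: x0^2 - x1^2 - ... - xn^2 = R^2
        return np.concatenate(([R * np.cosh(n / R)], R * np.sinh(n / R) * u))
    else:       # sphere of radius R embedded in R^{n+1}
        return np.concatenate(([R * np.cos(n / R)], R * np.sin(n / R) * u))

def dist(x, y, K):
    """Geodesic distance between ambient points produced by exp0."""
    if K == 0:
        return np.linalg.norm(x - y)
    R = 1.0 / np.sqrt(abs(K))
    if K < 0:
        # Lorentzian inner product <x,y>_L = -x0*y0 + sum_i xi*yi
        ip = -x[0] * y[0] + np.dot(x[1:], y[1:])
        return R * np.arccosh(np.clip(-ip / R**2, 1.0, None))
    return R * np.arccos(np.clip(np.dot(x, y) / R**2, -1.0, 1.0))
```

For two tangent vectors, `dist(exp0(v1, K), exp0(v2, K), K)` with K near 0 is numerically close to `np.linalg.norm(v1 - v2)`, while strongly negative curvature stretches distances, which is the property that makes hyperbolic components suit tree-like data.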


Related research

02/19/2019 · Geometry of Deep Generative Models for Disentangled Representations
Deep generative models like variational autoencoders approximate the int...

12/11/2018 · Adversarial Autoencoders with Constant-Curvature Latent Manifolds
Constant-curvature Riemannian manifolds (CCMs) have been shown to be ide...

02/17/2021 · Switch Spaces: Learning Product Spaces with Sparse Gating
Learning embedding spaces of suitable geometry is critical for represent...

08/02/2022 · Curved Geometric Networks for Visual Anomaly Recognition
Learning a latent embedding to understand the underlying nature of data ...

02/12/2020 · Variational Autoencoders with Riemannian Brownian Motion Priors
Variational Autoencoders (VAEs) represent the given data in a low-dimens...

04/20/2022 · Wrapped Distributions on homogeneous Riemannian manifolds
We provide a general framework for constructing probability distribution...

09/02/2022 · Multi-Step Prediction in Linearized Latent State Spaces for Representation Learning
In this paper, we derive a novel method as a generalization over LCEs su...