Increasing Expressivity of a Hyperspherical VAE

10/07/2019
by   Tim R. Davidson, et al.

Learning suitable latent representations for observed high-dimensional data is an important research topic underlying many recent advances in machine learning. While the Gaussian distribution has traditionally been the go-to latent parameterization, a variety of recent works have successfully proposed manifold-valued latents. In one such work, Davidson et al. (2018) empirically show the potential benefits of using a hyperspherical von Mises-Fisher (vMF) distribution in low-dimensional settings. However, due to the vMF's unique distributional form, its expressivity in higher-dimensional spaces is limited: its single scalar concentration parameter leads to a 'hyperspherical bottleneck'. In this work we extend the usability of hyperspherical parameterizations to higher dimensions by using a product space of hyperspheres instead, showing improved results on a selection of image datasets.

Related research

Hyperspherical Variational Auto-Encoders (04/03/2018)
The Variational Auto-Encoder (VAE) is one of the most used unsupervised ...

GM-VAE: Representation Learning with VAE on Gaussian Manifold (09/30/2022)
We propose a Gaussian manifold variational auto-encoder (GM-VAE) whose l...

RENs: Relevance Encoding Networks (05/25/2022)
The manifold assumption for high-dimensional data assumes that the data ...

Ordering Dimensions with Nested Dropout Normalizing Flows (06/15/2020)
The latent space of normalizing flows must be of the same dimensionality...

Manifold Fitting: An Invitation to Statistics (04/16/2023)
While classical statistics has addressed observations that are real numb...

Beta DVBF: Learning State-Space Models for Control from High Dimensional Observations (11/02/2019)
Learning a model of dynamics from high-dimensional images can be a core ...
