A Geometric Perspective on Variational Autoencoders

by Clément Chadebec, et al.

This paper introduces a new interpretation of the Variational Autoencoder framework from a fully geometric point of view. We argue that vanilla VAE models naturally unveil a Riemannian structure in their latent space, and that taking these geometrical aspects into account can lead to better interpolations and an improved generation procedure. The proposed sampling method consists of sampling from the uniform distribution intrinsically derived from the learned Riemannian latent space, and we show that using this scheme can make a vanilla VAE competitive with, and even better than, more advanced versions on several benchmark datasets. Since generative models are known to be sensitive to the number of training samples, we also stress the method's robustness in the low-data regime.
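The uniform distribution intrinsic to a Riemannian latent space has density proportional to the Riemannian volume element, sqrt(det G(z)). As a minimal sketch of what sampling from it involves, the snippet below rejection-samples from such a distribution on a bounded region, using a toy position-dependent metric `metric(z)` as a stand-in (the paper derives the metric from the trained VAE; this toy metric, the box bounds, and the function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(z):
    """Hypothetical Riemannian metric G(z) in a 2-D latent space.
    In the paper the metric is learned from the VAE; here a simple
    position-dependent metric is used purely for illustration."""
    scale = 1.0 + np.sum(z**2)
    return scale * np.eye(2)

def volume_density(z):
    """Un-normalised density of the intrinsic uniform distribution,
    proportional to sqrt(det G(z))."""
    return np.sqrt(np.linalg.det(metric(z)))

def sample_riemannian_uniform(n, low=-3.0, high=3.0):
    """Rejection-sample n points from the Riemannian uniform
    distribution restricted to the box [low, high]^2."""
    # For this toy metric the density is largest at a box corner,
    # which gives the rejection-sampling upper bound.
    bound = volume_density(np.array([high, high]))
    samples = []
    while len(samples) < n:
        z = rng.uniform(low, high, size=2)
        if rng.uniform(0.0, bound) < volume_density(z):
            samples.append(z)
    return np.array(samples)

zs = sample_riemannian_uniform(500)
print(zs.shape)  # (500, 2)
```

Regions where the metric assigns larger volume receive proportionally more samples, which is what distinguishes this scheme from sampling uniformly in Euclidean latent coordinates.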




Fully Spiking Variational Autoencoder

Spiking neural networks (SNNs) can be run on neuromorphic devices with u...

AriEL: volume coding for sentence generation

Mapping sequences of discrete data to a point in a continuous space make...

MAD-VAE: Manifold Awareness Defense Variational Autoencoder

Although deep generative models such as Defense-GAN and Defense-VAE have...

Data Augmentation in High Dimensional Low Sample Size Setting Using a Geometry-Based Variational Autoencoder

In this paper, we propose a new method to perform data augmentation in a...

Data Generation in Low Sample Size Setting Using Manifold Sampling and a Geometry-Aware VAE

While much effort has been focused on improving Variational Autoencode...

Variational Autoencoders with Riemannian Brownian Motion Priors

Variational Autoencoders (VAEs) represent the given data in a low-dimens...

Generalizing Variational Autoencoders with Hierarchical Empirical Bayes

Variational Autoencoders (VAEs) have experienced recent success as data-...