Nonparametric Inference for Auto-Encoding Variational Bayes

12/18/2017
by Erik Bodin, et al.

We would like to learn latent representations that are low-dimensional and highly interpretable. A model that has these characteristics is the Gaussian Process Latent Variable Model (GP-LVM). The benefits and drawbacks of the GP-LVM are complementary to those of the Variational Autoencoder (VAE): the former provides interpretable low-dimensional latent representations, while the latter can handle large amounts of data and non-Gaussian likelihoods. Our goal in this paper is to marry these two approaches and reap the benefits of both. To do so, we introduce a novel approximate inference scheme inspired by the GP-LVM and the VAE. We show experimentally that the approximation allows the capacity of the generative bottleneck (Z) of the VAE to be arbitrarily large without losing a highly interpretable representation: reconstruction quality is no longer limited by Z, while a low-dimensional space remains available both for ancestral sampling and as a means to reason about the embedded data.
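For context, below is a minimal sketch of the standard VAE setup the abstract builds on: an amortised Gaussian encoder, a decoder, the ELBO objective, and ancestral sampling through the bottleneck Z. This is not the paper's nonparametric inference scheme; the layer sizes, the 784-dimensional (MNIST-style) input, and the 2-dimensional latent are illustrative assumptions.

# Minimal VAE sketch (PyTorch), assumed architecture for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=2):
        super().__init__()
        # Encoder q(z | x): amortised Gaussian posterior
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.log_var = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z): Bernoulli likelihood over pixels
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.log_var(h)

    def reparameterise(self, mu, log_var):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * log_var)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, log_var = self.encode(x)
        z = self.reparameterise(mu, log_var)
        return self.decode(z), mu, log_var

def elbo_loss(x_hat, x, mu, log_var):
    # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

# Ancestral sampling: draw z from the prior and push it through the decoder.
model = VAE(z_dim=2)
with torch.no_grad():
    samples = model.decode(torch.randn(16, 2))

In this standard formulation, reconstruction quality is tied to the dimensionality of Z, which is the trade-off the paper's approximation is designed to remove.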

Related research:

05/20/2016 - Stick-Breaking Variational Autoencoders
We extend Stochastic Gradient Variational Bayes to perform posterior inf...

10/20/2020 - Sparse Gaussian Process Variational Autoencoders
Large, multi-dimensional spatio-temporal datasets are omnipresent in mod...

02/17/2020 - πVAE: Encoding stochastic process priors with variational autoencoders
Stochastic processes provide a mathematically elegant way to model complex ...

06/17/2020 - Longitudinal Variational Autoencoder
Longitudinal datasets measured repeatedly over time from individual subj...

10/02/2022 - Loc-VAE: Learning Structurally Localized Representation from 3D Brain MR Images for Content-Based Image Retrieval
Content-based image retrieval (CBIR) systems are an emerging technology ...

12/13/2018 - Gaussian Process Deep Belief Networks: A Smooth Generative Model of Shape with Uncertainty Propagation
The shape of an object is an important characteristic for many vision pr...

12/22/2021 - Emulation of greenhouse-gas sensitivities using variational autoencoders
Flux inversion is the process by which sources and sinks of a gas are id...
