Learnable Explicit Density for Continuous Latent Space and Variational Inference

10/06/2017
by Chin-Wei Huang, et al.

In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior. First, we decompose the learning of VAEs into layerwise density estimation, and argue that having a flexible prior is beneficial to both sample generation and inference. Second, we analyze the family of inverse autoregressive flows (inverse AF) and show that, with further improvement, inverse AF can be used as a universal approximator of any complicated posterior. Our analysis results in a unified approach to parameterizing a VAE, without the need to restrict ourselves to factorial Gaussians over the real-valued latent space.

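To make the flow-based idea in the abstract concrete, below is a minimal NumPy sketch of a single inverse autoregressive flow (inverse AF) step. It shows how a factorial Gaussian sample can be transformed into a more flexible posterior sample while keeping its density tractable via a triangular Jacobian. The masked linear maps standing in for the autoregressive network are hypothetical placeholders for illustration, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

# Strictly lower-triangular masks so mu_i and log_sigma_i depend only on z_{<i};
# this makes the Jacobian of the transform triangular and its log-det cheap.
mask = np.tril(np.ones((D, D)), k=-1)
W_mu = rng.normal(size=(D, D)) * mask
W_s = rng.normal(size=(D, D)) * mask

def inverse_af_step(z):
    """One inverse AF step: z'_i = exp(log_sigma_i(z_<i)) * z_i + mu_i(z_<i)."""
    mu = z @ W_mu.T
    log_sigma = np.tanh(z @ W_s.T)   # keep the scales well-behaved
    z_new = np.exp(log_sigma) * z + mu
    log_det = log_sigma.sum(-1)      # log|det dz'/dz| of a triangular Jacobian
    return z_new, log_det

# Start from a factorial Gaussian q(z0|x) and push the sample through the flow.
z0 = rng.normal(size=D)
log_q0 = -0.5 * (z0 ** 2 + np.log(2.0 * np.pi)).sum()
z1, log_det = inverse_af_step(z0)
log_q1 = log_q0 - log_det            # change of variables: density of z1
print(z1, log_q1)
```

Stacking several such steps (with the variable order permuted between them) is what lets the transformed posterior become increasingly flexible while each step's log-density correction remains a cheap sum of log-scales.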