
Quasi-symplectic Langevin Variational Autoencoder

09/02/2020
by   Zihao Wang, et al.

The variational autoencoder (VAE) is one of the most thoroughly investigated generative models and remains highly popular in current neural-learning research. Applying VAEs to practical tasks with high-dimensional data and large datasets, however, often runs into the problem of constructing a low-variance evidence lower bound (ELBO). Markov chain Monte Carlo (MCMC) is an effective approach to tightening the ELBO when approximating the posterior distribution. The Hamiltonian Variational Autoencoder (HVAE) is one such MCMC-inspired approach: it constructs an unbiased, low-variance ELBO that is also amenable to the reparameterization trick. While this substantially improves posterior estimation, a main drawback of HVAE is that its leapfrog integrator must evaluate the posterior gradient twice per step, which degrades inference efficiency and demands considerable GPU memory. This flaw limits the application of Hamiltonian-based inference frameworks to large-scale networks. To tackle this problem, we propose a Quasi-symplectic Langevin Variational Autoencoder (Langevin-VAE), which significantly improves resource-usage efficiency. We qualitatively and quantitatively demonstrate the effectiveness of the Langevin-VAE compared to state-of-the-art gradient-informed inference frameworks.
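The efficiency argument can be illustrated with a minimal sketch: a standard leapfrog step (as used in Hamiltonian methods such as HVAE) evaluates the target's gradient twice, while a single overdamped-Langevin (Euler-Maruyama) step needs only one. The code below counts gradient calls on an illustrative standard-normal target; it is a toy comparison of per-step cost, not the paper's actual quasi-symplectic Langevin integrator or its VAE training loop.

```python
import numpy as np

def grad_log_p(q, calls):
    # Gradient of a standard-normal log-density (toy target for counting).
    calls[0] += 1
    return -q

def leapfrog_step(q, p, eps, calls):
    # One leapfrog step: half momentum update, full position update,
    # half momentum update -> TWO gradient evaluations per step.
    p = p + 0.5 * eps * grad_log_p(q, calls)
    q = q + eps * p
    p = p + 0.5 * eps * grad_log_p(q, calls)
    return q, p

def langevin_step(q, eps, rng, calls):
    # One overdamped-Langevin (Euler-Maruyama) step:
    # drift along the gradient plus Gaussian noise -> ONE gradient evaluation.
    return q + eps * grad_log_p(q, calls) + np.sqrt(2.0 * eps) * rng.normal(size=q.shape)

rng = np.random.default_rng(0)
lf_calls, lv_calls = [0], [0]

q, p = np.zeros(2), rng.normal(size=2)
for _ in range(10):
    q, p = leapfrog_step(q, p, 0.1, lf_calls)

q2 = np.zeros(2)
for _ in range(10):
    q2 = langevin_step(q2, 0.1, rng, lv_calls)

print(lf_calls[0], lv_calls[0])  # prints: 20 10
```

For ten integration steps the leapfrog chain makes 20 gradient evaluations versus 10 for the Langevin chain, which is the kind of halving of gradient (and activation-memory) cost the abstract refers to.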

07/21/2019

Tutorial: Deriving the Standard Variational Autoencoder (VAE) Loss Function

In Bayesian machine learning, the posterior distribution is typically co...
05/29/2018

Hamiltonian Variational Auto-Encoder

Variational Auto-Encoders (VAEs) have become very popular techniques to ...
07/10/2020

Self-Reflective Variational Autoencoder

The Variational Autoencoder (VAE) is a powerful framework for learning p...
05/23/2018

Amortized Inference Regularization

The variational autoencoder (VAE) is a popular model for density estimat...
11/25/2022

Toward Unlimited Self-Learning Monte Carlo with Annealing Process Using VAE's Implicit Isometricity

Self-learning Monte Carlo (SLMC) methods are recently proposed to accele...
05/20/2018

Conditional Inference in Pre-trained Variational Autoencoders via Cross-coding

Variational Autoencoders (VAEs) are a popular generative model, but one ...
07/08/2021

MCMC Variational Inference via Uncorrected Hamiltonian Annealing

Given an unnormalized target distribution we want to obtain approximate ...