
Learning Undirected Posteriors by Backpropagation through MCMC Updates

by Arash Vahdat et al.

The representation of the posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machine posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models as posteriors.
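The core idea, computing the gradient of the training objective with respect to the posterior's parameters by backpropagating through the Markov chain's update rule itself, can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which applies the idea to Gibbs updates of a Boltzmann-machine posterior over discrete variables); instead, assuming a reparameterized Langevin chain targeting a 1-D Gaussian N(mu, 1), whose score is d/dx log p(x) = mu - x, each update is a differentiable function of mu, so the sensitivity of the final sample can be accumulated step by step:

```python
import numpy as np

def langevin_chain_with_grad(mu, n_steps=50, eps=0.1, seed=0):
    """Run a Langevin chain and accumulate dx_K/dmu by the chain rule.

    Update rule: x_{k+1} = x_k + eps * (mu - x_k) + sqrt(2*eps) * xi_k.
    The injected noise xi_k does not depend on mu, so differentiating the
    update gives the recursion s_{k+1} = (1 - eps) * s_k + eps, s_0 = 0,
    which is exactly what backpropagation through the updates computes.
    """
    rng = np.random.default_rng(seed)
    x = 0.0  # chain state (initialized independently of mu)
    s = 0.0  # running derivative dx_k/dmu
    for _ in range(n_steps):
        xi = rng.standard_normal()
        x = x + eps * (mu - x) + np.sqrt(2 * eps) * xi
        s = (1 - eps) * s + eps  # backprop-through-update recursion
    return x, s

x_K, dx_dmu = langevin_chain_with_grad(mu=2.0)
# Closed form for the sensitivity after K steps: 1 - (1 - eps)**K
print(dx_dmu)  # -> 1 - 0.9**50, about 0.9948
```

With this pathwise derivative in hand, the gradient of an expected objective E[f(x_K)] with respect to mu follows by the chain rule, f'(x_K) * dx_K/dmu averaged over chains; the paper's contribution is making the analogous computation work for undirected models with discrete states.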

