Neural Variational Inference and Learning in Belief Networks

01/31/2014
by Andriy Mnih et al.

Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior. The model and this inference network are trained jointly by maximizing a variational lower bound on the log-likelihood. Although the naive estimator of the inference model gradient is too high-variance to be useful, we make it practical by applying several straightforward model-independent variance reduction techniques. Applying our approach to training sigmoid belief networks and deep autoregressive networks, we show that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.
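The abstract describes training an inference network with a score-function (REINFORCE-style) gradient, made practical by model-independent variance reduction such as centering the learning signal with a baseline. Below is a minimal, hypothetical NumPy sketch of that idea for a one-layer sigmoid belief network: the inference network samples binary latents exactly, and its gradient is weighted by the learning signal minus a running-mean baseline. All sizes, learning rates, and parameter names here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bernoulli_logp(x, p):
    # Log-probability of binary vector x under factorial Bernoulli(p).
    eps = 1e-7
    return np.sum(x * np.log(p + eps) + (1 - x) * np.log(1 - p + eps))

# Toy sigmoid belief network (hypothetical sizes):
#   prior       p(h)   = Bernoulli(sigmoid(b_p))
#   likelihood  p(x|h) = Bernoulli(sigmoid(W_g h + c_g))
#   inference   q(h|x) = Bernoulli(sigmoid(W_q x + c_q))
D, H = 8, 4
W_g = 0.1 * rng.standard_normal((D, H)); c_g = np.zeros(D)
b_p = np.zeros(H)
W_q = 0.1 * rng.standard_normal((H, D)); c_q = np.zeros(H)

baseline, alpha = 0.0, 0.9   # running-mean baseline for variance reduction
lr = 0.01

x = (rng.random(D) < 0.5).astype(float)  # a fake binary datapoint

for step in range(200):
    q = sigmoid(W_q @ x + c_q)             # variational posterior probabilities
    h = (rng.random(H) < q).astype(float)  # exact, non-iterative sample from q(h|x)

    log_p = bernoulli_logp(h, sigmoid(b_p)) + bernoulli_logp(x, sigmoid(W_g @ h + c_g))
    log_q = bernoulli_logp(h, q)
    l = log_p - log_q                      # single-sample learning signal (ELBO term)

    # Score-function gradient for the inference network, centred by the
    # baseline: E_q[(l - b) * grad log q(h|x)] has the same mean as the
    # naive estimator but much lower variance.
    centred = l - baseline
    grad_pre = h - q                       # d log q / d (pre-activation)
    W_q += lr * centred * np.outer(grad_pre, x)
    c_q += lr * centred * grad_pre

    # Generative parameters get the ordinary gradient of log p(x, h).
    p_x = sigmoid(W_g @ h + c_g)
    W_g += lr * np.outer(x - p_x, h)
    c_g += lr * (x - p_x)
    b_p += lr * (h - sigmoid(b_p))

    baseline = alpha * baseline + (1 - alpha) * l
```

This sketches only the centering technique; the paper also describes further model-independent variance reduction beyond a single scalar baseline.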


Related research:

- Variational Rejection Sampling (04/05/2018) — Learning latent variable models with stochastic variational inference is...
- Auto-Encoding Variational Bayes (12/20/2013) — How can we perform efficient inference and learning in directed probabil...
- Deep Temporal Sigmoid Belief Networks for Sequence Modeling (09/23/2015) — Deep dynamic generative models are developed to learn sequential depende...
- Iterative Refinement of Approximate Posterior for Training Directed Belief Networks (11/19/2015) — Variational methods that rely on a recognition network to approximate th...
- Discretely Relaxing Continuous Variables for tractable Variational Inference (09/12/2018) — We explore a new research direction in Bayesian variational inference wi...
- Implicit Posterior Variational Inference for Deep Gaussian Processes (10/26/2019) — A multi-layer deep Gaussian process (DGP) model is a hierarchical compos...
- Amortized Bethe Free Energy Minimization for Learning MRFs (06/14/2019) — We propose to learn deep undirected graphical models (i.e., MRFs), with ...
