Amortized Bethe Free Energy Minimization for Learning MRFs

06/14/2019
by Sam Wiseman, et al.

We propose to learn deep undirected graphical models (i.e., MRFs) with a non-ELBO objective for which we can calculate exact gradients. In particular, we optimize a saddle-point objective deriving from the Bethe free energy approximation to the partition function. Unlike much recent work in approximate inference, the derived objective requires no sampling and can be efficiently computed even for very expressive MRFs. We furthermore amortize this optimization with trained inference networks. Experimentally, we find that the proposed approach compares favorably with loopy belief propagation, but is faster, and that it attains better held-out log-likelihood than other recent approximate inference schemes.
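
For context, a minimal sketch of the quantity involved, assuming the standard pairwise-MRF setting (this is not the paper's exact saddle-point objective): for a graph with edge set E, unary potentials ψ_i, pairwise potentials ψ_ij, and pseudo-marginals τ, the Bethe free energy is

    F_Bethe(τ) = Σ_{(i,j) in E} Σ_{x_i, x_j} τ_ij(x_i, x_j) ln[ τ_ij(x_i, x_j) / ψ_ij(x_i, x_j) ]
                 + Σ_i (1 - d_i) Σ_{x_i} τ_i(x_i) ln τ_i(x_i)
                 - Σ_i Σ_{x_i} τ_i(x_i) ln ψ_i(x_i),

where d_i is the degree of node i. Minimizing F_Bethe over locally consistent pseudo-marginals recovers -ln Z exactly on tree-structured graphs and yields the Bethe approximation to -ln Z on loopy graphs, with stationary points corresponding to fixed points of loopy belief propagation; the saddle-point objective and amortized inference networks described in the abstract build on this approximation.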

Related research

11/07/2017 · Neural Variational Inference and Learning in Undirected Graphical Models
Many problems in machine learning are naturally expressed in the languag...

05/09/2012 · Convexifying the Bethe Free Energy
The introduction of loopy belief propagation (LBP) revitalized the appli...

06/17/2020 · Region-based Energy Neural Network for Approximate Inference
Region-based free energy was originally proposed for generalized belief ...

10/19/2012 · Approximate Inference and Constrained Optimization
Loopy and generalized belief propagation are popular algorithms for appr...

01/31/2014 · Neural Variational Inference and Learning in Belief Networks
Highly expressive directed latent variable models, such as sigmoid belie...

03/23/2022 · Approximate Inference for Stochastic Planning in Factored Spaces
Stochastic planning can be reduced to probabilistic inference in large d...

02/14/2012 · Approximation by Quantization
Inference in graphical models consists of repeatedly multiplying and sum...
