
Approximate Inference and Constrained Optimization
Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy. However, belief propagation does not always converge, which explains the need for approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically nonconvex constrained minimization of the Kikuchi free energy through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speedups over CCCP.
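The correspondence the abstract opens with — fixed points of loopy belief propagation are extrema of the Bethe free energy — can be checked numerically on a toy model. The sketch below (our own illustration, not the paper's double-loop algorithm; all variable names are ours) runs loopy BP on a 3-node binary cycle, evaluates the Bethe free energy at the resulting beliefs, and compares the Bethe estimate of the log partition function against brute-force enumeration:

```python
import itertools
import numpy as np

# Toy pairwise MRF: binary 3-node cycle with ferromagnetic coupling J.
J = 0.2
edges = [(0, 1), (1, 2), (2, 0)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
psi = {e: np.array([[np.exp(J), np.exp(-J)],
                    [np.exp(-J), np.exp(J)]]) for e in edges}  # psi(x_i, x_j)

def pot(i, j):
    """Pairwise potential oriented as (x_i, x_j)."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

# Loopy BP: m[(i, j)] is the message from node i to node j, a function of x_j.
m = {(i, j): np.ones(2) / 2 for (i, j) in edges}
m.update({(j, i): np.ones(2) / 2 for (i, j) in edges})

converged = False
for _ in range(500):
    new = {}
    for (i, j) in m:
        incoming = np.prod([m[(k, i)] for k in neighbors[i] if k != j], axis=0)
        msg = pot(i, j).T @ incoming          # marginalize over x_i
        new[(i, j)] = msg / msg.sum()
    if max(np.abs(new[k] - m[k]).max() for k in m) < 1e-12:
        converged = True
    m = new
    if converged:
        break

# Node and edge beliefs at the BP fixed point.
b_node = {}
for i in neighbors:
    bi = np.prod([m[(k, i)] for k in neighbors[i]], axis=0)
    b_node[i] = bi / bi.sum()

b_edge = {}
for (i, j) in edges:
    bij = pot(i, j).copy()
    for k in neighbors[i]:
        if k != j:
            bij *= m[(k, i)][:, None]
    for l in neighbors[j]:
        if l != i:
            bij *= m[(l, j)][None, :]
    b_edge[(i, j)] = bij / bij.sum()

# Bethe free energy; at a BP fixed point, -F_Bethe approximates log Z.
F = 0.0
for (i, j), bij in b_edge.items():
    F += np.sum(bij * (np.log(bij) - np.log(psi[(i, j)])))
for i, bi in b_node.items():
    F += (1 - len(neighbors[i])) * np.sum(bi * np.log(bi))
bethe_log_Z = -F

# Exact log Z by enumeration (feasible only for tiny models).
Z = sum(np.prod([psi[(i, j)][x[i], x[j]] for (i, j) in edges])
        for x in itertools.product([0, 1], repeat=3))
exact_log_Z = np.log(Z)

print(converged, bethe_log_Z, exact_log_Z)
```

On this weakly coupled cycle BP converges and the Bethe estimate is close to, but not equal to, the true log partition function, since the graph has a loop. The paper's contribution addresses the harder case where BP fails to converge: there the Bethe/Kikuchi free energy must be minimized directly, which it does via a sequence of convex upper-bound minimizations.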