Approximation by Quantization

02/14/2012
by Vibhav Gogate, et al.

Inference in graphical models consists of repeatedly multiplying and summing out potentials. It is generally intractable because the derived potentials obtained in this way can be exponentially large. Approximate inference techniques such as belief propagation and variational methods combat this by simplifying the derived potentials, typically by dropping variables from them. We propose an alternative method for simplifying potentials: quantizing their values. Quantization causes different states of a potential to have the same value, and therefore introduces context-specific independencies that can be exploited to represent the potential more compactly. We use algebraic decision diagrams (ADDs) to do this efficiently. We apply quantization and ADD reduction to variable elimination and junction tree propagation, yielding a family of bounded approximate inference schemes. Our experiments show that the new schemes significantly outperform state-of-the-art approaches on many benchmark instances.
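To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of quantizing a potential's values onto a small set of evenly spaced levels. The `quantize` function and the toy potential `phi` are assumptions for illustration; the point is that after quantization many states share the same value, which is exactly the redundancy an ADD can exploit for compact representation.

```python
# Hedged sketch: quantize a potential so that distinct states collapse
# onto a small number of shared values. An ADD could then represent the
# quantized potential compactly by merging states with equal values.
import itertools

def quantize(potential, num_levels):
    """Map each value to the nearest of num_levels evenly spaced levels
    spanning [min, max] of the potential's values."""
    lo, hi = min(potential.values()), max(potential.values())
    step = (hi - lo) / (num_levels - 1) if num_levels > 1 else 0.0
    out = {}
    for state, value in potential.items():
        if step == 0.0:
            out[state] = lo  # all values identical; nothing to do
        else:
            level = round((value - lo) / step)
            out[state] = lo + level * step
    return out

# Toy potential over two binary variables X and Y (hypothetical numbers).
phi = {(x, y): 0.1 * x + 0.12 * y + 0.5
       for x, y in itertools.product([0, 1], repeat=2)}

phi_q = quantize(phi, num_levels=2)

print(len(set(phi.values())), "distinct values before quantization")
print(len(set(phi_q.values())), "distinct values after quantization")
```

With two quantization levels, the four originally distinct entries collapse to two shared values; the coarser the quantization, the more context-specific structure appears and the smaller the resulting ADD, at the cost of approximation error.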

