Learning Graphical Model Parameters with Approximate Marginal Inference

01/15/2013
by Justin Domke

Likelihood-based learning of graphical models faces challenges of computational complexity and robustness to model mis-specification. This paper studies methods that fit parameters directly to maximize a measure of the accuracy of predicted marginals, taking both model and inference approximations into account at training time. Experiments on imaging problems suggest that marginalization-based learning performs better than likelihood-based approximations on difficult problems where the model being fit is approximate in nature.
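To make the idea concrete, here is a minimal sketch of marginalization-based learning on a toy two-variable binary MRF. The paper's method back-propagates through an approximate inference procedure (such as truncated loopy belief propagation); for brevity this sketch computes marginals exactly by enumeration and uses a finite-difference gradient. All function names and the squared-error loss below are illustrative assumptions, not the paper's API.

```python
import numpy as np

def marginals(theta):
    # Tiny pairwise MRF over two binary variables x0, x1 in {0, 1}:
    #   p(x0, x1) ∝ exp(u0*x0 + u1*x1 + w*x0*x1)
    u0, u1, w = theta
    logp = np.array([[0.0, u1],
                     [u0,  u0 + u1 + w]])   # logp[x0, x1]
    p = np.exp(logp)
    p /= p.sum()
    # Return the marginals P(x0 = 1) and P(x1 = 1)
    return np.array([p[1, :].sum(), p[:, 1].sum()])

def marginal_loss(theta, target):
    # Measure of marginal accuracy; squared error is one simple choice
    return np.sum((marginals(theta) - target) ** 2)

def fit(target, steps=2000, lr=1.0, eps=1e-5):
    # Fit parameters directly to make the predicted marginals accurate,
    # rather than maximizing likelihood
    theta = np.zeros(3)
    for _ in range(steps):
        # Central finite-difference gradient of the marginal loss
        grad = np.array([
            (marginal_loss(theta + eps * e, target)
             - marginal_loss(theta - eps * e, target)) / (2 * eps)
            for e in np.eye(3)])
        theta -= lr * grad
    return theta

target = np.array([0.8, 0.3])      # empirical marginals to match
theta = fit(target)
print(np.round(marginals(theta), 3))
```

In the paper's setting the enumeration step would be replaced by an approximate inference routine, and the gradient of the loss would be taken through that routine itself, so that training accounts for the inference approximation.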



Related research

- 02/25/2016: Learning Gaussian Graphical Models With Fractional Marginal Pseudo-likelihood
- 05/14/2019: NGO-GM: Natural Gradient Optimization for Graphical Models
- 06/18/2012: Nonparametric variational inference
- 11/09/2018: Block Belief Propagation for Parameter Learning in Markov Random Fields
- 09/25/2014: Revisiting Algebra and Complexity of Inference in Graphical Models
- 05/09/2012: Convexifying the Bethe Free Energy
- 02/14/2012: What Cannot be Learned with Bethe Approximations
