Marginal Pseudo-Likelihood Learning of Markov Network Structures

01/20/2014
by Johan Pensar, et al.

Undirected graphical models known as Markov networks are popular for a wide variety of applications ranging from statistical physics to computational biology. Traditionally, learning of the network structure has been done under the assumption of chordality, which ensures that efficient scoring methods can be used. In general, non-chordal graphs have intractable normalizing constants, which renders the calculation of Bayesian and other scores difficult beyond very small-scale systems. Recently, there has been a surge of interest in regularized pseudo-likelihood methods for structural learning of large-scale Markov network models, since such an approach avoids the assumption of chordality. The currently available methods typically require a tuning parameter to adapt the level of regularization to a particular dataset, which can be optimized, for example, by cross-validation. Here we introduce a Bayesian version of pseudo-likelihood scoring of Markov networks, the marginal pseudo-likelihood (MPL), which enables automatic regularization through marginalization over the nuisance parameters in the model. We prove consistency of the resulting MPL estimator for the network structure via comparison with the pseudo information criterion. Identification of the MPL-optimal network on a prescanned graph space is considered with both greedy hill-climbing and exact pseudo-Boolean optimization algorithms. We find that, for reasonable sample sizes, the hill-climbing approach most often identifies networks that are at a negligible distance from the restricted global optimum. Using synthetic and existing benchmark networks, the marginal pseudo-likelihood method is shown to generally perform favorably against recent popular inference methods for Markov networks.
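Since the abstract does not reproduce the score itself, the following is a minimal illustrative sketch of how such a marginal score typically takes a closed form: assuming each variable X_j is modeled conditionally on its Markov blanket MB_G(j) in the candidate graph G, with independent Dirichlet priors on the conditional distributions, integrating out the nuisance parameters gives a product of Dirichlet-multinomial terms. The notation below (n_{jkl}, alpha_{jkl}, q_j, r_j) is assumed for illustration and is not quoted from the paper.

% Illustrative closed form of a marginal pseudo-likelihood score under
% per-node Dirichlet priors (notation is an assumption, not taken from the paper).
% n_{jkl}: number of observations in which X_j takes value k while MB_G(j) is in
% joint configuration l; n_{jl} = \sum_k n_{jkl}; \alpha_{jkl}: Dirichlet
% hyperparameters with \alpha_{jl} = \sum_k \alpha_{jkl}; q_j: number of blanket
% configurations; r_j: number of values of X_j.
\[
  \widehat{p}(\mathbf{X} \mid G)
  \;=\; \prod_{j=1}^{d} \prod_{l=1}^{q_j}
    \frac{\Gamma(\alpha_{jl})}{\Gamma(n_{jl} + \alpha_{jl})}
    \prod_{k=1}^{r_j}
    \frac{\Gamma(n_{jkl} + \alpha_{jkl})}{\Gamma(\alpha_{jkl})}
\]

Because each factor depends on the graph only through the Markov blanket of a single node, a score of this kind decomposes over nodes, which is what makes greedy hill-climbing moves and pseudo-Boolean formulations of the search practical to evaluate.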


Related research:

- Learning Gaussian Graphical Models With Fractional Marginal Pseudo-likelihood (02/25/2016): We propose a Bayesian approximate inference method for learning the depe...
- Structure Learning of Contextual Markov Networks using Marginal Pseudo-likelihood (03/29/2021): Markov networks are popular models for discrete multivariate systems whe...
- High-Dimensional Bayesian Structure Learning in Gaussian Graphical Models using Marginal Pseudo-Likelihood (06/30/2023): Gaussian graphical models depict the conditional dependencies between va...
- High-dimensional structure learning of binary pairwise Markov networks: A comparative numerical study (01/14/2019): Learning the undirected graph structure of a Markov network from data is...
- Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation (04/25/2013): We propose a new method for detecting changes in Markov network structur...
- Partitioned hybrid learning of Bayesian network structures (03/22/2021): We develop a novel hybrid method for Bayesian network structure learning...
- Pseudo-marginal Inference for CTMCs on Infinite Spaces via Monotonic Likelihood Approximations (05/28/2021): Bayesian inference for Continuous-Time Markov Chains (CTMCs) on countabl...
