
An Empirical-Bayes Score for Discrete Bayesian Networks

05/12/2016
by Marco Scutari, et al. (University of Oxford)

Bayesian network structure learning is often performed in a Bayesian setting, by evaluating candidate structures using their posterior probabilities for a given data set. Score-based algorithms then use those posterior probabilities as an objective function and return the maximum a posteriori network as the learned model. For discrete Bayesian networks, the canonical choice for a posterior score is the Bayesian Dirichlet equivalent uniform (BDeu) marginal likelihood with a uniform (U) graph prior (Heckerman et al., 1995). Its favourable theoretical properties descend from assuming a uniform prior both on the space of the network structures and on the space of the parameters of the network. In this paper, we revisit the limitations of these assumptions; and we introduce an alternative set of assumptions and the resulting score: the Bayesian Dirichlet sparse (BDs) empirical Bayes marginal likelihood with a marginal uniform (MU) graph prior. We evaluate its performance in an extensive simulation study, showing that MU+BDs is more accurate than U+BDeu both in learning the structure of the network and in predicting new observations, while not being computationally more complex to estimate.
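As a quick reference for the two marginal likelihoods being compared, the display below is a sketch rather than a reproduction from the paper: the BDeu form is the standard one from Heckerman et al. (1995), and the BDs description paraphrases the paper's idea of spreading the imaginary sample size only over the parent configurations actually observed in the data; the symbol $\tilde{q}_i$ for the number of such configurations is our notation.

\[
\mathrm{BDeu}(G; \mathcal{D}) \;=\; \prod_{i=1}^{N} \prod_{j=1}^{q_i}
  \frac{\Gamma(\alpha/q_i)}{\Gamma(\alpha/q_i + n_{ij})}
  \prod_{k=1}^{r_i} \frac{\Gamma\!\left(\alpha/(r_i q_i) + n_{ijk}\right)}{\Gamma\!\left(\alpha/(r_i q_i)\right)}
\]

Here $r_i$ is the number of states of $X_i$, $q_i$ the number of configurations of its parents, $n_{ijk}$ the number of observations with $X_i$ in state $k$ and the parents in configuration $j$, $n_{ij} = \sum_k n_{ijk}$, and $\alpha$ the imaginary sample size. As we understand the BDs proposal, it keeps the same functional form but replaces $q_i$ with $\tilde{q}_i \le q_i$, the number of parent configurations with $n_{ij} > 0$, so the Dirichlet hyperparameters become $\alpha/(r_i \tilde{q}_i)$ for observed configurations and zero for unobserved ones.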

Related research:

Beyond Uniform Priors in Bayesian Network Structure Learning (04/12/2017)
Dirichlet Bayesian Network Scores and the Maximum Entropy Principle (08/02/2017)
Structure Learning from Related Data Sets with a Hierarchical Bayesian Score (08/04/2020)
Removing the fat from your posterior samples with margarine (05/25/2022)
Exact marginal inference in Latent Dirichlet Allocation (03/31/2020)
Learning networks determined by the ratio of prior and data (03/15/2012)