
Incorporating Type II Error Probabilities from Independence Tests into Score-Based Learning of Bayesian Network Structure

by Eliot Brenner, et al.
New York University

We give a new consistent scoring function for structure learning of Bayesian networks. In contrast to traditional approaches to score-based structure learning, such as BDeu or MDL, the complexity penalty that we propose is data-dependent and is given by the probability that a conditional independence test correctly shows that an edge cannot exist. What distinguishes this new scoring function from earlier work is that it becomes computationally easier to maximize as the amount of data increases. We prove a polynomial sample complexity result, showing that maximizing this score is guaranteed to correctly learn a structure with no false edges and a distribution close to the generating distribution, whenever there exists a Bayesian network which is a perfect map for the data-generating distribution. Although the new score can be used with any search algorithm, in our related UAI 2013 paper [BS13] we have given empirical results showing that it is particularly effective when used together with a linear programming relaxation approach to Bayesian network structure learning. The present paper contains all details of the proofs of the finite-sample complexity results in [BS13], as well as a detailed explanation of the computation of certain error probabilities, called beta-values, whose precomputation and tabulation are necessary for the implementation of the algorithm in [BS13].
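To make the idea of a data-dependent complexity penalty concrete, the following is a minimal sketch, not the paper's beta-value computation: it uses empirical mutual information as a conditional-independence statistic and charges a penalty for adding an edge whenever the data support independence, with the penalty growing with sample size (loosely mimicking how a type II error probability shrinks exponentially in n). The function names, the threshold `epsilon`, and the linear-in-n form are all illustrative assumptions.

```python
import math
from collections import Counter

def empirical_mi(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts converted to frequencies
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def edge_penalty(xs, ys, epsilon=0.05):
    """Hypothetical data-dependent penalty for the edge X-Y.

    Large when the data support independence of X and Y (discouraging the
    edge), and zero once the empirical MI clearly exceeds the threshold
    epsilon. Growing the penalty with n is an illustration of the paper's
    theme that independence becomes easier to certify with more data; the
    actual beta-values in [BS13] are computed differently.
    """
    n = len(xs)
    gap = epsilon - empirical_mi(xs, ys)
    if gap <= 0:
        return 0.0      # dependence detected: no extra penalty for this edge
    return n * gap      # more data, and a larger MI gap, mean a larger penalty
```

For example, two independent uniform binary samples incur a positive penalty, while two identical (perfectly dependent) samples incur none; a score-based learner could subtract such penalties from a likelihood term when evaluating candidate structures.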



References

[BS13] E. Brenner and D. Sontag. SparsityBoost: A New Scoring Function for Learning Bayesian Network Structure. UAI 2013.
