
Improved learning of Bayesian networks

by   Tomas Kocka, et al.

The search space of Bayesian network structures is usually defined as the set of acyclic directed graphs (DAGs), and the search is performed by local transformations of DAGs. However, the space of Bayesian networks is ordered by DAG Markov model inclusion, and it is natural to expect that a good search policy should take this ordering into account. The first attempt to do so (Chickering 1996) used equivalence classes of DAGs instead of DAGs themselves. That approach produces better results but is significantly slower. We present a compromise between the two approaches: it searches the space of DAGs in such a way that the ordering by inclusion is taken into account. This is achieved by repeated local moves within the equivalence class of a DAG. We show that this new approach produces better results than the original DAG-based approach without a substantial change in time complexity. We present empirical results on the Alarm dataset, within the frameworks of heuristic search and Markov chain Monte Carlo.
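The "local moves within the equivalence class" referred to above are typically covered-edge reversals: an edge X→Y is covered when Pa(Y) = Pa(X) ∪ {X}, and reversing a covered edge yields a Markov-equivalent DAG. A minimal sketch of this idea (function and variable names are illustrative, not taken from the paper):

```python
import random

def is_covered(dag, x, y):
    # dag: dict mapping each node to the set of its parents.
    # Edge x -> y is covered iff Pa(y) = Pa(x) ∪ {x}.
    return dag[y] == dag[x] | {x}

def reverse_edge(dag, x, y):
    # Reverse x -> y in place; this stays inside the Markov
    # equivalence class only when the edge is covered.
    dag[y].remove(x)
    dag[x].add(y)

def random_equivalent_dag(dag, moves=10, rng=None):
    # Perform repeated random covered-edge reversals, producing
    # another DAG in the same Markov equivalence class.
    rng = rng or random.Random(0)
    for _ in range(moves):
        covered = [(x, y) for y in dag for x in dag[y]
                   if is_covered(dag, x, y)]
        if not covered:
            break
        x, y = rng.choice(covered)
        reverse_edge(dag, x, y)
    return dag
```

For example, in the two-node DAG a→b the edge is covered (both endpoints have no other parents), so repeated moves shuffle between a→b and b→a, the two members of that equivalence class; in a v-structure a→c←b neither edge is covered, so no move is available.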



