On Local Optima in Learning Bayesian Networks

10/19/2012
by Jens D. Nielsen, et al.

This paper proposes and evaluates the k-greedy equivalence search algorithm (KES) for learning Bayesian networks (BNs) from complete data. The main characteristic of KES is that it allows a trade-off between greediness and randomness, thus exploring different good local optima. When greediness is set at maximum, KES corresponds to the greedy equivalence search algorithm (GES). When greediness is kept at minimum, we prove that under mild assumptions KES asymptotically returns any inclusion-optimal BN with nonzero probability. Experimental results for both synthetic and real data are reported, showing that KES often finds a better local optimum than GES. Moreover, we use KES to experimentally confirm that the number of different local optima is often huge.
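To make the greediness/randomness trade-off concrete, here is a minimal Python sketch of a KES-style search loop. It is an illustration under stated assumptions, not the paper's pseudocode: the callables `improving_neighbors` and `score` stand in for the equivalence-class neighborhood operators and the scoring criterion used in the paper, and the exact subset-sampling rule and parameter names are mine.

```python
import math
import random

def kes(initial_state, improving_neighbors, score, k=0.5, rng=random):
    """Sketch of a k-greedy equivalence search style loop.

    `improving_neighbors(state)` is assumed to yield the neighboring
    states (equivalence classes of BN structures in the paper) whose
    score is strictly higher than that of `state`; `score(state)`
    evaluates the scoring criterion. k in [0, 1] trades greediness
    for randomness: k = 1 always moves to the best improving neighbor
    (GES-like behavior), while k = 0 moves to an improving neighbor
    chosen uniformly at random.
    """
    state = initial_state
    while True:
        better = list(improving_neighbors(state))
        if not better:
            return state  # no improving neighbor: a local optimum
        # Sample a candidate pool whose size grows with greediness k,
        # then move to the best candidate in the pool.
        pool_size = max(1, math.floor(k * len(better)))
        pool = rng.sample(better, pool_size)
        state = max(pool, key=score)
```

Because each run with k below the maximum can end in a different local optimum, repeating such a search from the same start with different random seeds is one way to probe how many distinct local optima exist, which is the kind of experiment the abstract refers to.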
