Causal Networks: Semantics and Expressiveness

03/27/2013
by Tom S. Verma, et al.

Dependency knowledge of the form "x is independent of y once z is known" invariably obeys the four graphoid axioms; examples include probabilistic and database dependencies. Often, such knowledge can be represented efficiently with graphical structures such as undirected graphs and directed acyclic graphs (DAGs). In this paper we show that the graphical criterion called d-separation is a sound rule for reading independencies from any DAG based on a causal input list drawn from a graphoid. The rule may be extended to cover DAGs that represent functional dependencies as well as conditional dependencies.
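For background, the four graphoid axioms referenced here are the standard ones: symmetry, decomposition, weak union, and contraction. The abstract does not spell out the d-separation test itself, so as a rough illustration only, here is a minimal Python sketch of the familiar ancestral-moralization reduction for checking d-separation in a DAG. The function name `d_separated`, the parent-map representation, and the collider example are our own choices for this sketch, not constructs taken from the paper.

```python
from itertools import combinations

def d_separated(dag, xs, ys, zs):
    """Test whether node sets xs and ys are d-separated by zs in a DAG.

    dag: dict mapping each node to the set of its parents.
    Sketch of the ancestral-moralization reduction: restrict the DAG to the
    ancestors of xs | ys | zs, moralize (marry parents of a common child),
    drop edge directions, delete zs, and check whether any path still
    connects xs to ys.
    """
    xs, ys, zs = set(xs), set(ys), set(zs)

    # 1. Keep only the ancestral set of xs | ys | zs.
    relevant, stack = set(), list(xs | ys | zs)
    while stack:
        node = stack.pop()
        if node not in relevant:
            relevant.add(node)
            stack.extend(dag.get(node, ()))

    # 2. Moralize: undirected edges between each child and its parents,
    #    plus edges between every pair of parents sharing a child.
    adj = {v: set() for v in relevant}
    for child in relevant:
        parents = set(dag.get(child, ())) & relevant
        for p in parents:
            adj[child].add(p)
            adj[p].add(child)
        for p, q in combinations(parents, 2):
            adj[p].add(q)
            adj[q].add(p)

    # 3. Remove the conditioning set and test reachability from xs to ys.
    frontier = list(xs - zs)
    seen = set(frontier)
    while frontier:
        node = frontier.pop()
        if node in ys:
            return False          # a connecting path survives, so d-connected
        for nbr in adj[node] - zs:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return True


if __name__ == "__main__":
    # Collider a -> c <- b: a and b are d-separated given the empty set,
    # but become d-connected once the common child c is observed.
    dag = {"a": set(), "b": set(), "c": {"a", "b"}}
    print(d_separated(dag, {"a"}, {"b"}, set()))   # True
    print(d_separated(dag, {"a"}, {"b"}, {"c"}))   # False
```

The collider example mirrors the kind of independence statement the paper is concerned with: the criterion reads "a is independent of b" from the graph, and that reading is withdrawn once c is added to the conditioning set.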

