Causal Networks: Semantics and Expressiveness

by Tom S. Verma, et al.

Dependency knowledge of the form "x is independent of y once z is known" invariably obeys the four graphoid axioms; examples include probabilistic and database dependencies. Often, such knowledge can be represented efficiently with graphical structures such as undirected graphs and directed acyclic graphs (DAGs). In this paper we show that the graphical criterion called d-separation is a sound rule for reading independencies from any DAG constructed from a causal input list drawn from a graphoid. The rule may be extended to cover DAGs that represent functional dependencies as well as conditional dependencies.
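To make the d-separation criterion concrete, the sketch below tests it via the standard ancestral-moralization equivalence: restrict the DAG to the ancestors of the variables involved, moralize (marry co-parents and drop directions), and check ordinary graph separation. The DAG encoding ({node: list of parents}) and function names are illustrative assumptions, not notation from the paper.

```python
# Minimal sketch, assuming a DAG is given as {node: list of its parents}.
# Tests d-separation via the ancestral-moralization method, which is
# equivalent to Pearl's path-based d-separation criterion.

def d_separated(dag, xs, ys, zs):
    """Return True iff xs is d-separated from ys given zs in the DAG."""
    # 1. Restrict to the ancestral subgraph of xs, ys, and zs.
    ancestors, stack = set(), list(set(xs) | set(ys) | set(zs))
    while stack:
        node = stack.pop()
        if node not in ancestors:
            ancestors.add(node)
            stack.extend(dag.get(node, []))

    # 2. Moralize: link each node to its parents, marry co-parents,
    #    and treat all edges as undirected.
    undirected = {n: set() for n in ancestors}
    for child in ancestors:
        parents = [p for p in dag.get(child, []) if p in ancestors]
        for p in parents:
            undirected[child].add(p)
            undirected[p].add(child)
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                undirected[p].add(q)
                undirected[q].add(p)

    # 3. d-separation holds iff zs blocks every undirected path
    #    from xs to ys in the moral graph.
    blocked, seen = set(zs), set()
    frontier = [x for x in xs if x not in blocked]
    while frontier:
        node = frontier.pop()
        if node not in seen and node not in blocked:
            seen.add(node)
            frontier.extend(undirected.get(node, ()))
    return not (seen & set(ys))

# Example: in the chain a -> b -> c, conditioning on b separates a from c,
# while in the collider a -> c <- b, conditioning on c connects a and b.
chain = {"a": [], "b": ["a"], "c": ["b"]}
collider = {"a": [], "b": [], "c": ["a", "b"]}
print(d_separated(chain, {"a"}, {"c"}, {"b"}))     # True
print(d_separated(collider, {"a"}, {"b"}, {"c"}))  # False
```

The collider case illustrates the asymmetry that distinguishes DAGs from undirected models: `a` and `b` are marginally independent but become dependent once their common child `c` is observed.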


