On the Logic of Causal Models

by Dan Geiger et al.

This paper explores the role of Directed Acyclic Graphs (DAGs) as a representation of conditional independence relationships. We show that DAGs offer polynomially sound and complete inference mechanisms for inferring conditional independence relationships from a given causal set of such relationships. As a consequence, d-separation, a graphical criterion for identifying independencies in a DAG, is shown to uncover more valid independencies than any other criterion. In addition, we employ the Armstrong property of conditional independence to show that the dependence relationships displayed by a DAG are inherently consistent, i.e., for every DAG D there exists some probability distribution P that embodies all the conditional independencies displayed in D and no others.
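The d-separation criterion mentioned above can be tested algorithmically. A minimal sketch (not the paper's own code, and with hypothetical function and parameter names) uses the classic reduction: X and Y are d-separated given Z in a DAG if and only if X and Y are disconnected in the moralized ancestral graph after deleting Z.

```python
from collections import deque

def d_separated(parents, xs, ys, zs):
    """Test whether node sets xs and ys are d-separated given zs in the
    DAG described by `parents` (a dict mapping each node to its set of
    parent nodes). Returns True if xs and ys are d-separated by zs."""
    # 1. Restrict attention to the ancestral subgraph of xs, ys, and zs.
    relevant = set()
    stack = list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(parents.get(n, set()))

    # 2. Moralize: link each node to its parents, marry co-parents,
    #    and drop edge directions.
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = parents.get(n, set()) & relevant
        for p in ps:
            adj[n].add(p)
            adj[p].add(n)
        for p in ps:                      # marry parents of a common child
            for q in ps:
                if p != q:
                    adj[p].add(q)

    # 3. Delete the conditioning set zs and test reachability xs -> ys.
    blocked = set(zs)
    frontier = deque(x for x in xs if x not in blocked)
    seen = set(frontier)
    while frontier:
        n = frontier.popleft()
        if n in ys:
            return False                  # an active (unblocked) path exists
        for m in adj[n] - blocked:
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return True
```

For example, in the collider A -> C <- B, A and B are d-separated marginally but become dependent once C is observed, while in the chain A -> B -> C observing B blocks the path.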



Identifying Independencies in Causal Graphs with Feedback

We show that the d-separation criterion constitutes a valid test for co...

Nonparametric causal structure learning in high dimensions

The PC and FCI algorithms are popular constraint-based methods for learn...

On the Testable Implications of Causal Models with Hidden Variables

The validity of a causal model can be tested only if the model imposes c...

Identifying Independence in Relational Models

The rules of d-separation provide a framework for deriving conditional i...

Recovering Causal Structures from Low-Order Conditional Independencies

One of the common obstacles for learning causal models from data is that...

An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation

In a previous paper [Pearl and Verma, 1991] we presented an algorithm fo...

Causal Networks: Semantics and Expressiveness

Dependency knowledge of the form "x is independent of y once z is known"...