On the Logic of Causal Models

03/27/2013
by Dan Geiger, et al.

This paper explores the role of Directed Acyclic Graphs (DAGs) as a representation of conditional independence relationships. We show that DAGs offer polynomially sound and complete inference mechanisms for inferring conditional independence relationships from a given causal set of such relationships. As a consequence, d-separation, a graphical criterion for identifying independencies in a DAG, is shown to uncover more valid independencies than any other criterion. In addition, we employ the Armstrong property of conditional independence to show that the dependence relationships displayed by a DAG are inherently consistent, i.e., for every DAG D there exists some probability distribution P that embodies all the conditional independencies displayed in D and no others.
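As a small illustration (not taken from the paper), the d-separation criterion can be decided mechanically via the standard ancestral-moralization reduction: restrict the DAG to the ancestors of the three node sets, marry co-parents and drop edge directions, delete the conditioning set, and test undirected reachability. The sketch below assumes a DAG encoded as a parent map (`{node: set_of_parents}`); all names are our own.

```python
from itertools import combinations

def d_separated(dag, xs, ys, zs):
    """Return True iff node sets xs and ys are d-separated by zs in `dag`,
    where `dag` maps each node to the set of its parents."""
    # 1. Restrict attention to the ancestors of xs | ys | zs.
    anc, stack = set(), list(set(xs) | set(ys) | set(zs))
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(dag.get(n, set()))
    # 2. Moralize: link each child to its parents, and marry co-parents.
    adj = {n: set() for n in anc}
    for child in anc:
        parents = [p for p in dag.get(child, set()) if p in anc]
        for p in parents:
            adj[p].add(child); adj[child].add(p)
        for p, q in combinations(parents, 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Delete zs and test reachability from xs to ys in the moral graph.
    blocked = set(zs)
    seen, stack = set(), [x for x in xs if x not in blocked]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        if n in ys:
            return False  # an active path exists
        stack.extend(m for m in adj[n] if m not in blocked)
    return True

# Collider example: a -> c <- b, c -> d.
dag = {'a': set(), 'b': set(), 'c': {'a', 'b'}, 'd': {'c'}}
print(d_separated(dag, {'a'}, {'b'}, set()))   # marginally independent
print(d_separated(dag, {'a'}, {'b'}, {'d'}))   # conditioning on a collider's descendant opens the path
```

Note the collider behavior: `a` and `b` are d-separated given the empty set, but conditioning on `c` (or its descendant `d`) renders them dependent, which the moralization step captures by marrying co-parents inside the ancestral subgraph.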


Related research:

02/13/2013

Identifying Independencies in Causal Graphs with Feedback

We show that the d-separation criterion constitutes a valid test for co...
06/21/2021

Nonparametric causal structure learning in high dimensions

The PC and FCI algorithms are popular constraint-based methods for learn...
12/12/2012

On the Testable Implications of Causal Models with Hidden Variables

The validity of a causal model can be tested only if the model imposes c...
06/15/2012

Identifying Independence in Relational Models

The rules of d-separation provide a framework for deriving conditional i...
10/06/2020

Recovering Causal Structures from Low-Order Conditional Independencies

One of the common obstacles for learning causal models from data is that...
03/13/2013

An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation

In a previous paper [Pearl and Verma, 1991] we presented an algorithm fo...
03/27/2013

Causal Networks: Semantics and Expressiveness

Dependency knowledge of the form "x is independent of y once z is known"...