On the Testable Implications of Causal Models with Hidden Variables

12/12/2012
by Jin Tian, et al.

The validity of a causal model can be tested only if the model imposes constraints on the probability distribution that governs the generated data. In the presence of unmeasured variables, causal models may impose two types of constraints: conditional independencies, as read through the d-separation criterion, and functional constraints, for which no general criterion is available. This paper offers a systematic way of identifying functional constraints and, thus, facilitates the task of testing causal models as well as inferring such models from data.
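The d-separation criterion mentioned above can be checked mechanically. As a minimal illustrative sketch (not the paper's method for functional constraints, which goes beyond d-separation), the following implements the classic ancestral-moralization test for d-separation in a DAG, using plain dictionaries; all function and variable names here are made up for the example:

```python
from itertools import combinations

def d_separated(edges, xs, ys, zs):
    """Test whether node sets xs and ys are d-separated given zs in a DAG.

    Uses the standard ancestral-moralization reduction:
    1. restrict the graph to ancestors of xs | ys | zs,
    2. moralize (connect co-parents, drop edge directions),
    3. delete zs; xs and ys are d-separated iff they are disconnected.
    """
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
        parents.setdefault(u, set())

    # 1. ancestral closure of all mentioned nodes
    relevant = set(xs) | set(ys) | set(zs)
    stack = list(relevant)
    while stack:
        n = stack.pop()
        for p in parents.get(n, set()):
            if p not in relevant:
                relevant.add(p)
                stack.append(p)

    # 2. moralized undirected adjacency over the relevant nodes
    adj = {n: set() for n in relevant}
    for u, v in edges:
        if u in relevant and v in relevant:
            adj[u].add(v)
            adj[v].add(u)
    for v in relevant:
        for a, b in combinations(parents.get(v, set()) & relevant, 2):
            adj[a].add(b)
            adj[b].add(a)

    # 3. remove the conditioning set and test reachability from xs to ys
    blocked = set(zs)
    frontier = [x for x in xs if x not in blocked]
    seen = set(frontier)
    while frontier:
        n = frontier.pop()
        if n in ys:
            return False
        for m in adj[n] - blocked:
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return True
```

For example, on the chain A -> B -> C, conditioning on B blocks the path (A and C are d-separated), while on the collider A -> C <- B, conditioning on C *opens* the path. Functional (e.g., Verma-type) constraints, the paper's focus, are exactly those testable implications that no such d-separation check can detect.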
