Identifying Independence in Relational Models

06/15/2012
by Marc Maier, et al.

The rules of d-separation provide a framework for deriving conditional independence facts from model structure. However, this theory applies only to simple directed graphical models. We introduce relational d-separation, a theory for deriving conditional independence in relational models. We provide a sound, complete, and computationally efficient method for relational d-separation, and we present empirical results that demonstrate its effectiveness.
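For background, classical (non-relational) d-separation on a DAG can be decided with the standard moralization method: restrict the graph to the ancestors of the query nodes, marry co-parents and drop edge directions, delete the conditioning set, and test reachability. The sketch below is an illustration of that textbook algorithm, not of the paper's relational method; the `d_separated` helper and its inputs are hypothetical names chosen here.

```python
from collections import deque

def d_separated(parents, xs, ys, zs):
    """Decide whether node sets xs and ys are d-separated given zs in a DAG.

    `parents` maps each node to the set of its parents. Uses the classic
    moralization method: restrict to the ancestral subgraph of xs|ys|zs,
    moralize (marry co-parents, drop directions), delete zs, and check
    whether any node in xs can still reach a node in ys.
    """
    # 1. Ancestral subgraph: the query nodes and all their ancestors.
    relevant = set()
    stack = list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(parents.get(n, ()))

    # 2. Moralize: undirected parent-child edges, plus co-parent links.
    adj = {n: set() for n in relevant}
    for child in relevant:
        ps = [p for p in parents.get(child, ()) if p in relevant]
        for p in ps:
            adj[child].add(p)
            adj[p].add(child)
        for i, p in enumerate(ps):          # marry co-parents
            for q in ps[i + 1:]:
                adj[p].add(q)
                adj[q].add(p)

    # 3. Delete conditioning nodes, then 4. BFS from xs toward ys.
    blocked = set(zs)
    seen = set(xs) - blocked
    frontier = deque(seen)
    while frontier:
        n = frontier.popleft()
        if n in ys:
            return False                    # a path survives -> dependent
        for m in adj[n]:
            if m not in blocked and m not in seen:
                seen.add(m)
                frontier.append(m)
    return True
```

On a chain A -> B -> C, conditioning on B separates A from C, while in a collider A -> C <- B the endpoints are marginally independent but become dependent once C is conditioned on.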

09/26/2013

A Sound and Complete Algorithm for Learning Causal Models from Relational Data

The PC algorithm learns maximally oriented causal Bayesian networks. How...
05/30/2021

Approximate Implication with d-Separation

The graphical structure of Probabilistic Graphical Models (PGMs) encodes...
03/27/2013

On the Logic of Causal Models

This paper explores the role of Directed Acyclic Graphs (DAGs) as a repr...
08/06/2021

Topological Conditional Separation

Pearl's d-separation is a foundational notion to study conditional indep...
12/05/2019

Towards Robust Relational Causal Discovery

We consider the problem of learning causal relationships from relational...
05/19/2021

Complementary Structure-Learning Neural Networks for Relational Reasoning

The neural mechanisms supporting flexible relational inferences, especia...
02/22/2022

Relational Causal Models with Cycles: Representation and Reasoning

Causal reasoning in relational domains is fundamental to studying real-w...