Identifying Independence in Relational Models

06/15/2012
by Marc Maier, et al.

The rules of d-separation provide a framework for deriving conditional independence facts from model structure. However, this theory applies only to simple directed graphical models. We introduce relational d-separation, a theory for deriving conditional independence in relational models. We provide a sound, complete, and computationally efficient method for relational d-separation, and we present empirical results that demonstrate its effectiveness.
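For readers unfamiliar with the classical (non-relational) setting the abstract builds on, here is a minimal sketch of d-separation in an ordinary DAG, using the standard moralized-ancestral-graph test: restrict to ancestors of the query variables, "marry" co-parents, drop edge directions, delete the conditioning set, and check reachability. The dict-of-parents representation and node names are illustrative choices of mine; this is the propositional case only, not the paper's relational algorithm.

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All ancestors of `nodes` (inclusive). `dag` maps node -> set of parents."""
    seen = set(nodes)
    stack = list(nodes)
    while stack:
        for p in dag.get(stack.pop(), set()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, xs, ys, zs):
    """True iff xs is d-separated from ys given zs, via the moral graph test."""
    relevant = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Build the undirected moral graph on the ancestral subgraph.
    adj = {n: set() for n in relevant}
    for child in relevant:
        parents = dag.get(child, set()) & relevant
        for p in parents:                      # keep parent-child edges
            adj[p].add(child); adj[child].add(p)
        for p, q in combinations(parents, 2):  # "marry" co-parents
            adj[p].add(q); adj[q].add(p)
    # Delete the conditioning set, then test reachability from xs to ys.
    blocked = set(zs)
    stack = [x for x in xs if x not in blocked]
    seen = set(stack)
    while stack:
        n = stack.pop()
        if n in ys:
            return False
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m)
                stack.append(m)
    return True

# Chain A -> B -> C: conditioning on B blocks the path.
chain = {"A": set(), "B": {"A"}, "C": {"B"}}
print(d_separated(chain, {"A"}, {"C"}, {"B"}))     # True
print(d_separated(chain, {"A"}, {"C"}, set()))     # False

# Collider A -> C <- B: conditioning on C *opens* the path.
collider = {"A": set(), "B": set(), "C": {"A", "B"}}
print(d_separated(collider, {"A"}, {"B"}, set()))  # True
print(d_separated(collider, {"A"}, {"B"}, {"C"}))  # False
```

The collider example shows why conditioning can create, not just remove, dependence: marrying the co-parents A and B puts an edge between them in the moral graph, so deleting C no longer disconnects them. The paper's contribution is extending this kind of reasoning from single DAGs to relational models.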

Related research

- A Sound and Complete Algorithm for Learning Causal Models from Relational Data (09/26/2013)
- Approximate Implication with d-Separation (05/30/2021)
- On the Logic of Causal Models (03/27/2013)
- Towards Robust Relational Causal Discovery (12/05/2019)
- Relational Causal Models with Cycles: Representation and Reasoning (02/22/2022)
- Can Evidence Be Combined in the Dempster-Shafer Theory? (03/27/2013)
- Complementary Structure-Learning Neural Networks for Relational Reasoning (05/19/2021)
