A Sound and Complete Algorithm for Learning Causal Models from Relational Data

by Marc Maier, et al.

The PC algorithm learns maximally oriented causal Bayesian networks. However, there is no equivalent complete algorithm for learning the structure of relational models, a more expressive generalization of Bayesian networks. Recent developments in the theory and representation of relational models support lifted reasoning about conditional independence. This enables a powerful constraint for orienting bivariate dependencies and forms the basis of a new algorithm for learning structure. We present the relational causal discovery (RCD) algorithm that learns causal relational models. We prove that RCD is sound and complete, and we present empirical results that demonstrate its effectiveness.
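Like PC, RCD is a constraint-based learner: it removes edges whenever a conditional-independence test finds a separating set, then orients what remains. The sketch below illustrates only the propositional, PC-style skeleton phase on a toy three-variable collider; the `skeleton` and `toy_indep` names and the hand-coded independence oracle are illustrative assumptions, not the paper's relational algorithm.

```python
from itertools import combinations

def skeleton(variables, indep):
    """PC-style skeleton phase: drop edge X-Y when X is independent of Y
    given some subset S of X's remaining neighbors (tested by `indep`)."""
    adj = {v: set(variables) - {v} for v in variables}  # start complete
    sepset = {}
    depth = 0
    while any(len(adj[v]) - 1 >= depth for v in variables):
        for x in variables:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if len(others) < depth:
                    continue
                for s in combinations(sorted(others), depth):
                    if indep(x, y, frozenset(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = frozenset(s)
                        break
        depth += 1
    return adj, sepset

# Hand-coded oracle for the collider X -> Z <- Y: X and Y are
# marginally independent but become dependent once we condition on Z.
def toy_indep(x, y, cond):
    if frozenset((x, y)) == frozenset(("X", "Y")):
        return "Z" not in cond
    return False

adj, sepset = skeleton(["X", "Y", "Z"], toy_indep)
print(sorted(adj["X"]))  # the X-Y edge is removed; X-Z survives
```

The empty separating set recorded for (X, Y) is exactly what a later orientation phase would use to identify Z as a collider; RCD's contribution is a lifted analogue of these tests plus a relational orientation rule for bivariate dependencies.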


Identifying Independence in Relational Models

The rules of d-separation provide a framework for deriving conditional i...

Towards Robust Relational Causal Discovery

We consider the problem of learning causal relationships from relational...

Relational Causal Models with Cycles: Representation and Reasoning

Causal reasoning in relational domains is fundamental to studying real-w...

Learning Sparse Causal Models is not NP-hard

This paper shows that causal model discovery is not an NP-hard problem, ...

Lifted Representation of Relational Causal Models Revisited: Implications for Reasoning and Structure Learning

Maier et al. (2010) introduced the relational causal model (RCM) for rep...

Amortized learning of neural causal representations

Causal models can compactly and efficiently encode the data-generating p...

Probabilistic Relational Model Benchmark Generation

The validation of any database mining methodology goes through an evalua...