
Identifying Independence in Relational Models
The rules of d-separation provide a framework for deriving conditional i...

Towards Robust Relational Causal Discovery
We consider the problem of learning causal relationships from relational...

Lifted Representation of Relational Causal Models Revisited: Implications for Reasoning and Structure Learning
Maier et al. (2010) introduced the relational causal model (RCM) for rep...

Learning Sparse Causal Models is not NP-hard
This paper shows that causal model discovery is not an NP-hard problem, ...

The Cognitive Processing of Causal Knowledge
There is a brief description of the probabilistic causal graph model for...

Amortized learning of neural causal representations
Causal models can compactly and efficiently encode the data-generating p...

Probabilistic Relational Model Benchmark Generation
The validation of any database mining methodology goes through an evalua...
A Sound and Complete Algorithm for Learning Causal Models from Relational Data
The PC algorithm learns maximally oriented causal Bayesian networks. However, there is no equivalent complete algorithm for learning the structure of relational models, a more expressive generalization of Bayesian networks. Recent developments in the theory and representation of relational models support lifted reasoning about conditional independence. This enables a powerful constraint for orienting bivariate dependencies and forms the basis of a new structure-learning algorithm. We present the relational causal discovery (RCD) algorithm, which learns causal relational models. We prove that RCD is sound and complete, and we present empirical results that demonstrate its effectiveness.
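To make the constraint-based approach underlying PC (and, by extension, RCD) concrete, the sketch below shows the skeleton-identification phase of a PC-style search: start from a complete undirected graph and delete an edge X–Y whenever X and Y are conditionally independent given some subset of their current neighbors. This is a minimal illustration, not the RCD algorithm itself; the `indep` oracle and the toy chain A → B → C are assumptions for the example, and a real implementation would replace the oracle with statistical independence tests on data.

```python
from itertools import combinations

def pc_skeleton(nodes, indep):
    """Skeleton phase of a PC-style constraint-based search.

    `indep(x, y, S)` is a conditional-independence oracle that
    returns True iff X is independent of Y given the set S.
    Starting from a complete graph, remove each edge whose
    endpoints are separated by some subset of their neighbors,
    testing conditioning sets of increasing size.
    """
    adj = {v: set(nodes) - {v} for v in nodes}
    sepset = {}  # records the separating set found for each removed edge
    depth = 0
    while any(len(adj[x] - {y}) >= depth for x in nodes for y in adj[x]):
        for x in nodes:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if len(others) < depth:
                    continue
                for S in combinations(sorted(others), depth):
                    if indep(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
        depth += 1
    return adj, sepset

def indep(x, y, S):
    # Hypothetical oracle for the chain A -> B -> C:
    # the only independence is A and C given {B}.
    return {x, y} == {"A", "C"} and "B" in S

adj, sep = pc_skeleton(["A", "B", "C"], indep)
# The edge A-C is removed with separating set {B};
# the skeleton keeps A-B and B-C.
```

The recorded separating sets are what the orientation phase later uses: a triple X–Z–Y with X and Y nonadjacent is oriented as a collider X → Z ← Y exactly when Z is absent from their separating set.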