Learning from Pairwise Marginal Independencies

08/02/2015
by Johannes Textor, et al.

We consider graphs that represent pairwise marginal independencies amongst a set of variables (for instance, the zero entries of a covariance matrix for normal data). We characterize the directed acyclic graphs (DAGs) that faithfully explain a given set of independencies, and derive algorithms to efficiently enumerate such structures. Our results map out the space of faithful causal models for a given set of pairwise marginal independence relations. This allows us to show the extent to which causal inference is possible without using conditional independence tests.
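
As a rough illustration of the paper's input (not of its algorithm), the sketch below estimates a pairwise marginal independence graph from multivariate normal data by testing each pairwise correlation for zero with a Fisher z-test. The function name marginal_independence_graph and the alpha threshold are assumptions made for this example only.

```python
import numpy as np
from scipy import stats

def marginal_independence_graph(data, alpha=0.05):
    """Return the set of variable pairs that appear marginally *dependent*;
    its complement encodes the pairwise marginal independencies.
    For multivariate normal data, zero correlation is equivalent to
    marginal independence, so a Fisher z-test of rho = 0 suffices."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    edges = set()
    for i in range(p):
        for j in range(i + 1, p):
            r = np.clip(corr[i, j], -0.999999, 0.999999)
            z = 0.5 * np.log((1 + r) / (1 - r))   # Fisher z-transform
            stat = abs(z) * np.sqrt(n - 3)        # ~ N(0, 1) under H0: rho = 0
            pval = 2 * (1 - stats.norm.cdf(stat))
            if pval < alpha:                      # reject independence -> keep edge
                edges.add((i, j))
    return edges

# Example: X -> Z <- Y (a collider). X and Y are marginally independent,
# so the dependence graph should contain only the pairs (0, 2) and (1, 2).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = rng.normal(size=5000)
z = x + y + rng.normal(size=5000)
print(marginal_independence_graph(np.column_stack([x, y, z])))
```

A DAG faithfully explains the resulting graph when exactly the pairs missing from the returned set are d-separated by the empty set; enumerating such DAGs is the problem the paper addresses.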

Related research:

- Recovering Causal Structures from Low-Order Conditional Independencies (10/06/2020): One of the common obstacles for learning causal models from data is that...
- Causal Inference Through the Structural Causal Marginal Problem (02/02/2022): We introduce an approach to counterfactual inference based on merging in...
- Towards Characterizing Markov Equivalence Classes for Directed Acyclic Graphs with Latent Variables (07/04/2012): It is well known that there may be many causal explanations that are con...
- Every LWF and AMP chain graph originates from a set of causal models (12/10/2013): This paper aims at justifying LWF and AMP chain graphs by showing that t...
- Combinatorial and algebraic perspectives on the marginal independence structure of Bayesian networks (10/03/2022): We consider the problem of estimating the marginal independence structur...
- Structuring Causal Tree Models with Continuous Variables (03/27/2013): This paper considers the problem of invoking auxiliary, unobservable var...
- Temporal Inference with Finite Factored Sets (09/23/2021): We propose a new approach to temporal inference, inspired by the Pearlia...
