
Learning Sparse Causal Models is not NP-hard
This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst case N^{2(k+2)} independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. This result does not contradict any known hardness results, and the algorithm does not solve an NP-hard problem: it simply shows that sparse causal discovery, while perhaps more involved, is not as hard as learning minimal Bayesian networks.
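The key idea behind the stated N^{2(k+2)} bound is that with node degree bounded by k, an FCI/PC-style adjacency search only ever needs conditioning sets of bounded size. As a rough illustration, here is a minimal sketch of such a bounded adjacency search against an independence oracle. This is not the paper's modified FCI algorithm; the `indep(x, y, S)` oracle interface and the `skeleton_search` function are assumptions made for illustration.

```python
from itertools import combinations

def skeleton_search(nodes, indep, k):
    """Sketch of a PC/FCI-style adjacency search with conditioning sets
    of size at most k (the assumed degree bound).

    `indep(x, y, S)` is a hypothetical independence oracle returning True
    iff x is independent of y given the set S.
    """
    # Start from the complete undirected graph over the nodes.
    adj = {v: set(nodes) - {v} for v in nodes}
    sepset = {}
    # Grow the conditioning-set size from 0 up to the degree bound k.
    for depth in range(k + 1):
        for x in nodes:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if len(others) < depth:
                    continue
                # Test x ⫫ y | S for all S of the current size.
                for S in combinations(sorted(others), depth):
                    if indep(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
    return adj, sepset
```

For example, with an oracle encoding the chain A - B - C (so A ⫫ C given {B} and nothing else), the search keeps the edges A-B and B-C and removes A-C, recording {B} as the separating set. Because each node has at most k neighbors, the number of conditioning sets tried per pair is polynomial in N for fixed k, which is the intuition behind the polynomial worst-case test count.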