- Conditionally-additive-noise Models for Structure Learning
  Constraint-based structure learning algorithms infer the causal structur...
- On Causal Discovery with Equal Variance Assumption
  Prior work has shown that causal structure can be uniquely identified fr...
- Conditional Independences and Causal Relations implied by Sets of Equations
  Real-world systems are often modelled by sets of equations with exogenou...
- Identifying confounders using additive noise models
  We propose a method for inferring the existence of a latent common cause...
- Learning Quadratic Variance Function (QVF) DAG models via OverDispersion Scoring (ODS)
  Learning DAG or Bayesian network models is an important problem in multi...
- Causal Autoregressive Flows
  Two apparently unrelated fields – normalizing flows and causality – have...
- Identifiability of an Integer Modular Acyclic Additive Noise Model and its Causal Structure Discovery
  The notion of causality is used in many situations dealing with uncertai...
Beware of the Simulated DAG! Varsortability in Additive Noise Models
Additive noise models are a class of causal models in which each variable is defined as a function of its causes plus independent noise. In such models, the ordering of variables by marginal variance may be indicative of the causal order. We introduce varsortability as a measure of agreement between the ordering by marginal variance and the causal order. We show how varsortability dominates the performance of continuous structure learning algorithms on synthetic data. On real-world data, high varsortability is an implausible and untestable assumption, and we find no indication of it. We aim to raise awareness that varsortability easily arises in simulated additive noise models. We provide a baseline method that explicitly exploits varsortability and advocate reporting varsortability for benchmark data.
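To make the quantity concrete, the sketch below computes a varsortability-style score for data with a known DAG: the path-weighted fraction of cause-effect pairs whose marginal variance increases along the causal order. This is an illustrative reading of the measure described in the abstract, not the authors' reference implementation; the function name, tolerance, and exact path weighting are assumptions.

```python
import numpy as np

def varsortability(X, A, tol=1e-9):
    """Score in [0, 1]: agreement between variance ordering and causal order.

    X: (n_samples, d) data matrix.
    A: (d, d) binary adjacency matrix, A[i, j] = 1 means edge i -> j.

    Illustrative sketch: pairs (i, j) connected by a directed path are
    weighted by the number of such paths (counted via powers of A), and a
    pair counts as sorted when var(X_i) < var(X_j).
    """
    d = A.shape[0]
    var = np.var(X, axis=0)
    Ak = np.eye(d)                    # A^k accumulates path counts
    n_agree, n_paths = 0.0, 0.0
    for _ in range(d - 1):            # paths of length 1 .. d-1 in a DAG
        Ak = Ak @ A
        n_paths += Ak.sum()
        # strictly increasing variance counts fully, ties count half
        n_agree += (Ak * (var[:, None] < var[None, :] - tol)).sum()
        n_agree += 0.5 * (Ak * (np.abs(var[:, None] - var[None, :]) <= tol)).sum()
    return n_agree / n_paths if n_paths else np.nan
```

On a simulated linear chain X1 -> X2 -> X3 with edge weights larger than 1 and unit noise, marginal variances grow along the causal order and the score is 1, illustrating why variance sorting can dominate benchmarks built from such simulations.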