
Gradient-Based Neural DAG Learning

by   Sébastien Lachapelle, et al.
Université de Montréal

We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows us to model complex interactions while remaining more global in its search than greedy approaches. In addition to comparing our method to existing continuous optimization methods, we provide previously missing empirical comparisons against nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on metrics important for causal inference.
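The continuous constrained formulation referenced in the abstract casts acyclicity as a smooth equality constraint on the (weighted) adjacency matrix, so that standard gradient-based solvers can search over graphs. A minimal sketch of that constraint, assuming the trace-of-matrix-exponential penalty from the NOTEARS line of work (the function and variable names here are illustrative, not the paper's code):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_constraint(W):
    """Smooth acyclicity penalty h(W) = tr(exp(W * W)) - d.

    W is a d x d weighted adjacency matrix (W * W is elementwise).
    h(W) == 0 exactly when W encodes a DAG; h(W) > 0 otherwise,
    so h can be driven to zero inside an augmented-Lagrangian loop.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# Acyclic graph: single edge 0 -> 1, so the penalty vanishes.
W_dag = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

# Cyclic graph: 0 -> 1 and 1 -> 0, so the penalty is strictly positive.
W_cyc = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
```

Because h is differentiable in W, the same penalty can be applied to an adjacency matrix derived from neural-network weights, which is the kind of extension the abstract describes.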



