Gradient-Based Neural DAG Learning

06/05/2019
by Sébastien Lachapelle, et al.

We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows us to model complex interactions while searching more globally than greedy approaches. In addition to comparing our method to existing continuous optimization methods, we provide previously missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while remaining competitive with existing greedy search methods on metrics that matter for causal inference.
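The continuous constrained formulation the abstract refers to (Zheng et al., 2018, "DAGs with NO TEARS") replaces the combinatorial acyclicity constraint with a smooth equality constraint h(W) = tr(e^{W ∘ W}) − d = 0, which holds exactly when the weighted adjacency matrix W encodes a DAG. The sketch below computes this penalty; it is a minimal illustration of that constraint, not the paper's implementation, which derives the adjacency matrix from neural network path weights and enforces the constraint via an augmented Lagrangian.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W: np.ndarray) -> float:
    """Continuous acyclicity measure h(W) = tr(e^{W ∘ W}) - d.

    h(W) == 0 iff W encodes a DAG (Zheng et al., 2018). Score-based
    methods minimize a data-fit score subject to h(W) = 0.
    """
    d = W.shape[0]
    # Element-wise square makes entries nonnegative before the
    # matrix exponential counts weighted cycles of every length.
    return float(np.trace(expm(W * W)) - d)

# Toy check: a 3-node chain (a DAG) vs. a 2-cycle.
dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
cyclic = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
print(acyclicity_penalty(dag))     # ~0.0: no cycles
print(acyclicity_penalty(cyclic))  # > 0: penalizes the 2-cycle
```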


