
Learning Sparse Nonparametric DAGs

by Xun Zheng, et al. (Carnegie Mellon University; The University of Chicago Booth School of Business)

We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data. Our approach is based on a recent algebraic characterization of DAGs that led to the first fully continuous optimization for score-based learning of DAG models parametrized by a linear structural equation model (SEM). We extend this algebraic characterization to nonparametric SEMs by leveraging nonparametric sparsity based on partial derivatives, resulting in a continuous optimization problem that applies to a variety of nonparametric and semiparametric models, including GLMs, additive noise models, and index models as special cases. We also explore the use of neural networks and orthogonal basis expansions to model nonlinearities in general nonparametric models. An extensive empirical study confirms the necessity of nonlinear dependency and the advantage of continuous optimization for score-based learning.
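The algebraic characterization the abstract refers to is the trace-exponential acyclicity function from the earlier continuous-optimization work on linear SEMs: h(W) = tr(exp(W ∘ W)) − d, which is zero if and only if the weighted adjacency matrix W corresponds to a DAG. A minimal sketch, assuming NumPy/SciPy; the function name and example matrices are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import expm


def acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d, where ∘ is the Hadamard product.

    h(W) == 0 exactly when the nonzero pattern of W is a DAG;
    h(W) > 0 counts (weighted) closed walks, i.e. cycles.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d


# Acyclic example: strictly upper-triangular weights (0 -> 1 -> 2).
W_dag = np.array([[0.0, 1.5, 0.0],
                  [0.0, 0.0, -0.7],
                  [0.0, 0.0, 0.0]])

# Cyclic example: 0 -> 1 and 1 -> 0 form a two-node cycle.
W_cyc = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

print(acyclicity(W_dag))  # ≈ 0 (acyclic)
print(acyclicity(W_cyc))  # > 0 (cycle present)
```

Because h is smooth in W, it can serve as an equality constraint in a continuous program (e.g. via an augmented Lagrangian), which is what makes fully gradient-based score-based structure learning possible; the nonparametric extension in the paper replaces the linear coefficients with partial-derivative-based measures of dependence.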

Related research:

- Priors on exchangeable directed graphs
- Nonparametric Basis Pursuit via Sparse Kernel-based Learning
- Nonparametric sparsity and regularization
- Score-based Causal Learning in Additive Noise Models
- Inference on function-valued parameters using a restricted score test
- The Indian Chefs Process
- On the Role of Sparsity and DAG Constraints for Learning Linear DAGs