Order-based Structure Learning without Score Equivalence

02/10/2022
by Hyunwoong Chang, et al.

We consider the structure learning problem with all node variables having the same error variance, an assumption known to ensure the identifiability of the causal directed acyclic graph (DAG). We propose an empirical Bayes formulation of the problem that yields a non-decomposable posterior score for DAG models. To facilitate efficient posterior computation, we approximate the posterior probability of each ordering by that of a "best" DAG model, which naturally leads to an order-based Markov chain Monte Carlo (MCMC) algorithm. Strong selection consistency for our model is proved under mild high-dimensional conditions, and the mixing behavior of our sampler is theoretically investigated. Further, we propose a new iterative top-down algorithm, which quickly yields an approximate solution to the structure learning problem and can be used to initialize the MCMC sampler. We demonstrate that our method outperforms other state-of-the-art algorithms under various simulation settings, and conclude the paper with a single-cell real-data study illustrating practical advantages of the proposed method.
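To make the order-based idea concrete, below is a minimal, hedged sketch (not the authors' implementation) of a Metropolis-Hastings sampler over node orderings. Each ordering is scored by a surrogate "best" DAG consistent with it: for every node, parents are selected among its predecessors by greedy forward selection with a BIC-style penalty, under the equal error variance assumption. The function names (score_order, order_mcmc) and the greedy parent search are illustrative assumptions standing in for the paper's empirical Bayes posterior score.

```python
# Hedged sketch of an order-based MCMC sampler for equal-variance Gaussian DAGs.
# Not the paper's exact algorithm: the ordering score below is a greedy BIC-style
# surrogate for the "best DAG given an ordering" approximation described above.
import numpy as np

def node_score(X, j, parents, sigma2):
    """Gaussian log-likelihood of node j given a parent set, common variance sigma2,
    minus a BIC-style complexity penalty (constants dropped)."""
    n = X.shape[0]
    y = X[:, j]
    if parents:
        Z = X[:, parents]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
    else:
        resid = y
    rss = resid @ resid
    return -rss / (2.0 * sigma2) - 0.5 * len(parents) * np.log(n)

def score_order(X, order, sigma2):
    """Approximate an ordering's score by a greedily selected DAG consistent with it."""
    total = 0.0
    for pos, j in enumerate(order):
        candidates = list(order[:pos])          # only predecessors may be parents
        parents, best = [], node_score(X, j, [], sigma2)
        improved = True
        while improved and candidates:
            improved = False
            for c in list(candidates):
                s = node_score(X, j, parents + [c], sigma2)
                if s > best:
                    best, improved = s, True
                    parents.append(c)
                    candidates.remove(c)
                    break
        total += best
    return total

def order_mcmc(X, n_iter=2000, sigma2=1.0, seed=0):
    """Metropolis-Hastings over orderings using adjacent-transposition proposals."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    order = list(rng.permutation(p))
    cur = score_order(X, order, sigma2)
    for _ in range(n_iter):
        i = rng.integers(p - 1)
        prop = order.copy()
        prop[i], prop[i + 1] = prop[i + 1], prop[i]   # swap two adjacent nodes
        new = score_order(X, prop, sigma2)
        if np.log(rng.uniform()) < new - cur:          # symmetric proposal, so no correction
            order, cur = prop, new
    return order, cur
```

In practice the paper initializes such a sampler with its iterative top-down algorithm rather than a random permutation; the random initialization here is only for a self-contained illustration.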

