Minimax Posterior Convergence Rates and Model Selection Consistency in High-dimensional DAG Models based on Sparse Cholesky Factors

11/15/2018
by Kyoungjae Lee, et al.

In this paper, we study high-dimensional sparse directed acyclic graph (DAG) models under the empirical sparse Cholesky prior. Among our results, strong model selection consistency, or graph selection consistency, is obtained under more general conditions than those in the existing literature. Compared to Cao, Khare and Ghosh (2017), the required conditions are weakened in terms of the dimensionality, the sparsity, and the lower bound on the nonzero elements of the Cholesky factor. Furthermore, our result does not require the irrepresentable condition, which is necessary for Lasso-type methods. We also derive posterior convergence rates for precision matrices and Cholesky factors with respect to various matrix norms. The obtained posterior convergence rates are the fastest among those of existing Bayesian approaches. In particular, we prove that our posterior convergence rates for Cholesky factors are minimax, or at least nearly minimax, depending on the size of the true sparsity relative to the dimension. A simulation study confirms that the proposed method outperforms competing methods.
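As context for the abstract, the following parametrization is standard in the Gaussian DAG / sparse Cholesky literature (e.g., Cao, Khare and Ghosh (2017)); it is given here as a hedged sketch of the model setup rather than quoted from the paper itself. The precision matrix is written through its modified Cholesky decomposition,

\Omega = (I_p - A)^\top D^{-1} (I_p - A),

where A is a strictly lower-triangular matrix (the Cholesky factor) and D = \mathrm{diag}(d_1, \dots, d_p) collects the conditional variances. Under this decomposition the support of A encodes the parent sets of the DAG, so recovering the nonzero pattern of A amounts to graph selection, and the sparsity and minimal signal size of A are the quantities on which the selection conditions above are imposed.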


