A framework for posterior consistency in model selection

06/11/2018
by David Rossell, et al.

We develop a theoretical framework for the frequentist assessment of Bayesian model selection, specifically its ability to select the (Kullback-Leibler) optimal model and to portray the corresponding uncertainty. The contribution is not to prove consistency for a specific prior, but to give a general strategy for such proofs. The framework applies to any model, prior, sample size, and parameter dimensionality, and (although only briefly exploited here) under model misspecification. As an immediate consequence, it also characterizes a strong form of convergence for L_0 penalties and the associated pseudo-posterior probabilities, of potential interest for uncertainty quantification. The main advantage of the framework is that, instead of studying complex high-dimensional stochastic sums, it suffices to bound certain Bayes factor tails and then use standard tools to determine the convergence of deterministic series. As a second contribution, we deploy the framework to canonical linear regression. These findings give a high-level description of when consistency can be achieved, and at what rate, for a wide class of priors as a function of the data-generating truth, sample size, and dimensionality. They also indicate when less sparse priors can be used to improve inherent sparsity-versus-power trade-offs that are not adequately captured by studying asymptotically optimal rates. Our empirical illustrations align with these findings, underlining the importance of judging model selection procedures by the characteristics of the problem at hand, rather than relying purely on asymptotics.
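As a concrete illustration of the quantities involved, the sketch below computes posterior model probabilities for all subsets of covariates in a small linear regression, using closed-form Bayes factors against the null model. The choice of Zellner's g-prior (with the Liang et al. 2008 fixed-g Bayes factor), the uniform model prior, and the toy data are assumptions made here for illustration; they are not the specific priors analyzed in the paper. Model selection then amounts to ranking models gamma by p(gamma | y), which is proportional to BF(gamma : null) under a uniform model prior.

```python
import itertools
import numpy as np

def gprior_log_bf(y, X, gamma, g):
    """Log Bayes factor of model gamma vs. the intercept-only null model
    under Zellner's g-prior with fixed g (Liang et al., 2008):
    BF = (1+g)^((n-1-p_gamma)/2) * (1 + g*(1-R^2))^(-(n-1)/2)."""
    n = len(y)
    yc = y - y.mean()              # center out the common intercept
    p_gamma = len(gamma)
    if p_gamma == 0:
        return 0.0                 # null vs. null
    Xg = X[:, list(gamma)]
    Xg = Xg - Xg.mean(axis=0)
    beta_hat, *_ = np.linalg.lstsq(Xg, yc, rcond=None)
    rss = np.sum((yc - Xg @ beta_hat) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    return (0.5 * (n - 1 - p_gamma) * np.log1p(g)
            - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2)))

# Hypothetical toy data: 2 active covariates out of 4
rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.standard_normal((n, p))
y = 1.0 + X[:, 0] - 0.8 * X[:, 1] + rng.standard_normal(n)

g = float(n)  # unit-information choice of g
models = [m for k in range(p + 1) for m in itertools.combinations(range(p), k)]
log_post = np.array([gprior_log_bf(y, X, m, g) for m in models])
post = np.exp(log_post - log_post.max())   # uniform model prior
post /= post.sum()                          # normalize to posterior probabilities
for m, pr in sorted(zip(models, post), key=lambda t: -t[1])[:3]:
    print(m, round(pr, 3))
```

In the paper's terms, posterior consistency concerns the behavior of such posterior model probabilities as the sample size and dimensionality grow; the framework reduces that question to bounding the tails of the individual Bayes factors computed above.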


