Variable Selection Consistency of Gaussian Process Regression

12/12/2019
by   Sheng Jiang, et al.

Bayesian nonparametric regression under a rescaled Gaussian process prior offers smoothness-adaptive function estimation with near minimax-optimal error rates. Hierarchical extensions of this approach, equipped with stochastic variable selection, are known to also adapt to the unknown intrinsic dimension of a sparse true regression function. But it remains unclear whether such extensions offer variable selection consistency, i.e., whether the true subset of important variables can be consistently learned from the data. It is shown here that variable selection consistency may indeed be achieved with such models, at least when the true regression function has finite smoothness, which induces a polynomially larger penalty on the inclusion of false-positive predictors. Our result covers the high-dimensional asymptotic setting in which the predictor dimension is allowed to grow with the sample size. The proof uses Schwartz theory to establish that the posterior probability of wrong selection vanishes asymptotically. A necessary and challenging technical development involves providing sharp upper and lower bounds on small-ball probabilities at all rescaling levels of the Gaussian process prior, a result that may be of independent interest.
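The abstract's notion of learning the subset of important variables can be illustrated with a minimal sketch. The snippet below is not the paper's hierarchical prior or its rescaling scheme; it is a plain-numpy toy in which each candidate subset of predictors is scored by the Gaussian process log marginal likelihood under a fixed squared-exponential kernel, and the best-scoring subset is taken as the selected model. The kernel hyperparameters, the data-generating function, and the exhaustive subset search are all illustrative assumptions.

```python
import itertools
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel restricted to the selected coordinates.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def log_marginal_likelihood(X, y, noise=0.1):
    # Standard GP-regression evidence via a Cholesky factorization.
    n = len(y)
    K = rbf_kernel(X) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(0)
n, d = 60, 4
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(n)  # only x_0 is relevant

# Score every nonempty subset of predictors and keep the best one.
subsets = (s for k in range(1, d + 1) for s in itertools.combinations(range(d), k))
best = max(subsets, key=lambda s: log_marginal_likelihood(X[:, s], y))
print(best)
```

A fully Bayesian treatment would instead place a prior over subsets (and over the rescaling parameter) and sample from the posterior; the marginal-likelihood comparison above is only the point-estimate analogue of that selection mechanism.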


