Adaptive Bayesian Shrinkage Estimation Using Log-Scale Shrinkage Priors

01/08/2018
by Daniel F. Schmidt, et al.

Global-local shrinkage hierarchies are an important recent innovation in the Bayesian estimation of regression models. In this paper we propose to use log-scale distributions as a basis for generating families of flexible prior distributions for the local shrinkage hyperparameters within such hierarchies. An important property of the log-scale priors is that, by varying the scale parameter, one may vary the degree to which the prior distribution promotes sparsity in the coefficient estimates, all the way from the simple proportional shrinkage of ridge regression up to extremely heavy-tailed, sparsity-inducing prior distributions. By examining the class of distributions over the logarithm of the local shrinkage parameter that have log-linear, or sub-log-linear, tails, we show that many of the standard prior distributions for local shrinkage parameters can be unified in terms of the tail behaviour and concentration properties of their corresponding marginal distributions over the coefficients β_j. We use these results to derive upper bounds on the rate of concentration around |β_j| = 0, and on the rate of tail decay as |β_j| → ∞, achievable by this class of prior distributions. We then propose a new type of ultra-heavy-tailed prior, called the log-t prior, with the property that, irrespective of the choice of associated scale parameter, the induced marginal distribution over β_j always diverges at β_j = 0 and always possesses super-Cauchy tails. Finally, we propose to incorporate the scale parameter of the log-scale prior distributions into the Bayesian hierarchy and derive an adaptive shrinkage procedure. Simulations show that, in contrast to a number of standard prior distributions, our adaptive log-t procedure appears to perform well irrespective of the level of sparsity or the signal-to-noise ratio of the underlying model.
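To make the hierarchy concrete, the sketch below is an illustration of the construction described in the abstract, not code from the paper: it draws coefficients from a global-local model β_j | λ_j, τ ~ N(0, τ² λ_j²) in which log λ_j is given a Student-t prior, i.e. a log-t local prior. The degrees of freedom ν and scale s used here are illustrative assumptions.

# Hypothetical illustration (not the authors' code): draw coefficients from a
# global-local hierarchy with a log-t local prior,
#   beta_j | lambda_j, tau ~ N(0, tau^2 * lambda_j^2),   log(lambda_j) ~ t_nu(0, s)
import numpy as np

rng = np.random.default_rng(0)

def sample_beta_log_t(n, tau=1.0, nu=2.0, s=1.0):
    """Draw n coefficients whose local shrinkage scales have a log-t prior."""
    log_lam = s * rng.standard_t(nu, size=n)       # log(lambda_j) ~ t_nu(0, s)
    lam = np.exp(log_lam)                          # local shrinkage scales lambda_j
    return rng.normal(0.0, tau * lam, size=n)      # beta_j | lambda_j, tau

# Crude comparison of marginal tail heaviness against Gaussian (ridge-like) draws.
beta_logt = sample_beta_log_t(100_000)
beta_ridge = rng.normal(0.0, 1.0, size=100_000)
print("99.9% quantile, log-t marginal:   ", np.quantile(np.abs(beta_logt), 0.999))
print("99.9% quantile, Gaussian marginal:", np.quantile(np.abs(beta_ridge), 0.999))

Under this construction the extreme quantiles of the log-t marginal come out far larger than those of the Gaussian marginal, reflecting the heavy tails discussed above; the choices ν = 2 and s = 1 are arbitrary here.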


Related research

Heavy Tailed Horseshoe Priors (03/03/2019)
Locally adaptive shrinkage in the Bayesian framework is achieved through...

Intuitive Joint Priors for Bayesian Linear Multilevel Models: The R2D2M2 prior (08/15/2022)
The training of high-dimensional regression models on comparably sparse ...

Shrinkage with Robustness: Log-Adjusted Priors for Sparse Signals (01/23/2020)
We introduce a new class of distributions named log-adjusted shrinkage p...

Large Multi-scale Spatial Kriging Using Tree Shrinkage Priors (03/30/2018)
We develop a multiscale spatial kernel convolution technique with higher...

Group Inverse-Gamma Gamma Shrinkage for Sparse Regression with Block-Correlated Predictors (02/21/2021)
Heavy-tailed continuous shrinkage priors, such as the horseshoe prior, a...

Log-Regularly Varying Scale Mixture of Normals for Robust Regression (05/06/2020)
Linear regression with the classical normality assumption for the error ...

Translating predictive distributions into informative priors (03/15/2023)
When complex Bayesian models exhibit implausible behaviour, one solution...
