DeepAI

# Regularization of Bayesian shrinkage priors and inference via geometrically / uniformly ergodic Gibbs sampler

Use of continuous shrinkage priors — with a "spike" near zero and heavy tails toward infinity — is an increasingly popular approach to induce sparsity in parameter estimates. When the parameters are only weakly identified by the likelihood, however, the posterior may end up with tails as heavy as the prior, jeopardizing robustness of inference. A natural solution is to regularize, or lighten up, the tails of a shrinkage prior beyond a reasonable parameter range. Existing regularization strategies undermine the attractive computational properties of shrinkage priors. Our alternative formulation, on the other hand, achieves regularization while preserving the essential aspects of the original shrinkage priors. We study theoretical properties of the Gibbs sampler on the resulting posterior distributions, with emphasis on convergence rates of the Pólya-Gamma Gibbs sampler for sparse logistic regression. Our analysis shows that the proposed regularization leads to geometric ergodicity under a broad range of global-local shrinkage priors. Essentially, the only requirement is for the prior π_local(·) on the local scale λ to satisfy π_local(0) < ∞. If additionally lim_{λ→0} π_local(λ)/λ^a < ∞ for some a > 0, as in Bayesian bridge priors, we show the sampler to be uniformly ergodic.
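The Pólya-Gamma Gibbs sampler studied in the abstract alternates between drawing latent Pólya-Gamma variables ω_i given the coefficients and drawing the coefficients from a conditionally Gaussian distribution. The sketch below illustrates one such sweep for Bayesian logistic regression under a fixed Gaussian prior. It is not the paper's implementation: `sample_pg_approx` is an illustrative approximation that truncates the infinite-sum-of-gammas representation of the PG(b, c) distribution, and a full global-local sampler would also update the local scale parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg_approx(b, c, n_terms=200):
    """Approximate PG(b, c) draws via the truncated series representation
        omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + (c / (2 pi))^2),
    with g_k ~ Gamma(b, 1). A truncation for illustration, not an exact sampler."""
    c = np.atleast_1d(np.asarray(c, dtype=float))
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(b, 1.0, size=(c.size, n_terms))
    denom = (k - 0.5) ** 2 + (c.reshape(-1, 1) / (2 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def gibbs_sweep(beta, X, y, prior_var):
    """One sweep of the Polya-Gamma data-augmentation Gibbs sampler for
    logistic regression with a fixed N(0, prior_var * I) prior on beta."""
    omega = sample_pg_approx(1.0, X @ beta)            # omega_i | beta
    kappa = y - 0.5                                    # y_i - 1/2
    prec = X.T @ (omega[:, None] * X) + np.eye(X.shape[1]) / prior_var
    V = np.linalg.inv(prec)                            # posterior covariance
    m = V @ (X.T @ kappa)                              # posterior mean
    return rng.multivariate_normal(m, V)               # beta | omega, y

# Tiny synthetic demo: 50 observations, 3 coefficients.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

beta = np.zeros(p)
for _ in range(200):
    beta = gibbs_sweep(beta, X, y, prior_var=10.0)
```

The conditionally Gaussian β-update is what makes the augmentation attractive; the paper's ergodicity analysis concerns how the update of the local scales (omitted here) interacts with the tails of the shrinkage prior.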

07/19/2022

### Horseshoe priors for edge-preserving linear Bayesian inversion

In many large-scale inverse problems, such as computed tomography and im...
09/26/2017

### On the Model Shrinkage Effect of Gamma Process Edge Partition Models

The edge partition model (EPM) is a fundamental Bayesian nonparametric m...
04/05/2011

### Generalized double Pareto shrinkage

We propose a generalized double Pareto prior for Bayesian shrinkage esti...
03/03/2019

### Heavy Tailed Horseshoe Priors

Locally adaptive shrinkage in the Bayesian framework is achieved through...
06/11/2020

### Bayesian Eigenvalue Regularization via Cumulative Shrinkage Process

This study proposes a novel hierarchical prior for inferring possibly lo...
05/15/2021

### Shrinkage-based random local clocks with scalable inference

Local clock models propose that the rate of molecular evolution is const...
11/13/2021

### Asymmetric Conjugate Priors for Large Bayesian VARs

Large Bayesian VARs are now widely used in empirical macroeconomics. One...

## Code Repositories

### horseshoe-scale-sampler

Efficient rejection sampler for updating the local scale parameter in Gibbs sampling horseshoe posteriors.
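The repository's rejection sampler is not reproduced here; as a point of comparison, a common alternative for updating the horseshoe local scales is the inverse-gamma auxiliary-variable scheme of Makalic & Schmidt (2016), sketched below. The function name `update_local_scales` and the demo values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_local_scales(beta, tau, nu, sigma2=1.0):
    """One conjugate update of the horseshoe local scales, using the
    auxiliary-variable scheme of Makalic & Schmidt (2016):
        lambda_j^2 | nu_j, beta_j ~ InvGamma(1, 1/nu_j + beta_j^2/(2 tau^2 sigma2))
        nu_j | lambda_j^2         ~ InvGamma(1, 1 + 1/lambda_j^2)
    An InvGamma(1, b) draw is 1 / Gamma(1, scale=1/b)."""
    rate = 1.0 / nu + beta ** 2 / (2.0 * tau ** 2 * sigma2)
    lam2 = 1.0 / rng.gamma(1.0, 1.0 / rate)
    nu = 1.0 / rng.gamma(1.0, 1.0 / (1.0 + 1.0 / lam2))
    return lam2, nu

# Demo: large, near-zero, and moderate coefficients.
beta = np.array([3.0, 0.01, -0.5])
lam2 = np.ones(3)
nu = np.ones(3)
for _ in range(100):
    lam2, nu = update_local_scales(beta, tau=0.5, nu=nu)
```

Both this scheme and a direct rejection sampler target the same half-Cauchy conditional; the rejection approach avoids the extra auxiliary variables ν_j at the cost of a more delicate proposal.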