
Generalized double Pareto shrinkage

by Artin Armagan, et al.
Duke University

We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inference in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and normal-Jeffreys priors. While it has a spike at zero like the Laplace density, it also has Student's t-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.
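The scale-mixture construction mentioned above can be illustrated with a short Monte Carlo sketch. This is an illustration under my own parameterization, not the authors' code: drawing a rate λ from a Gamma(α, η) distribution and then β | λ from a Laplace distribution with scale 1/λ yields, marginally, a generalized double Pareto draw with shape α and scale ξ = η/α, whose density is (1/2ξ)(1 + |β|/(αξ))^-(α+1).

```python
import numpy as np

# Illustrative sketch (not the paper's code): the generalized double Pareto
# GDP(xi, alpha) arises as a gamma scale mixture of Laplace distributions,
#   lambda ~ Gamma(alpha, rate = eta),   beta | lambda ~ Laplace(0, 1/lambda),
# with xi = eta / alpha. Parameter values here are arbitrary choices.
rng = np.random.default_rng(0)
alpha, eta = 3.0, 3.0              # gamma shape and rate; xi = eta/alpha = 1
n = 200_000

lam = rng.gamma(alpha, scale=1.0 / eta, size=n)   # mixing rates
beta = rng.laplace(loc=0.0, scale=1.0 / lam)      # marginally GDP-distributed

# Sanity check against the closed-form mean absolute value,
# E|beta| = alpha * xi / (alpha - 1), which exists for alpha > 1.
xi = eta / alpha
print(np.mean(np.abs(beta)), alpha * xi / (alpha - 1))  # both near 1.5
```

Because the mixture has a Laplace kernel, the resulting draws retain a sharp peak at zero while the gamma mixing over λ thickens the tails to the polynomial (Student's t-like) decay described in the abstract.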



