
A Laplace Mixture Representation of the Horseshoe and Some Implications

by Ksheera Sagar et al.
Purdue University

The horseshoe prior, defined as a half-Cauchy scale mixture of normals, provides a state-of-the-art approach to Bayesian sparse signal recovery. We provide a new representation of the horseshoe density as a scale mixture of the Laplace density, explicitly identifying the mixing measure. Using the celebrated Bernstein–Widder theorem and a result due to Bochner, our representation immediately establishes the complete monotonicity of the horseshoe density and the strong concavity of the corresponding penalty. Consequently, the equivalence between the local linear approximation and expectation–maximization algorithms for finding the posterior mode under horseshoe-penalized regression is established. Further, the resultant estimate is shown to be sparse.
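The defining half-Cauchy scale mixture of normals has no closed form, but the density is easy to evaluate by one-dimensional quadrature. The sketch below (an illustration of the definition in the abstract, not the paper's code) computes the horseshoe density and exhibits two properties consistent with the paper's results: symmetry about zero and monotone decay in |θ|, as implied by complete monotonicity of the density in θ².

```python
import numpy as np
from scipy import integrate, stats

def horseshoe_density(theta):
    """Horseshoe density p(theta) = ∫ N(theta | 0, lam^2) C+(lam | 0, 1) dlam,
    evaluated by numerical quadrature over the half-Cauchy mixing variable."""
    integrand = lambda lam: stats.norm.pdf(theta, loc=0, scale=lam) * stats.halfcauchy.pdf(lam)
    val, _ = integrate.quad(integrand, 0, np.inf)
    return val

# The density is symmetric and strictly decreasing in |theta|
# (it has a pole at theta = 0, so evaluate away from the origin).
for t in (0.5, 1.0, 2.0):
    print(f"p({t}) = {horseshoe_density(t):.4f}")
```

The paper's contribution is that this same density also arises as a scale mixture of Laplace densities with an explicitly identified mixing measure, which is what connects the local linear approximation and EM views of mode finding.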


