Shrinkage with Robustness: Log-Adjusted Priors for Sparse Signals

01/23/2020 ∙ by Yasuyuki Hamura, et al.

We introduce a new class of distributions, named log-adjusted shrinkage priors, for the analysis of sparse signals. The proposed class extends the three-parameter beta priors by multiplying an additional log-term into their densities. The key feature of the proposed prior is that its density tails are heavier than even those of the Cauchy distribution, which yields tail-robustness of the Bayes estimator while keeping a strong shrinkage effect on noise. We verify this property via improved posterior mean squared errors in the tails. An integral representation with latent variables is available for the new density and enables fast and simple Gibbs samplers for the full posterior analysis. Our log-adjusted prior differs significantly from existing shrinkage priors with logarithms in that it allows further generalization through multiple log-terms in the density. The performance of the proposed priors is investigated through simulation studies and data analysis.
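The tail-heaviness claim can be illustrated with a toy density. The exact form of the log-adjusted prior is not given in the abstract; the sketch below simply multiplies a Cauchy kernel by a log factor (an assumed, illustrative form, not the authors' density) and shows that the ratio of the two kernels grows without bound in the tail, so the log-adjusted tail is strictly heavier than the Cauchy tail while remaining integrable:

```python
import math

def cauchy_kernel(x):
    # standard Cauchy density kernel (normalizing constant dropped)
    return 1.0 / (1.0 + x * x)

def log_adjusted_kernel(x):
    # toy "log-adjusted" kernel: Cauchy kernel times a slowly growing
    # log factor; illustrative only, not the density from the paper.
    # Still integrable: log(x^2)/x^2 is integrable at infinity.
    return math.log(math.e + x * x) / (1.0 + x * x)

# The ratio of the two kernels is log(e + x^2), which diverges as
# |x| -> infinity: the log-adjusted tail dominates the Cauchy tail.
ratios = [log_adjusted_kernel(x) / cauchy_kernel(x) for x in (10.0, 1e3, 1e6)]
print(ratios)  # strictly increasing sequence
```

Because the extra factor is slowly varying, the density near the origin is barely changed, which is consistent with retaining the shrinkage effect on noise while fattening the tails for large signals.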





