A Horseshoe Pit mixture model for Bayesian screening with an application to light sheet fluorescence microscopy in brain imaging

by Francesco Denti et al.

Finding parsimonious models through variable selection is a fundamental problem in many areas of statistical inference. Here, we focus on Bayesian regression models, where variable selection can be implemented through a regularizing prior imposed on the distribution of the regression coefficients. In the Bayesian literature, there are two main families of priors used to accomplish this goal: spike-and-slab priors and continuous scale mixtures of Gaussians. The former is a discrete mixture of two distributions characterized by low and high variance. In the latter, a continuous prior is elicited on the scale of a zero-mean Gaussian distribution. In contrast to these existing methods, we propose a new class of priors based on discrete mixtures of continuous scale mixtures, which provides a more general framework for Bayesian variable selection. To this end, we replace the observation-specific local shrinkage parameters (typical of continuous mixtures) with mixture-component shrinkage parameters. Our approach drastically reduces the number of parameters needed and allows information to be shared across the coefficients, improving the shrinkage effect. Using half-Cauchy distributions, this approach leads to a cluster-shrinkage version of the Horseshoe prior. We present the properties of our model and showcase its estimation and prediction performance in a simulation study. We then recast the model in a multiple hypothesis testing framework and apply it to a neurological dataset obtained using a novel whole-brain imaging technique.
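To make the construction concrete, here is a minimal sketch of the two hierarchies in LaTeX notation. The number of components K, the assignment variables z_j, and the mixture weights pi are illustrative notation introduced here, not taken from the paper, whose exact specification may differ. The standard Horseshoe places a coefficient-specific half-Cauchy scale on each regression coefficient,

    \beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0,\, \tau^2 \lambda_j^2), \qquad \lambda_j \sim \mathrm{C}^{+}(0, 1), \qquad \tau \sim \mathrm{C}^{+}(0, 1),

whereas the cluster-shrinkage idea described above would replace the coefficient-specific scales \lambda_j with a small set of component-level scales shared through a mixture assignment,

    \beta_j \mid z_j = k, \lambda_k, \tau \sim \mathcal{N}(0,\, \tau^2 \lambda_k^2), \qquad \lambda_k \sim \mathrm{C}^{+}(0, 1), \quad k = 1, \dots, K,
    z_j \sim \mathrm{Categorical}(\pi_1, \dots, \pi_K), \qquad (\pi_1, \dots, \pi_K) \sim \mathrm{Dirichlet}(\alpha, \dots, \alpha),

so that only K local shrinkage parameters need to be estimated and coefficients assigned to the same component borrow strength from one another.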


