Prior-preconditioned conjugate gradient method for accelerated Gibbs sampling in "large n & large p" sparse Bayesian regression

10/29/2018
by Akihiko Nishimura, et al.

In a modern observational study based on healthcare databases, the number of observations typically ranges on the order of 10^5 to 10^6 and the number of predictors on the order of 10^4 to 10^5. Despite the large sample size, data rarely provide sufficient information to reliably estimate such a large number of parameters. Sparse regression provides a potential solution. Bayesian approaches based on shrinkage priors possess many desirable theoretical properties and, under linear and logistic models, yield posterior distributions amenable to Gibbs sampling. In the "large n & large p" setting, however, a major computational bottleneck arises from the need to sample from a high-dimensional Gaussian distribution at each iteration: despite the availability of a closed-form expression for the precision matrix Φ, computing and factorizing such a large matrix remains computationally expensive. In this article, we present a novel algorithm to speed up this bottleneck based on the following observation: we can cheaply generate a random vector b such that the solution to the linear system Φβ = b has the desired Gaussian distribution. We can then solve the linear system with the conjugate gradient (CG) algorithm through matrix-vector multiplications by Φ, without ever explicitly inverting Φ. As the practical performance of CG depends critically on appropriate preconditioning of the linear system, we develop a theory of prior-preconditioning to turn CG into a highly effective algorithm for sparse Bayesian regression. We apply our algorithm to a clinically relevant large-scale observational study with n = 72,489 and p = 22,175, designed to assess the relative risk of intracranial hemorrhage from two alternative blood anti-coagulants. Our algorithm demonstrates an order-of-magnitude speed-up in the posterior computation.
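The sampling trick described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: it assumes the conditional precision takes the common Gibbs-sampling form Φ = XᵀΩX + Λ with a diagonal weight matrix Ω and diagonal prior precision Λ (the function name `sample_gaussian_cg` and all parameter names are hypothetical). A random right-hand side b is generated so that Φ⁻¹b has the target Gaussian distribution, and Φβ = b is then solved by CG using the prior precision Λ as preconditioner, touching Φ only through matrix-vector products.

```python
import numpy as np

def sample_gaussian_cg(X, omega, lam, z, rng, tol=1e-10, max_iter=1000):
    """Draw beta ~ N(Phi^{-1} X^T Omega z, Phi^{-1}) where
    Phi = X^T Omega X + Lambda, without forming or factorizing Phi.
    omega and lam hold the diagonals of Omega and Lambda.
    Illustrative sketch only; not the paper's implementation."""
    n, p = X.shape

    # Step 1: cheap random right-hand side b. With eps1 ~ N(0, I_n) and
    # eps2 ~ N(0, I_p), the solution of Phi beta = b has mean
    # Phi^{-1} X^T Omega z and covariance Phi^{-1}, as desired.
    eps1 = rng.standard_normal(n)
    eps2 = rng.standard_normal(p)
    b = X.T @ (omega * z) + X.T @ (np.sqrt(omega) * eps1) + np.sqrt(lam) * eps2

    # Matrix-vector product with Phi, never materializing the p x p matrix.
    def Phi(v):
        return X.T @ (omega * (X @ v)) + lam * v

    # Step 2: CG preconditioned by the prior precision Lambda
    # (applying Lambda^{-1} is trivial since Lambda is diagonal).
    beta = np.zeros(p)
    r = b - Phi(beta)          # residual
    s = r / lam                # preconditioner solve
    d = s.copy()               # search direction
    rs = r @ s
    for _ in range(max_iter):
        Pd = Phi(d)
        alpha = rs / (d @ Pd)
        beta += alpha * d
        r -= alpha * Pd
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        s = r / lam
        rs_new = r @ s
        d = s + (rs_new / rs) * d
        rs = rs_new
    return beta
```

Because only products with X and Xᵀ are needed, the same routine works unchanged with a sparse design matrix, which is what makes the approach viable at the n ≈ 10^5, p ≈ 10^4 scale described above.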
