Fast Monte Carlo Markov chains for Bayesian shrinkage models with random effects

11/18/2017 ∙ by Tavis Abrahamsen, et al. ∙ Duke University ∙ University of Florida

When performing Bayesian data analysis using a general linear mixed model, the resulting posterior density is almost always analytically intractable. However, if proper conditionally conjugate priors are used, there is a simple two-block Gibbs sampler that is geometrically ergodic in nearly all practical settings, including situations where p > n (Abrahamsen and Hobert, 2017). Unfortunately, the (conditionally conjugate) multivariate normal prior on β does not perform well in the high-dimensional setting where p ≫ n. In this paper, we consider an alternative model in which the multivariate normal prior is replaced by the normal-gamma shrinkage prior developed by Griffin and Brown (2010). This change leads to a much more complex posterior density, and we develop a simple MCMC algorithm for exploring it. This algorithm, which has both deterministic and random scan components, is easier to analyze than the more obvious three-step Gibbs sampler. Indeed, we prove that the new algorithm is geometrically ergodic in most practical settings.


1 Introduction

The general linear mixed model (or variance components model) is one of the most frequently applied statistical models. It takes the form

Y = Xβ + Z_1 u_1 + ⋯ + Z_r u_r + e,

where Y is an n × 1 observable data vector, X and Z_1, …, Z_r are known design matrices, β is an unknown p × 1 vector of regression coefficients, u_1, …, u_r are independent random vectors whose elements represent the various levels of the random factors in the model, and e is an error vector with e ~ N_n(0, λ_e⁻¹ I_n). The random vectors u_1, …, u_r and e are mutually independent, and u_i ~ N_{q_i}(0, λ_{u_i}⁻¹ I_{q_i}) for i = 1, …, r, where λ_e denotes the error precision and λ_{u_1}, …, λ_{u_r} denote the random-effect precisions. (We assume throughout that , and that for each .) For a book-length treatment of this model and its many applications, see McCulloch et al. (2008).
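To fix ideas, the following is a minimal simulation sketch of data from a model of this form with a single random factor (r = 1); the dimensions, designs, and precision values are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 48, 100, 6                 # illustrative sizes; note p > n
lam_e, lam_u = 1.0, 2.0              # error and random-effect precisions

X = rng.normal(size=(n, p))                          # fixed-effects design matrix
Z = np.kron(np.eye(q), np.ones((n // q, 1)))         # one balanced random factor with q levels
beta = np.concatenate([rng.normal(scale=2.0, size=5),
                       np.zeros(p - 5)])             # sparse coefficient vector
u = rng.normal(scale=lam_u ** -0.5, size=q)          # random effects ~ N(0, lam_u^{-1} I_q)
e = rng.normal(scale=lam_e ** -0.5, size=n)          # errors ~ N(0, lam_e^{-1} I_n)
y = X @ beta + Z @ u + e                             # observed data vector
```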

In the Bayesian setting, prior distributions are assigned to β and to the precision parameters λ = (λ_e, λ_{u_1}, …, λ_{u_r}). Unfortunately, any non-trivial prior leads to an intractable posterior density. However, if β and λ are assigned conditionally conjugate priors, then a simple two-block Gibbs sampler can be used to explore the resulting posterior density. In particular, if we assign a multivariate normal prior to β, and independent gamma priors to the precision parameters, then, letting θ = (β, u) denote the vector containing the regression coefficients and the (stacked) random effects, it is easily shown that given observed data y, θ | λ, y is multivariate normal, and λ | θ, y is a product of independent gammas. (Since u is unobservable, it is treated like a parameter.) Convergence rate results for this block Gibbs sampler can be found in Abrahamsen and Hobert (2017).

Now consider this Bayesian mixed model in the high-dimensional setting where p ≫ n. This situation can arise, e.g., in genetics and neuroscience, where variability between subjects is most appropriately handled with random effects (see, e.g., Fazli et al., 2011; Rohart et al., 2014). While the model described above could certainly be used in this setting, the multivariate normal prior on β is really not suitable. Indeed, when p ≫ n, it is often assumed that β is sparse, i.e., that many components of β are zero. Unfortunately, the multivariate normal prior will shrink the estimated coefficients towards zero, but not enough to produce an (approximately) sparse estimate of β. Additionally, when the components of β have varying magnitudes, the estimates of the “large” components will be shrunk disproportionately compared to the estimates of the “small” components. Below we propose an alternative prior for β that is tailored to the high-dimensional setting.

The well-known Bayesian interpretation of the lasso (involving iid Laplace priors for the regression parameters) has led to a flurry of recent research concerning the development of prior distributions for regression parameters (in linear models without random effects) that yield posterior distributions with high posterior probability around sparse values of β. These prior distributions are called continuous shrinkage priors, and the corresponding statistical models are referred to as Bayesian shrinkage models (see, e.g., Bhattacharya et al. (2013, 2015), Griffin and Brown (2010), Polson and Scott (2010), and Park and Casella (2008)). One such Bayesian shrinkage model is the so-called normal-gamma model of Griffin and Brown (2010), which is given by

(1)

where τ = (τ_1, …, τ_p) and D_τ is a diagonal matrix with the τ_i’s on the diagonal. The precision parameter, λ_e, and the components of τ are assumed to be a priori independent, with a gamma prior on λ_e and τ_i ~ Gamma(a, b) for i = 1, …, p. When a = 1, this model becomes the Bayesian lasso model introduced by Park and Casella (2008). We note that Bhattacharya et al. (2013, 2015) show that, in terms of frequentist optimality, the Bayesian lasso has sub-optimal prior concentration rates in that it does not place sufficient mass around sparse values of β. Alternatively, shrinkage priors that have singularities at zero and robust tails (such as the normal-gamma prior with a < 1/2) have been shown to perform well in empirical studies.
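As a quick illustration of why the shape parameter matters, the sketch below draws coefficients from the marginal normal-gamma prior under the parameterization assumed above (τ ~ Gamma(a, b) with rate b and, ignoring any error-precision scaling, β | τ ~ N(0, τ)); a = 1 corresponds to the Bayesian lasso, while small a concentrates far more mass near zero yet retains heavy tails.

```python
import numpy as np

def normal_gamma_draws(a, b, size, rng):
    """Marginal draws from the normal-gamma prior on a single coefficient:
    tau ~ Gamma(shape=a, rate=b), then beta | tau ~ N(0, tau)."""
    tau = rng.gamma(shape=a, scale=1.0 / b, size=size)
    return rng.normal(scale=np.sqrt(tau))

rng = np.random.default_rng(1)
lasso_like = normal_gamma_draws(a=1.0, b=2.0, size=100_000, rng=rng)  # Bayesian lasso case
spiky = normal_gamma_draws(a=0.1, b=2.0, size=100_000, rng=rng)       # strong-shrinkage case
# Fraction of draws that are essentially zero is much larger when a is small.
print(np.mean(np.abs(lasso_like) < 0.05), np.mean(np.abs(spiky) < 0.05))
```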

In this paper, we propose and analyze an MCMC algorithm for a new Bayesian general linear mixed model in which the standard multivariate normal prior on is replaced with the continuous shrinkage prior from the normal-gamma model. Our high-dimensional Bayesian general linear mixed model is defined as follows

(2)

where the precision parameters λ_e, λ_{u_1}, …, λ_{u_r} and the shrinkage parameters τ_1, …, τ_p are a priori independent, with gamma priors on λ_e and on λ_{u_i} for i = 1, …, r, and with τ_i ~ Gamma(a, b) for i = 1, …, p. This model can be considered a Bayesian analog of the frequentist, high-dimensional mixed model developed by Schelldorfer et al. (2011). (Of course, it can also be viewed as a mixed version of the normal-gamma shrinkage model.) A similar sparse Bayesian linear mixed model has been proposed by Zhou et al. (2013) for polygenic modeling. They assume a “spike and slab” prior consisting of a mixture of a point mass at 0 and a normal distribution for the components of β. However, it is well known that spike and slab priors lead to MCMC algorithms that have convergence problems, especially when p is large (Polson and Scott, 2010; Bhattacharya et al., 2015).

Recall that θ = (β, u), and let π(θ, τ, λ | y) denote the posterior density associated with model (2). This density is highly intractable, and Bayesian inference requires MCMC, which should, of course, be based on a geometrically ergodic Monte Carlo Markov chain (see, e.g., Roberts and Rosenthal, 1998; Jones and Hobert, 2001; Flegal et al., 2008). As we show in Section 2, the full conditional densities π(θ | τ, λ, y), π(τ | θ, λ, y), and π(λ | θ, τ, y) all have standard forms, which means that there is a simple three-block Gibbs sampler available. Unfortunately, we have been unable to establish a convergence rate for this Gibbs sampler (in either deterministic or random scan form). However, we have been able to prove that a related hybrid algorithm does converge at a geometric rate. The invariant density of our Markov chain is the posterior π(θ, τ, λ | y). Let r ∈ (0, 1) be fixed, and denote the Markov chain by Φ = {(θ_m, τ_m, λ_m)}_{m=0}^∞. If the current state is (θ_m, τ_m, λ_m), then we simulate the new state, (θ_{m+1}, τ_{m+1}, λ_{m+1}), using the following three-step procedure.

  Iteration m+1 of the hybrid algorithm:

  1. Draw U ~ Uniform(0, 1) and, independently, θ_{m+1} ~ π(θ | τ_m, λ_m, y).

  2. If U ≤ r, set (τ_{m+1}, λ_{m+1}) = (τ*, λ_m), where τ* ~ π(τ | θ_{m+1}, λ_m, y).

  3. Otherwise, set (τ_{m+1}, λ_{m+1}) = (τ_m, λ*), where λ* ~ π(λ | θ_{m+1}, τ_m, y).


At each iteration, this sampler first performs a deterministic update of θ from its full conditional distribution. Next, a random scan update is performed, which updates either τ or λ, with probability r and 1 − r, respectively. This sampler is no more difficult to implement than the three-block Gibbs sampler. Moreover, it is straightforward to show that the Markov chain driving this algorithm is reversible with respect to its invariant density, and that it is Harris ergodic. This algorithm is actually a special case of a more general MCMC algorithm for Bayesian latent data models that was recently developed by Jung (2015).
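To make the scan structure concrete, the following schematic Python sketch performs one iteration as reconstructed above. The functions draw_theta, draw_tau, and draw_lambda are placeholders standing in for samplers from the three full conditionals derived in Section 2 (multivariate normal, independent generalized inverse Gaussians, and independent gammas, respectively); this illustrates the scan pattern only and is not the authors' code.

```python
import numpy as np

def hybrid_iteration(theta, tau, lam, r, draw_theta, draw_tau, draw_lambda, rng):
    """One iteration of the hybrid sampler: a deterministic-scan update of theta
    from its full conditional, followed by a random-scan update of tau or lambda."""
    u = rng.uniform()                                     # coin for the random scan
    theta_new = draw_theta(tau, lam, rng)                 # theta ~ pi(theta | tau, lambda, y)
    if u <= r:
        tau, lam = draw_tau(theta_new, lam, rng), lam     # tau ~ pi(tau | theta, lambda, y)
    else:
        tau, lam = tau, draw_lambda(theta_new, tau, rng)  # lambda ~ pi(lambda | theta, tau, y)
    return theta_new, tau, lam

# A full run would simply iterate hybrid_iteration, storing each (theta, tau, lam).
```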

Our main result provides a set of conditions under which the hybrid Markov chain defined above is geometrically ergodic (as defined by Jones and Hobert (2001, p.319)). Here is the result.

Proposition 1.

The Markov chain, Φ, is geometrically ergodic for all r ∈ (0, 1) if

  1. the combined random-effects design matrix (Z_1 ⋯ Z_r) has full column rank,

  2. , and

  3. for each .

Note that the conditions of Proposition 1 are quite simple to check. We do require the combined random-effects design matrix to have full column rank, which holds for most basic designs, but there is no such restriction on X, so the result is applicable when p > n. While the second condition may become restrictive in some settings, this can be mitigated to some extent by the fact that the user is free to choose any positive value for the hyperparameter b. Indeed, when a large value of a is required to satisfy condition (2), b can be chosen such that the prior mean and variance of τ_i (which are given by a/b and a/b², respectively) have reasonable values.
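For instance, under the shape-rate parameterization assumed above, if condition (2) forced, say, a = 5, then choosing b = 10 would give prior mean 5/10 = 0.5 and prior variance 5/10² = 0.05, so the τ_i’s would remain concentrated near zero a priori. (The value a = 5 here is purely illustrative.)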

Abrahamsen and Hobert (2017) established simple, easily checked sufficient conditions for geometric ergodicity of the two-block Gibbs sampler for the standard Bayesian general linear mixed model that assigns a multivariate normal prior to β. Note, however, that unlike the multivariate normal prior, the continuous shrinkage prior that we use here is hierarchical, which means that an additional set of latent variables (the τ_i’s) must be managed, and this leads to more complicated MCMC algorithms that are more difficult to analyze. On a related note, Pal and Khare (2014) obtained geometric ergodicity results for the Gibbs sampler developed for the original normal-gamma model (without random effects). But again, adding random effects to the normal-gamma model leads to new MCMC algorithms that are more complex and harder to handle.

The remainder of this paper is organized as follows. Section 2 contains a formal definition of the Markov chain that drives our hybrid sampler. A proof of Proposition 1 is given in Section 3. Finally, a good deal of technical material, such as proofs of the lemmas that are used in the main proof, has been relegated to the Appendices.

2 The Hybrid Sampler

In this section, we formally define the Markov transition function (Mtf) of the hybrid algorithm. We begin with a brief derivation of the full conditional densities π(θ | τ, λ, y), π(τ | θ, λ, y), and π(λ | θ, τ, y). Recall that θ = (β, u). The posterior density can be expressed, up to a constant of proportionality, as

(3)

We will use (3) to derive the full conditional distributions of θ, τ, and λ. First, it is shown in the Appendix that the full conditional distribution of θ is multivariate normal with

(4)

and

(5)

where , , and .

Next, it’s clear from (3) that the components of λ are conditionally independent, and that each has a gamma distribution. Indeed,

(6)

and, for i = 1, …, r,

(7)

Lastly, consider the full conditional density of τ. It is clear from (3) that the τ_i’s are conditionally independent, and

(8)

This brings us to a subtle technical problem that turns out to be very important in our convergence rate analysis. Note that, if β_i = 0 and a ≤ 1/2, then the right-hand side of (8) is not integrable. Moreover, it is precisely these small values of a that yield effective shrinkage priors. Of course, from a simulation standpoint, this technical problem is a non-issue because we will never observe an exact zero from the normal distribution. However, in order to perform a theoretical analysis of the Markov chain, we are obliged to define the Mtf for all points in the state space. Our solution is to simply delete the offending points from the state space. (Alternatively, we could make a special definition of the Mtf at the offending points, but this leads to a Markov chain that lacks the Feller property (Meyn and Tweedie, 2009, p. 124), and this prevents us from employing Meyn and Tweedie’s (2009) Lemma 15.2.8.) Thus, we define the state space of our Markov chain to be the full parameter space with the points at which some β_i = 0 removed.

Taking this smaller set, rather than the full parameter space, as the state space has no effect on posterior inference because the difference between the two sets has measure zero. However, as will become clear in Section 3, the deleted points do create some complications in the drift analysis. We note that this particular technical issue has surfaced before (see, e.g., Román and Hobert, 2012), although, in contrast with the current situation, the culprit is typically improper priors.

It’s clear from (8) that, conditional on θ, λ, and y, the τ_i’s are independent, and each follows a Generalized Inverse Gaussian (GIG) distribution. In generic notation, the GIG(ν, ρ, χ) density is given by

(9)   f(x; ν, ρ, χ) = (ρ/χ)^{ν/2} x^{ν−1} exp{−(ρx + χ/x)/2} / (2 K_ν(√(ρχ))),   x > 0,

where K_ν denotes the modified Bessel function of the second kind.
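On the implementation side, draws from a GIG distribution are available in SciPy, though its parameterization differs from the generic (ν, ρ, χ) form displayed in (9). Below is a minimal sketch of the mapping under that generic parameterization; the specific parameter values used in the τ-update are not reproduced here and the numbers in the example are arbitrary.

```python
import numpy as np
from scipy.stats import geninvgauss

def sample_gig(nu, rho, chi, size=1, rng=None):
    """Draw from the GIG(nu, rho, chi) density, which is proportional to
    x^(nu-1) * exp(-(rho*x + chi/x)/2) for x > 0.

    scipy.stats.geninvgauss(p, b) has density proportional to
    x^(p-1) * exp(-b*(x + 1/x)/2), so rescaling with scale = sqrt(chi/rho)
    and b = sqrt(rho*chi) yields the target distribution."""
    rng = np.random.default_rng(rng)
    return geninvgauss.rvs(nu, np.sqrt(rho * chi), scale=np.sqrt(chi / rho),
                           size=size, random_state=rng)

# Example draws; nu may be negative (as happens when the shape a < 1/2).
draws = sample_gig(nu=-0.4, rho=2.0, chi=0.8, size=5)
```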

Fix a current state and a measurable subset of the state space. The Mtf of the Markov chain that drives our hybrid algorithm is given by

where we (henceforth) omit the dependence on y from the conditional densities for notational convenience. The Appendix contains a proof that the Markov chain defined by this Mtf is a Feller chain. In the next section, we prove Proposition 1.

3 Proof of Proposition 1

This section contains a proof of Proposition 1. In particular, we establish a geometric drift condition for our Markov chain using a drift function, V, that is unbounded off compact sets. Since our chain is Feller, geometric ergodicity then follows immediately from Meyn and Tweedie’s (2009) Theorem 6.0.1 and Lemma 15.2.8.

3.1 The drift function

Recall that in our model, the τ_i’s share a common gamma prior; its shape hyperparameter, a, will play a crucial role in our drift function. Define the drift function, V, as follows

(10)

where the constants appearing in (10) are to be determined. The Appendix contains a proof that V is unbounded off compact sets.
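(Recall that a drift function is unbounded off compact sets if its sublevel sets {V ≤ d} are compact. As a one-dimensional illustration, on the state space (0, ∞) the function V(x) = x + 1/x is unbounded off compact sets, whereas V(x) = x is not, because {x : x ≤ d} = (0, d] fails to be compact once the point 0 is excluded. This is exactly why a term that diverges as β_i → 0 is needed after the hyperplanes are deleted; see Remark 1 below.)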

Remark 1.

Using the reduced set (with the offending points removed) instead of the full parameter space as the state space has implications for the construction of the drift function. In particular, since the hyperplanes where β_i = 0 are not part of the state space, and we need the drift function to be unbounded off compact sets, we must include a term in the drift function that diverges as β_i → 0. In fact, this is the only reason why such a term is part of V. Interestingly, Pal and Khare (2014) established their convergence rate results using a drift/minorization argument, which does not require the drift function to be unbounded off compact sets, yet they still require this same term in their drift function. Fortunately, we are able to reuse some of the bounds that they developed.

Our goal is to demonstrate that

(11)   E[V(θ_{m+1}, τ_{m+1}, λ_{m+1}) | (θ_m, τ_m, λ_m) = (θ, τ, λ)] ≤ ρ V(θ, τ, λ) + L

for some ρ ∈ [0, 1) and some finite constant L. First, note that

It follows that

(12)
Remark 2.

The hybrid sampler employs three full conditional densities, yet the expression above contains only two nested expectations. This is due to the random scan step in the hybrid sampler. In contrast, a drift analysis of the deterministic scan Gibbs sampler for this problem involves similar equations having three nested expectations, which greatly complicates the calculations. On the other hand, a drift analysis of the random scan Gibbs sampler involves no nested expectations, which suggests that such an analysis would be relatively easy. However, it turns out to be more difficult than the analysis of the hybrid algorithm.

Now define

and

In order to bound (12), we need to develop bounds for and . These will be handled separately in the next two subsections.

3.2 A bound on

Let

so that for . It’s easy to see that

(13)

Hence,

(14)

We will bound (14) using the following lemmas, which are proven in the Appendix.

Lemma 1.

For all and ,

  • ,   and

  • ,

  • .

Lemma 2.

For all and ,

Lemma 3.

For all and ,

  • ,   and

  • .

where is the largest singular value of and is a finite positive constant.

Lemma 4.

For all and ,

where and is the largest singular value of .

Lemma 5.

Assume that the combined random-effects design matrix (Z_1 ⋯ Z_r) has full column rank. For all , and ,

  • ,   and

  • .

Substituting these bounds into (14) gives

(15)

where

3.3 A bound on

From (6), we have

(16)

and

(17)

Also, from Jensen’s inequality and (16),

(18)

Similarly, from (7), for i = 1, …, r we have

(19)

and

(20)

Conditions (2) and (3) of Proposition 1 imply that all of the above Gamma functions have positive arguments.
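For reference, bounds of this type typically rest on the standard gamma moment identity (stated here in generic notation, since the displays (16)-(20) are not reproduced above): if W ~ Gamma(α, β) with rate β, then E[W^s] = Γ(α + s)/(Γ(α) β^s) whenever α + s > 0, while Jensen’s inequality gives E[W^s] ≤ (α/β)^s for s ∈ (0, 1). The requirement α + s > 0 is exactly why the gamma-function arguments must be positive.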

From (16)-(20), we have

(21)