Efficient Metropolis-Hastings Sampling for Nonlinear Mixed Effects Models

10/26/2019, by Belhal Karimi et al.

The ability to generate samples of the random effects from their conditional distributions is fundamental for inference in mixed effects models. The random walk Metropolis algorithm is widely used to perform such sampling, but it can converge slowly for medium-dimensional problems, or when the joint structure of the distributions to sample is complex. We propose a Metropolis-Hastings (MH) algorithm based on a multidimensional Gaussian proposal that takes into account the joint conditional distribution of the random effects and does not require any tuning, in contrast with more sophisticated samplers such as the Metropolis Adjusted Langevin Algorithm or the No-U-Turn Sampler, which involve costly tuning runs or intensive computation. Indeed, this proposal distribution is obtained automatically via a Laplace approximation of the original model. We show that this approximation is equivalent to linearizing the model in the case of continuous data. Numerical experiments based on real data highlight the very good performance of the proposed method for continuous data models.


1 Introduction

Mixed effects models are the reference models for taking into account the inter-individual variability that can exist within the same population (see Lavielle (2014) and the references therein). Given a population of $N$ individuals, the probability distribution of the series of observations for each individual depends on a vector of individual parameters. For complex priors on these individual parameters, or for complex models, Monte Carlo methods must be used to approximate the conditional distribution of the individual parameters given the observations. Most often, direct sampling from this conditional distribution is impossible and it is necessary to resort to a Markov chain Monte Carlo (MCMC) procedure.

Designing a fast-mixing sampler is of utmost importance for several tasks in the complex process of model building. The most common MCMC method for nonlinear mixed effects models is the random walk Metropolis (RWM) algorithm (Robert and Casella, 2004; Roberts et al., 1997; Lavielle, 2014). Despite its simplicity, it has been successfully used in many classical examples of pharmacometrics, when the number of random effects is not too large. Nevertheless, maintaining the optimal acceptance rate (advocated in Roberts et al. (1997)) most often implies very small moves, and therefore a very large number of iterations in medium and high dimensions, since no information about the geometry of the target distribution is used.

To make better use of this geometry and to explore the space faster, the Metropolis-adjusted Langevin algorithm (MALA) uses evaluations of the gradient of the target density to propose new states, which are accepted or rejected following the Metropolis-Hastings rule (Roberts and Tweedie, 1996; Stramer and Tweedie, 1999). The No-U-Turn Sampler (NUTS) is an extension of Hamiltonian Monte Carlo (Neal et al., 2011) that automatically and optimally selects some of the settings required by the algorithm (Betancourt, 2017). Nevertheless, these methods may be difficult to use in practice and are computationally involved, in particular when the structural model is a complex ODE-based model.

The algorithm we propose is a Metropolis-Hastings algorithm, but one for which the proposal is a good approximation of the target distribution. For general data models (i.e. categorical, count or time-to-event data models, as well as continuous data models), the Laplace approximation of the incomplete pdf $p(y_i; \theta)$ leads to a Gaussian approximation of the conditional distribution $p(\psi_i \mid y_i; \theta)$.

In the special case of continuous data, linearisation of the model leads, by definition, to a Gaussian linear model for which the conditional distribution of the individual parameters given the data is a multidimensional normal distribution that can be computed explicitly, and we fall back on the results of Karimi et al. (2017).

2 Mixed Effect Models

2.1 Population Approach and Hierarchical Models

We will adopt a population approach in the sequel, where we consider $N$ individuals and $n_i$ observations for individual $i$. The set of observed data is $y = (y_1, \dots, y_N)$, where $y_i = (y_{ij}, 1 \le j \le n_i)$ are the observations for individual $i$. For the sake of clarity, we assume that each observation $y_{ij}$ takes its values in some subset $\mathsf{Y}$ of $\mathbb{R}$. The distribution of the vector of observations $y_i$ depends on a vector of individual parameters $\psi_i$ that takes its values in a subset of $\mathbb{R}^p$. We assume that the pairs $(y_i, \psi_i)$ are mutually independent and consider a parametric framework: the joint distribution of $(y_i, \psi_i)$ is denoted by $p(y_i, \psi_i; \theta)$, where $\theta$ is the vector of fixed parameters of the model. A natural decomposition of this joint distribution writes $p(y_i, \psi_i; \theta) = p(y_i \mid \psi_i; \theta)\, p(\psi_i; \theta)$, where $p(y_i \mid \psi_i; \theta)$ is the conditional distribution of the observations given the individual parameters, and $p(\psi_i; \theta)$ is the so-called population distribution used to describe the distribution of the individual parameters within the population. A particular case of this general framework consists in describing each individual parameter vector as the sum of a typical value $\psi_{\mathrm{pop}}$ and a vector of individual random effects $\eta_i$: $\psi_i = \psi_{\mathrm{pop}} + \eta_i$. In the sequel, we will assume a multivariate Gaussian distribution for the random effects: $\eta_i \sim \mathcal{N}(0, \Omega)$. Several extensions of this model are straightforward, considering for instance transformations of the normal distribution, or adding individual covariates to the model.

2.2 Continuous Data Models

A regression model is used to express the link between continuous observations and individual parameters:

$$y_{ij} = f(t_{ij}, \psi_i) + \varepsilon_{ij}, \quad 1 \le j \le n_i, \qquad (1)$$

where $y_{ij}$ is the $j$-th observation for individual $i$, measured at time $t_{ij}$, $\varepsilon_{ij}$ is the residual error, and $f$ is the structural model, assumed to be a twice differentiable function of $\psi_i$. We start by assuming that the residual errors are independent and normally distributed with zero mean and constant variance $\sigma^2$. Let $t_i = (t_{ij}, 1 \le j \le n_i)$ be the vector of observation times for individual $i$. Then, the model for the observations rewrites $y_i = f_i(\psi_i) + \varepsilon_i$, where $f_i(\psi_i) = (f(t_{i1}, \psi_i), \dots, f(t_{i n_i}, \psi_i))$ and $\varepsilon_i \sim \mathcal{N}(0, \sigma^2 I_{n_i})$. If we assume that $\psi_i \sim \mathcal{N}(\psi_{\mathrm{pop}}, \Omega)$, then the parameters of the model are $\theta = (\psi_{\mathrm{pop}}, \Omega, \sigma^2)$.

3 Sampling from Conditional Distributions

The conditional distribution $p(\psi_i \mid y_i; \theta)$ plays a crucial role in most methods used for inference in nonlinear mixed effects models.

One of the main tasks to perform is to compute the maximum likelihood (ML) estimate of $\theta$, $\hat{\theta} = \arg\max_{\theta} \mathcal{L}(\theta, y)$, where $\mathcal{L}(\theta, y) = \log p(y; \theta)$. The stochastic approximation version of EM (Delyon et al., 1999) is an iterative procedure for ML estimation that requires generating one or several realisations of this conditional distribution at each iteration of the algorithm.

The Metropolis-Hastings (MH) algorithm is a powerful MCMC procedure widely used for sampling from a complex distribution (Brooks et al., 2011). To simplify the notation, we drop the dependency on $\theta$ and write $\pi_i(\psi_i) := p(\psi_i \mid y_i)$. For a given individual $i$, the MH algorithm to sample from this conditional distribution is described as follows:

Initialization: Initialize the chain $\psi_i^{(0)}$ by sampling from some initial distribution $\xi_i$.
Iteration $k$: given the current state of the chain $\psi_i^{(k-1)}$:

  1. Sample a candidate $\psi^c$ from a proposal distribution $q(\cdot \mid \psi_i^{(k-1)})$.

  2. Compute the MH ratio:

    $$\alpha(\psi_i^{(k-1)}, \psi^c) = \frac{\pi_i(\psi^c)}{\pi_i(\psi_i^{(k-1)})}\,\frac{q(\psi_i^{(k-1)} \mid \psi^c)}{q(\psi^c \mid \psi_i^{(k-1)})} \qquad (2)$$

  3. Set $\psi_i^{(k)} = \psi^c$ with probability $\min(1, \alpha(\psi_i^{(k-1)}, \psi^c))$ (otherwise, keep $\psi_i^{(k)} = \psi_i^{(k-1)}$).

Algorithm 1 Metropolis-Hastings algorithm
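For illustration, here is a minimal Python sketch of Algorithm 1; it is not the authors' implementation, and log_target, propose and log_q are placeholders for the unnormalised log conditional density $\log \pi_i$, the proposal sampler and the proposal log-density:

```python
import numpy as np

def metropolis_hastings(log_target, propose, log_q, psi0, n_iter, rng=None):
    # Generic MH sampler (Algorithm 1) targeting pi_i, known up to a constant.
    # log_target(psi): unnormalised log pi_i(psi)
    # propose(psi, rng): draw a candidate from q(. | psi)
    # log_q(psi_to, psi_from): log q(psi_to | psi_from), up to a constant
    rng = rng or np.random.default_rng()
    psi = np.asarray(psi0, dtype=float)
    chain = np.empty((n_iter, psi.size))
    for k in range(n_iter):
        psi_c = propose(psi, rng)
        # MH ratio (2), computed on the log scale for numerical stability
        log_alpha = (log_target(psi_c) - log_target(psi)
                     + log_q(psi, psi_c) - log_q(psi_c, psi))
        # Accept with probability min(1, alpha); log u < 0 handles the min
        if np.log(rng.uniform()) < log_alpha:
            psi = psi_c
        chain[k] = psi
    return chain
```

For a symmetric proposal such as the RWM, the two $q$ terms cancel in (2) and log_q can simply return 0.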

Current implementations of the MCMC algorithm, to which we will compare our new method, in Monolix (Chan et al., 2011), saemix (R package, Comets et al., 2017), nlmefitsa (Matlab) and NONMEM (Beal and Sheiner, 1980), mainly use the same combination of proposals. The first proposal is an independent Metropolis-Hastings algorithm, which consists in sampling the candidate state directly from the marginal distribution of the individual parameter, $p(\psi_i; \theta)$. The other proposals are component-wise and block-wise random walk procedures (Metropolis et al., 1953) that update different components of $\psi_i$ using univariate and multivariate Gaussian proposal distributions. Nevertheless, these proposals fail to take into account the nonlinear dependence structure of the individual parameters. A way to alleviate these problems is to use a proposal distribution derived from a discretised Langevin diffusion whose drift term is the gradient of the logarithm of the target density, leading to the Metropolis Adjusted Langevin Algorithm (MALA) (Roberts and Tweedie, 1996; Stramer and Tweedie, 1999). The MALA proposal is given by:

$$\psi^c = \psi_i^{(k-1)} + \gamma\, \nabla \log \pi_i(\psi_i^{(k-1)}) + \sqrt{2\gamma}\, Z_k, \qquad (3)$$

where $\gamma$ is a positive stepsize and $Z_k$ is a standard multivariate Gaussian variable. These methods still do not take into consideration the multidimensional structure of the individual parameters. Recent works include efforts in that direction, such as the Anisotropic MALA, for which the covariance matrix of the proposal depends on the gradient of the target measure (Allassonniere and Kuhn, 2013). The MALA algorithm is a special instance of Hybrid Monte Carlo (HMC), introduced in Neal et al. (2011) (see Brooks et al. (2011) and the references therein), which consists in augmenting the state space with an auxiliary variable $v$, known as the velocity in Hamiltonian dynamics.
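As a concrete sketch of the MALA proposal (3), under the same conventions as the MH sketch above, the candidate and the proposal log-density may be written as follows; grad_log_target is assumed to be available (e.g. via automatic differentiation), and gamma is the stepsize to be tuned:

```python
import numpy as np

def mala_propose(psi, grad_log_target, gamma, rng):
    # One MALA candidate: drift along the gradient of log pi, plus noise,
    # psi_c = psi + gamma * grad log pi(psi) + sqrt(2 gamma) Z, Z ~ N(0, I).
    noise = np.sqrt(2.0 * gamma) * rng.standard_normal(psi.size)
    return psi + gamma * grad_log_target(psi) + noise

def mala_log_q(psi_to, psi_from, grad_log_target, gamma):
    # Log-density of N(psi_from + gamma * grad log pi(psi_from), 2 gamma I)
    # at psi_to, up to an additive constant that cancels in the MH ratio (2).
    mean = psi_from + gamma * grad_log_target(psi_from)
    return -np.sum((psi_to - mean) ** 2) / (4.0 * gamma)
```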

All these methods aim at finding the proposal that accelerates the convergence of the chain. Unfortunately, they are computationally involved and can be difficult to implement: stepsizes need to be tuned and derivatives of the model need to be computed.

We see in the next section how to define a multivariate Gaussian proposal for both continuous and noncontinuous data models that is easy to implement and that takes into account the multidimensional structure of the individual parameters in order to accelerate the MCMC procedure.

4 A Multivariate Gaussian Proposal

For a given parameter value $\theta$, the MAP estimate, for individual $i$, of $\psi_i$ is the value that maximises the conditional distribution $p(\psi_i \mid y_i; \theta)$:

$$\hat{\psi}_i = \arg\max_{\psi_i} p(\psi_i \mid y_i; \theta) = \arg\max_{\psi_i} p(y_i, \psi_i; \theta).$$

4.1 General Data Models

For both continuous and noncontinuous data models, the goal is to find a simple proposal, a multivariate Gaussian distribution in our case, that approximates the target distribution $p(\psi_i \mid y_i; \theta)$. In our context, we can write the marginal pdf that we aim to approximate as $p(y_i; \theta) = \int p(y_i, \psi_i; \theta)\, \mathrm{d}\psi_i$. Then, the second-order Taylor expansion of $\log p(y_i, \psi_i; \theta)$ around the MAP $\hat{\psi}_i$ (which by definition verifies $\nabla_{\psi}\log p(y_i, \hat{\psi}_i; \theta) = 0$) yields the Laplace approximation of $\log p(y_i; \theta)$ as follows:

$$\log p(y_i; \theta) \approx \log p(y_i, \hat{\psi}_i; \theta) + \frac{p}{2}\log(2\pi) - \frac{1}{2}\log\left|-H_i\right|,$$

where $H_i$ is the Hessian of $\log p(y_i, \cdot\,; \theta)$ evaluated at $\hat{\psi}_i$ and $p$ is the dimension of $\psi_i$. We thus obtain the following approximation of $\log p(\psi_i \mid y_i; \theta)$:

$$\log p(\psi_i \mid y_i; \theta) \approx -\frac{p}{2}\log(2\pi) + \frac{1}{2}\log\left|-H_i\right| - \frac{1}{2}(\psi_i - \hat{\psi}_i)^\top (-H_i)(\psi_i - \hat{\psi}_i),$$

which is precisely the log-pdf of a multivariate Gaussian distribution with mean $\hat{\psi}_i$ and variance-covariance matrix $-H_i^{-1}$, evaluated at $\psi_i$.

Proposition 1

The Laplace approximation of the conditional distribution $p(\psi_i \mid y_i; \theta)$ is a multivariate Gaussian distribution with mean $\hat{\psi}_i$ and variance-covariance matrix $-H_i^{-1}$.
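A minimal sketch of how this proposal can be computed in practice, assuming only a routine log_joint evaluating $\log p(y_i, \psi_i; \theta)$; the MAP is found numerically and the Hessian is approximated by central finite differences (the actual implementation may instead use automatic differentiation):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_proposal(log_joint, psi_init, eps=1e-5):
    # Gaussian approximation N(psi_map, -H^{-1}) of Proposition 1.
    # log_joint(psi): log p(y_i, psi; theta) for one individual, theta fixed.
    res = minimize(lambda p: -log_joint(p), psi_init, method="BFGS")
    psi_map = res.x
    d = psi_map.size
    # Hessian of log_joint at the MAP by central finite differences
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.eye(d)[i] * eps
            ej = np.eye(d)[j] * eps
            H[i, j] = (log_joint(psi_map + ei + ej) - log_joint(psi_map + ei - ej)
                       - log_joint(psi_map - ei + ej) + log_joint(psi_map - ei - ej)) / (4 * eps ** 2)
    cov = np.linalg.inv(-H)  # variance-covariance of the Gaussian proposal
    return psi_map, cov
```

The pair (psi_map, cov) then defines the independent Gaussian proposal of Proposition 1.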

We shall now see another method to derive a Gaussian proposal distribution in the specific case of continuous data models.

4.2 Nonlinear Continuous Data Models

When the model is described by (1), the approximation of the target distribution can be done in two ways: either by using the Laplace approximation, as explained above, or by linearizing the structural model for any individual of the population. Once the MAP estimate $\hat{\psi}_i$ has been computed, using an optimisation procedure, the method is based on the linearisation of the structural model $f_i$ around $\hat{\psi}_i$:

$$f_i(\psi_i) \approx f_i(\hat{\psi}_i) + J_i (\psi_i - \hat{\psi}_i), \qquad (4)$$

where $J_i$ is the Jacobian matrix of the vector $f_i(\psi_i)$, evaluated at $\hat{\psi}_i$. Defining $z_i := y_i - f_i(\hat{\psi}_i) + J_i \hat{\psi}_i$ yields the linear model $z_i = J_i \psi_i + \varepsilon_i$, whose tractable conditional distribution can be used to approximate $p(\psi_i \mid y_i; \theta)$:

Proposition 2

Under this linear model, the conditional distribution $p(\psi_i \mid y_i; \theta)$ is a Gaussian distribution with mean $\hat{\psi}_i$ and variance-covariance matrix $\Gamma_i$, where

$$\Gamma_i = \left( \frac{J_i^\top J_i}{\sigma^2} + \Omega^{-1} \right)^{-1}. \qquad (5)$$

We can note that linearizing the structural model is equivalent to using the Laplace approximation with the expected information matrix. Indeed:

$$-\mathbb{E}_{y_i}\left[ H_i \right] = \frac{J_i^\top J_i}{\sigma^2} + \Omega^{-1} = \Gamma_i^{-1}. \qquad (6)$$

We then use this normal distribution as a proposal in Algorithm 1 for model (1).
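A minimal sketch of this construction under the assumptions of Section 2.2; the Jacobian is approximated here by forward finite differences, whereas the paper computes it by automatic differentiation:

```python
import numpy as np

def linearised_proposal(f_i, psi_map, sigma2, Omega, eps=1e-6):
    # Gaussian proposal N(psi_map, Gamma_i) of Proposition 2, with
    # Gamma_i = (J^T J / sigma^2 + Omega^{-1})^{-1}.
    # f_i(psi): structural model predictions at individual i's design times.
    psi_map = np.asarray(psi_map, dtype=float)
    f0 = f_i(psi_map)
    J = np.empty((f0.size, psi_map.size))
    # Jacobian of f_i at the MAP by forward finite differences
    for k in range(psi_map.size):
        step = np.zeros(psi_map.size)
        step[k] = eps
        J[:, k] = (f_i(psi_map + step) - f0) / eps
    Gamma = np.linalg.inv(J.T @ J / sigma2 + np.linalg.inv(Omega))
    return psi_map, Gamma

# Example use as an independent proposal in the MH sampler sketched in
# Section 3 (the candidate does not depend on the current state):
#   from scipy.stats import multivariate_normal
#   q = multivariate_normal(mean=psi_map, cov=Gamma)
#   propose = lambda psi, rng: q.rvs(random_state=rng)
#   log_q = lambda psi_to, psi_from: q.logpdf(psi_to)
```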

5 A Pharmacokinetic Example

5.1 Data and Model

Thirty-two healthy volunteers received a 1.5 mg/kg single oral dose of warfarin, an anticoagulant normally used in the prevention of thrombosis (O’Reilly and Aggeler, 1968), and warfarin plasma concentration was measured at different times for each of them. We will consider a one-compartment pharmacokinetics (PK) model for oral administration, assuming first-order absorption and linear elimination processes:

$$f(t, \psi) = \frac{D\, k_a}{V (k_a - k_e)} \left( e^{-k_e t} - e^{-k_a t} \right), \qquad (7)$$

where $k_a$ is the absorption rate constant, $V$ the volume of distribution, $k_e$ the elimination rate constant, and $D$ the dose administered. Here, $k_a$, $V$ and $k_e$ are PK parameters that can change from one individual to another. Then, let $\psi_i = (k_{a,i}, V_i, k_{e,i})$ be the vector of individual PK parameters for individual $i$, assumed to be lognormally distributed. We will assume in this example that the residual errors are independent and normally distributed with mean 0 and variance $\sigma^2$. We can use the proposal given by Proposition 2, based on a linearisation of the structural model (7). For the method to be easily extended to any structural model, the gradient is calculated by automatic differentiation using the R package madness (Pav, 2016).
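For reference, a sketch of the structural model (7) in Python; the default dose value is illustrative only (in the study each subject received a single 1.5 mg/kg oral dose, so the actual dose depends on body weight):

```python
import numpy as np

def warfarin_model(t, psi, dose=100.0):
    # One-compartment model (7) with first-order absorption and linear
    # elimination; psi = (ka, V, ke), t may be a scalar or an array of times.
    ka, V, ke = psi
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
```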

5.2 MCMC Convergence Diagnostic

We will consider one of the 32 individuals for this study and fix $\theta$ to some arbitrary value, close to the maximum likelihood (ML) estimate obtained with SAEM (saemix R package, Comets et al. (2017)). First, we compare our nlme-IMH, which is an MH sampler using our new proposal, with the RWM and with the MALA, whose proposal at iteration $k$ is given by (3). The stepsize $\gamma$ is constant and is tuned such that the optimal acceptance rate advocated in Roberts et al. (1997) is reached. The same number of iterations is run for each algorithm. Figure 1 highlights that the stabilisation of the quantiles under the MALA is similar to that of our method for all orders and dimensions. The NUTS, implemented in rstan (Stan Development Team, 2018), is fast and steady, and presents convergence behaviours similar to, or for some quantiles and dimensions even better than, our method (see Figure 1).

Figure 1: Modelling of the warfarin PK data: convergence of the empirical quantiles of order 0.1, 0.5 and 0.9 of the sampled individual parameters for a single individual. Our new MH algorithm is in red and dotted, the RWM in black and solid, the MALA in blue and dashed, and the NUTS in green and dashed.
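A minimal sketch of the quantile-stabilisation diagnostic underlying Figure 1, assuming chain is the array of draws returned by one of the samplers above (this is our reconstruction of the diagnostic, not the authors' plotting code):

```python
import numpy as np

def running_quantiles(chain, orders=(0.1, 0.5, 0.9), n_points=100):
    # Empirical quantiles computed on growing prefixes of the chain;
    # flat curves indicate that the quantiles have stabilised.
    n = chain.shape[0]
    ks = np.linspace(1, n, n_points, dtype=int)
    qs = {q: np.array([np.quantile(chain[:k], q, axis=0) for k in ks])
          for q in orders}
    return ks, qs
```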

Then, we produce independent parallel runs of the RWM, the IMH using our proposal distribution (called the nlme-IMH algorithm), the MALA and the NUTS. The boxplots of the samples drawn at given iteration thresholds are presented in Figure 2 against the ground truth (calculated by running the NUTS for a large number of iterations) for the parameter $k_a$.

For the three numbers of iterations considered in Figure 2, the medians of the nlme-IMH and NUTS samples are closest to the ground truth. Figure 2 also highlights that all these methods succeed in sampling from the whole distribution after enough iterations. Similar comments can be made for the other parameters.

Figure 2: Modelling of the warfarin PK data: boxplots over parallel runs for the RWM, the nlme-IMH, the MALA and the NUTS algorithms. The ground-truth median and percentiles are plotted as dashed purple lines, and the maximum and minimum as dashed grey lines.

We decided to conduct the comparison between these sampling methods in terms of number of iterations only (one iteration being one transition of the Markov chain). We acknowledge that the transition cost is not the same for each of these algorithms; however, apart from the initialisation step, which requires a MAP and a Jacobian computation, our nlme-IMH algorithm has the same cost per iteration as the RWM. Since calls to the structural model are very costly in real applications (when the model is the solution of a complex ODE, for instance), the MALA and the NUTS, which compute its first-order derivatives at each transition, are far more computationally involved.

Since computational costs per transition are hard to define accurately for each sampling algorithm, and since runtime depends on the actual implementation of these methods, comparisons here are based on the number of iterations of the chain.

6 Conclusion and Discussion

We presented in this article an independent Metropolis-Hastings procedure for sampling random effects from their conditional distributions in nonlinear mixed effects models. The numerical experiments that we conducted show that the proposed sampler converges to the target distribution as fast as state-of-the-art samplers. This good practical behaviour is partly explained by the fact that the conditional mode of the random effects in the linearised model coincides with the conditional mode of the random effects in the original model. Initial experiments embedding this fast and easy-to-implement IMH algorithm within the SAEM algorithm (Delyon et al., 1999) for maximum likelihood estimation indicate faster convergence.

Bibliography

  • S. Allassonniere and E. Kuhn (2013) Convergent stochastic expectation maximization algorithm with efficient sampling in high dimension. Application to deformable template model estimation. arXiv preprint arXiv:1207.5938.
  • S. Beal and L. Sheiner (1980) The NONMEM system. The American Statistician 34 (2), pp. 118–119.
  • M. Betancourt (2017) A conceptual introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.
  • S. Brooks, A. Gelman, G. Jones, and X. Meng (2011) Handbook of Markov Chain Monte Carlo. CRC Press.
  • P. L. S. Chan, P. Jacqmin, M. Lavielle, L. McFadyen, and B. Weatherley (2011) The use of the SAEM algorithm in MONOLIX software for estimation of population pharmacokinetic-pharmacodynamic-viral dynamics parameters of maraviroc in asymptomatic HIV subjects. Journal of Pharmacokinetics and Pharmacodynamics 38 (1), pp. 41–61.
  • E. Comets, A. Lavenu, and M. Lavielle (2017) Parameter estimation in nonlinear mixed effect models using saemix, an R implementation of the SAEM algorithm. Journal of Statistical Software 80 (3), pp. 1–42.
  • B. Delyon, M. Lavielle, and E. Moulines (1999) Convergence of a stochastic approximation version of the EM algorithm. Annals of Statistics 27 (1), pp. 94–128.
  • B. Karimi, M. Lavielle, and E. Moulines (2017) Non-linear mixed effects models: bridging the gap between independent Metropolis-Hastings and variational inference. ICML 2017 Implicit Models Workshop.
  • M. Lavielle (2014) Mixed Effects Models for the Population Approach: Models, Tasks, Methods and Tools. CRC Press.
  • N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller (1953) Equation of state calculations by fast computing machines. The Journal of Chemical Physics 21 (6), pp. 1087–1092.
  • R. M. Neal et al. (2011) MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo 2 (11).
  • R. A. O’Reilly and P. M. Aggeler (1968) Studies on coumarin anticoagulant drugs: initiation of warfarin therapy without a loading dose. Circulation 38 (1), pp. 169–177.
  • S. E. Pav (2016) madness: a package for multivariate automatic differentiation. R package.
  • C. P. Robert and G. Casella (2004) Monte Carlo Statistical Methods. Springer Texts in Statistics.
  • G. O. Roberts, A. Gelman, and W. R. Gilks (1997) Weak convergence and optimal scaling of random walk Metropolis algorithms. Annals of Applied Probability 7 (1), pp. 110–120.
  • G. O. Roberts and R. L. Tweedie (1996) Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli 2 (4), pp. 341–363.
  • Stan Development Team (2018) RStan: the R interface to Stan. R package version 2.17.3.
  • O. Stramer and R. L. Tweedie (1999) Langevin-type models I: diffusions with given stationary distributions and their discretizations. Methodology and Computing in Applied Probability 1 (3), pp. 283–306.