Estimating the Expected Value of Sample Information across Different Sample Sizes using Moment Matching and Non-Linear Regression

04/25/2018 · Anna Heath, et al.

Background: The Expected Value of Sample Information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty in a health economic model. This has potential as a tool for trial design; the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimising over a large number of different designs.

Methods: This paper develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment matching approach to EVSI estimation to optimise over different sample sizes for the underlying trial, with a computational cost similar to that of a single EVSI estimate. This extension calculates posterior variances across the alternative sample sizes and then uses Bayesian non-linear regression to calculate the EVSI.

Results: A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. This example also highlights how different trial designs can be compared using the EVSI.

Conclusion: The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realise the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models.

Limitations: Our method relies on some additional simulation, which can be expensive in models with very large computational cost.


1 Introduction

The Expected Value of Sample Information (EVSI) [32] quantifies the expected economic benefit of a data collection exercise with a given design. In general, new data update information about the parameters underlying a health economic model. This, in turn, updates information about the optimal treatment or intervention and thereby reduces the possibility of funding an inefficient treatment, which may only have been deemed to be economically efficient due to deficiencies in the current information.

The EVSI is a Value of Information (VoI) measure [22] used to analyse the potential benefit of resolving uncertainty in a decision model. Amongst these measures, the EVSI has the greatest potential to inform research decisions as it quantifies the economic benefit of a specific study. This value can be compared directly with the cost of the trial to determine whether a study should go ahead, making the EVSI an important tool for decision makers and funders looking to efficiently allocate research and development resources [26].

In addition to this, the EVSI has potential as a tool for trial design when it is calculated for a number of different designs [40]. The trial with the largest "net" economic value is then selected by considering the cost of implementation alongside the EVSI. Using the EVSI in this setting ensures that the data collected in the study are sufficient to reliably inform the health economic model. Therefore, following the study, the cost-effectiveness of the different treatments can be correctly assessed, which is a key requirement within publicly funded health systems where treatments/interventions must demonstrate cost-effectiveness over competitors [11, 13, 9, 12]. However, the computational time required to calculate the EVSI based on realistic health economic models has restricted its practical application [33], meaning that studies are normally designed using power calculations to determine an appropriate sample size.

With the exception of cases in which the health economic model conforms to certain conditions [1, 39, 7], traditionally, the EVSI has been estimated using computationally intensive nested Monte Carlo simulations [28, 2, 31, 38]. However, recent methodological advances have employed various methods to avoid the need for full nested simulations, irrespective of the underlying model structure [35, 27, 25, 21, 24]. In theory, these methods have unlocked the power of EVSI analysis for use in research prioritisation and study design.

Unfortunately, using the EVSI as a tool to identify the optimal study design still requires calculations to be made across a number of different possible data collection exercises, e.g. changing the sample size or the main outcome. In this paper, we present an extension to the “moment matching” estimation method [21, 17] that calculates the EVSI across different sample sizes for the future study with approximately the same computational cost as a single estimate.

Our extension uses Bayesian non-linear regression to estimate the EVSI across different sample sizes, by incorporating prior information about the behaviour of EVSI. It also models uncertainty in the EVSI that arises from its estimation using simulation. The resulting quantification of uncertainty in the EVSI estimate allows funders to assess the accuracy of their research and development decisions.

The moment matching method only requires that the health economic model is based on a Bayesian model that incorporates the data arising from the study. This requirement comes directly from the definition of the EVSI and imposes no restrictions on the complexity of the health economic model or data collection exercise used to calculate the EVSI. Additionally, this requirement allows EVSI calculations to be undertaken whilst considering common trial issues such as missingness or dropout in the data. Therefore, this extension to the moment matching method can find the optimal sample size while taking into account realistic assumptions about the study and the model.

To present this extension, we briefly introduce the EVSI in §2 and the standard moment matching method [20] in §3. Following this, in §4 we present a Bayesian non-linear model that can be used to estimate the EVSI across different sample sizes based on one simulation per sample size. Finally, in §5 we present a practical implementation of this method using a real-life health economic model developed to compare treatments for chronic pain [36]. We compare our estimates with the computationally intensive nested Monte Carlo simulation method to demonstrate its accuracy. We also explore how the EVSI can be used as a tool for trial design by considering two alternative designs across a range of sample sizes.

2 The Expected Value of Sample Information

The EVSI quantifies the economic benefit of reducing uncertainty in a health economic model using a specific method for data collection. Therefore, we first define a health economic model to compare treatments for a specific disease/health state. This is typically defined in terms of a large vector of parameters $\boldsymbol\theta$, used to describe the properties of the underlying disease, for example survival probabilities, Quality of Life measures and health service costs. In a Bayesian setting, a joint probability distribution $p(\boldsymbol\theta)$ defines the current uncertainty level in these parameters and is typically informed by a number of different data sources, such as clinical trials augmented by additional information from the literature, in a process known as Probabilistic Sensitivity Analysis (PSA) [5]. The corresponding frequentist bootstrap approach to PSA typically models each parameter in $\boldsymbol\theta$ individually to represent the current level of uncertainty in the model parameters.

In a health economic model, the value of each treatment is normally defined using a "net benefit" function dependent on the model parameters, $\mathrm{NB}_t(\boldsymbol\theta)$ for $t = 1, \dots, T$ possible interventions/alternatives. The probability distribution $p(\boldsymbol\theta)$ then induces a distribution for the net benefit of each treatment. Typically, we consider that variation due to individual-level response to treatment has been marginalised out, so if the parameters were known exactly then the net benefit of each treatment would also be known. The optimal treatment given current evidence is then found by averaging out all uncertainty for each treatment, $\mathbb{E}_{\boldsymbol\theta}[\mathrm{NB}_t(\boldsymbol\theta)]$, and then finding the most valuable treatment, $t^* = \arg\max_t \mathbb{E}_{\boldsymbol\theta}[\mathrm{NB}_t(\boldsymbol\theta)]$.

However, the current uncertainty in the parameters may suggest that the incorrect treatment is optimal. This is because additional information gleaned from a study would update the distribution of the model parameters which would then update the distribution of the net benefit. This could change the optimal treatment, indicating that the current information is not sufficient to make the correct treatment recommendation, for example, additional side effects could be observed when a new treatment is monitored for a longer period of time in a greater number of patients. In this case, the additional data have value as patients are prevented from receiving an inefficient treatment and financial resources are saved.

Formally, a study collects data, denoted $\boldsymbol X$, which update the information about the model parameters $\boldsymbol\theta$. This updating falls within the Bayesian paradigm, in which we consider $p(\boldsymbol\theta)$ as the prior distribution for the model parameters. The sampling distribution for the data, $p(\boldsymbol X \mid \boldsymbol\theta)$, is combined with the prior to determine a posterior for the model parameters, $p(\boldsymbol\theta \mid \boldsymbol X)$. The net benefit for each treatment is calculated as a function of $p(\boldsymbol\theta \mid \boldsymbol X)$ and the optimal treatment is found by maximising the posterior expected net benefit. The economic value of the study is then the difference between the expected net benefit of the optimal treatment with the updated information and with the current information.

However, as the EVSI is calculated before the data have been collected, the study outcome is unknown. Therefore, we consider a distribution for all the possible datasets that could arise from the study and calculate the value of every study outcome. The EVSI is the average study value over all the possible future datasets. Mathematically, this is expressed as

$$\mathrm{EVSI} = \mathbb{E}_{\boldsymbol X}\left[\max_t\, \mathbb{E}_{\boldsymbol\theta \mid \boldsymbol X}\left[\mathrm{NB}_t(\boldsymbol\theta)\right]\right] - \max_t\, \mathbb{E}_{\boldsymbol\theta}\left[\mathrm{NB}_t(\boldsymbol\theta)\right], \qquad (1)$$

where the distribution of $\boldsymbol X$ should be consistent with how the data would be analysed if the study were run. This means that we define a sampling distribution for the data given the parameters, $p(\boldsymbol X \mid \boldsymbol\theta)$, which is combined with $p(\boldsymbol\theta)$ to give the distribution of all the possible samples.

It is almost impossible to calculate the EVSI analytically, so in practical scenarios it is estimated using simulation. Computationally, the challenge is to compute the inner "posterior" expectation of the net benefit, $\mathbb{E}_{\boldsymbol\theta \mid \boldsymbol X}[\mathrm{NB}_t(\boldsymbol\theta)]$, for each treatment and each possible study outcome. If this posterior expectation is available in closed form, then estimating the EVSI by simulation requires little computational effort, as we simply require a large number of simulated study outcomes. However, in practical models this analytic expectation will rarely be available, so alternative methods must be used. The most general method is to estimate the inner expectation using simulations from the posterior of $\boldsymbol\theta$, conditional on each of the simulated study outcomes. This means that the simulations required to estimate the posterior expectation are nested within the simulations of possible study outcomes, leading to a nested Monte Carlo simulation method with relatively high computational cost, as typically around 1000 simulated study outcomes are used.
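To make this structure concrete, the following toy sketch (our own illustration, not code from the paper) implements the nested Monte Carlo estimator for a one-parameter conjugate normal model in which the incremental net benefit equals the parameter itself; in a realistic model, the inner draws would come from an MCMC run per simulated dataset.

# Illustrative only: nested Monte Carlo EVSI for a toy conjugate model,
# INB(theta) = theta with theta ~ N(mu0, s0^2) and X | theta ~ N(theta, s^2)
# collected on n patients. All names here are ours.
set.seed(1)
mu0 <- 0.5; s0 <- 1    # prior mean and standard deviation of INB
s   <- 2;   n  <- 25   # sampling standard deviation and study sample size
S.outer <- 1000        # simulated future datasets
S.inner <- 1000        # posterior simulations per dataset
post.mean <- numeric(S.outer)
for (r in 1:S.outer) {
  theta <- rnorm(1, mu0, s0)             # parameter drawn from the prior
  xbar  <- rnorm(1, theta, s / sqrt(n))  # sufficient summary of one dataset
  # Inner loop: posterior simulations (conjugate here, so sampled directly;
  # in a realistic model this step is an MCMC run per dataset)
  prec <- 1 / s0^2 + n / s^2
  post <- rnorm(S.inner, (mu0 / s0^2 + n * xbar / s^2) / prec, sqrt(1 / prec))
  post.mean[r] <- mean(post)             # inner "posterior" expectation
}
EVSI.nested <- mean(pmax(post.mean, 0)) - max(mu0, 0)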

Our "moment matching" method also estimates the EVSI using simulation but circumvents the need for full nested Monte Carlo simulation by estimating the "posterior" expectation across all the potential samples using a small number, between 30 and 50, of simulated future datasets [21].

3 The moment matching method

Thus far, we have presented the EVSI for health economic models that compare $T$ treatments. However, for simplicity, we restrict ourselves to the dual-decision setting, i.e. $T = 2$, before returning to the general setting in the supplementary material. To simplify the calculations still further, we work directly with the incremental net benefit, i.e. $\mathrm{INB}(\boldsymbol\theta) = \mathrm{NB}_1(\boldsymbol\theta) - \mathrm{NB}_2(\boldsymbol\theta)$. The incremental net benefit contains all the information required to make a decision between treatment 1 and treatment 2. Specifically, treatment 1 is optimal if the expected incremental net benefit is positive and treatment 2 is optimal otherwise. Therefore, to calculate the EVSI, we must estimate the posterior expectation of INB across the different study outcomes, which we denote $\mu_{\boldsymbol X} = \mathbb{E}_{\boldsymbol\theta \mid \boldsymbol X}[\mathrm{INB}(\boldsymbol\theta)]$. To further clarify the moment matching method, a full list of notation is given in Table 1 at the end of §4.

3.1 Estimating the distribution of $\mu_{\boldsymbol X}$ using moment matching

Formally, the moment matching method is used to estimate the distribution of $\mu_{\boldsymbol X}$, denoted $p(\mu_{\boldsymbol X})$, which is induced by the distribution of the potential datasets $p(\boldsymbol X)$. Specifically, the moment matching method exploits the similarity between $p(\mu_{\boldsymbol X})$ and the distribution of a specific function of INB to reduce the required number of nested Monte Carlo simulations [21]. To find this function of INB, we assume that the future sample only directly updates a subset of the model parameters $\boldsymbol\phi \subset \boldsymbol\theta$; for example, a clinical trial would inform the primary clinical outcome but is unlikely to give information on the societal costs of the disease. The distribution of $\mu_{\boldsymbol X}$ is then expected to be similar to the distribution of

$$\mathrm{INB}_{\boldsymbol\phi} = \mathbb{E}_{\boldsymbol\theta \mid \boldsymbol\phi}\left[\mathrm{INB}(\boldsymbol\theta)\right], \qquad (2)$$

except that the variance of $\mu_{\boldsymbol X}$ is smaller than the variance of $\mathrm{INB}_{\boldsymbol\phi}$ [21]. In practice, therefore, the moment matching method approximates $p(\mu_{\boldsymbol X})$ by linearly rescaling the simulated values of $\mathrm{INB}_{\boldsymbol\phi}$ to reduce their variance so it is equal to the variance of $\mu_{\boldsymbol X}$, denoted $\sigma^2$, which must also be estimated.

3.2 Estimating simulated values of $\mathrm{INB}_{\boldsymbol\phi}$

Simulated values of $\mathrm{INB}_{\boldsymbol\phi}$, denoted $\mathrm{INB}_{\boldsymbol\phi}^s$ for $s = 1, \dots, S$, can be found in a computationally efficient manner using methods developed for the calculation of the Expected Value of Partial Perfect Information (EVPPI), which is based on the expectation in equation (2). The main reason for using these methods is that the EVPPI is an upper bound for the EVSI and is much simpler to compute. As the EVSI must be large to indicate that a study is worthwhile, a study with a small EVPPI, the upper limit of the value, will not be economically viable, irrespective of design. Therefore, the EVPPI should always be calculated before the EVSI as a screening procedure [37], implying that the $\mathrm{INB}_{\boldsymbol\phi}^s$ should be available before attempting to calculate the EVSI.

In most health economic models, an EVPPI calculation method based on non-parametric regression is fast and accurate [20]. This uses regression to estimate $\mathrm{INB}_{\boldsymbol\phi}$ and is easily generalisable, implying that general-purpose software [4] or stand-alone functions [34] can estimate $\mathrm{INB}_{\boldsymbol\phi}$ based solely on the PSA simulations for $\boldsymbol\theta$ and INB.
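As a concrete sketch of this regression step (toy PSA simulations; all names are ours), mgcv's gam() can produce the fitted values that estimate $\mathrm{INB}_{\boldsymbol\phi}$, mirroring the call used in Appendix D:

library(mgcv)
set.seed(1)
# Stand-in PSA simulations: two parameters of interest and the INB
phi1 <- rnorm(5000); phi2 <- rnorm(5000)
INB  <- 1000 * (phi1 + phi2) + rnorm(5000, 0, 2000)
# The fitted values of the smooth regression estimate INB_phi = E[INB | phi]
INB.phi <- gam(INB ~ te(phi1, phi2))$fitted.values
EVPPI <- mean(pmax(INB.phi, 0)) - max(mean(INB), 0)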

3.3 Estimating the variance of $\mu_{\boldsymbol X}$

The variance of $\mu_{\boldsymbol X}$, denoted $\sigma^2$, is estimated using the law of total variance:

$$\sigma^2 = \mathrm{Var}\left[\mu_{\boldsymbol X}\right] = \mathrm{Var}\left[\mathrm{INB}(\boldsymbol\theta)\right] - \mathbb{E}_{\boldsymbol X}\left[\mathrm{Var}_{\boldsymbol\theta \mid \boldsymbol X}\left[\mathrm{INB}(\boldsymbol\theta)\right]\right].$$

Firstly, the variance of $\mathrm{INB}(\boldsymbol\theta)$, denoted $\sigma^2_{\mathrm{INB}}$, can be estimated by the sample variance of the PSA simulations of INB. Therefore, $\sigma^2$ can be estimated by determining the expected posterior variance of the incremental net benefit, where the expectation is taken across the different potential datasets.

This expected posterior variance for INB can actually be estimated accurately using a small number of nested simulations, $Q$, with $30 \le Q \le 50$, provided the nested samples are undertaken using a specific method, demonstrated pictorially in Figure 1. This method begins with a standard PSA procedure, represented in the left-hand panel of Figure 1. In this procedure, the parameters are simulated from $p(\boldsymbol\theta)$ and then used to calculate the INB.

To proceed, we select the columns of the PSA matrix that contain the simulations for the parameters of interest $\boldsymbol\phi$. These columns are then ordered separately, represented by the middle panel in Figure 1. Finally, $Q$ equally spaced values are selected from these ordered lists to find the sample quantiles for each element of $\boldsymbol\phi$. For each row of sample quantiles, $\boldsymbol\phi_q$, $q = 1, \dots, Q$, one sample $\boldsymbol X_q$ is simulated from $p(\boldsymbol X \mid \boldsymbol\phi_q)$, represented in the right-hand graphic. To estimate the expected posterior variance, each of these simulated future samples $\boldsymbol X_q$ is used to update the posterior for the model parameters and thereby induces a posterior distribution for INB. In turn, this is estimated using simulations, with which the posterior variance of INB, denoted $\sigma^2_q$, can be estimated for each future sample. Finally, the variance $\sigma^2$ is estimated by taking the average of the $\sigma^2_q$ and subtracting this from $\sigma^2_{\mathrm{INB}}$:

$$\hat\sigma^2 = \sigma^2_{\mathrm{INB}} - \frac{1}{Q}\sum_{q=1}^{Q}\sigma^2_q.$$

Figure 1: A pictorial representation of the simulation method used to estimate the posterior variances $\sigma^2_q$. The left-hand panel represents the standard PSA procedure; darker lines represent large values of the variable of interest while lighter lines represent smaller values. The middle panel selects the parameters of interest $\boldsymbol\phi$ and reorders them from smallest to largest. The right-hand panel demonstrates that the quantiles for $\boldsymbol\phi$ are selected and then a dataset is simulated for each row.
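The following sketch mirrors this procedure for the toy conjugate normal model used in the earlier sketch. In that model the posterior variance is available in closed form, so the update step is a single line; in realistic models each $\sigma^2_q$ would be computed from an MCMC run, as in Appendix D.

set.seed(1)
mu0 <- 0.5; s0 <- 1              # prior mean and sd of theta (= INB)
s <- 2; n <- 25; Q <- 50         # data sd, trial size, nested simulations
theta.sims <- rnorm(10000, mu0, s0)  # PSA simulations; here phi = theta
INB.sims   <- theta.sims             # toy model: INB(theta) = theta
# Q equally spaced sample quantiles of the parameter of interest
phi.q <- quantile(theta.sims, probs = (1:Q) / (Q + 1))
sigma.q <- numeric(Q)
for (q in 1:Q) {
  xbar <- rnorm(1, phi.q[q], s / sqrt(n))  # one dataset X_q per quantile row
  # Posterior variance of INB given X_q; closed form in the conjugate toy,
  # but estimated from posterior simulations in realistic models
  sigma.q[q] <- 1 / (1 / s0^2 + n / s^2)
}
# Law of total variance: sigma^2 = Var(INB) minus the mean posterior variance
sigma2 <- var(INB.sims) - mean(sigma.q)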

3.4 Linear rescaling of $\mathrm{INB}_{\boldsymbol\phi}$ to estimate the EVSI

Once $\sigma^2$ has been estimated, the EVSI is calculated by rescaling the simulated values $\mathrm{INB}_{\boldsymbol\phi}^s$ so that their variance is equal to $\sigma^2$. This is achieved by first calculating

$$\eta_s = \sqrt{\frac{\sigma^2}{\sigma^2_{\boldsymbol\phi}}}\left(\mathrm{INB}_{\boldsymbol\phi}^s - \mu\right) + \mu, \quad s = 1, \dots, S, \qquad (3)$$

where $\mu$ is estimated using the PSA simulations of INB and $\sigma^2_{\boldsymbol\phi}$ is the sample variance of the $\mathrm{INB}_{\boldsymbol\phi}^s$, $s = 1, \dots, S$. The EVSI is then calculated from the $\eta_s$ by taking

$$\widehat{\mathrm{EVSI}} = \frac{1}{S}\sum_{s=1}^{S}\max\{\eta_s, 0\} - \max\{\mu, 0\}. \qquad (4)$$
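Continuing the toy sketch from §3.3, equations (3) and (4) then reduce to a few lines (in the toy model $\mathrm{INB}_{\boldsymbol\phi}$ coincides with the simulated parameter, so no regression step is needed):

INB.phi <- theta.sims        # in the toy model INB_phi = theta exactly
mu <- mean(INB.sims)         # estimated from the PSA simulations
eta <- sqrt(sigma2 / var(INB.phi)) * (INB.phi - mu) + mu  # equation (3)
EVSI.mm <- mean(pmax(eta, 0)) - max(mu, 0)                # equation (4)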

4 Calculating the EVSI for different sample sizes

A key aspect of designing a trial is to determine the number of patients that should be enrolled. Typically, this is achieved using power calculations that determine the trial sample size by ensuring that a hypothesis test concerning the primary outcome has desirable properties [10]. To use the EVSI alongside, or as an alternative to, these power calculations requires that the EVSI is estimated across different sample sizes. For example, financial or clinical constraints limit the possible sample sizes for a trial to lie between $N_{\min}$ and $N_{\max}$, and the EVSI must then be estimated on a grid of $W$ sample sizes between these limits, $N_{\min} \le N_1 < \dots < N_W \le N_{\max}$.

Using the moment matching method to estimate the EVSI across these sample sizes would require $Q \times W$ posterior updates. This quickly becomes infeasible as $W$, the number of sample sizes considered, increases. Therefore, this section extends the moment matching method to estimate the EVSI across different sample sizes by setting $W = Q$ and only requiring one posterior update per sample size. This reduces the required number of posterior updates from $Q \times W$ to $Q$, the same as for the standard method.

For this extension, each potential dataset $\boldsymbol X_q$ is simulated conditional on the pair $(\boldsymbol\phi_q, N_q)$, rather than solely on $\boldsymbol\phi_q$, represented pictorially in Figure 2. Once these datasets $\boldsymbol X_q$, with different sample sizes $N_q$, have been simulated, the extended moment matching method proceeds as in the standard method; each dataset updates the distribution of the model parameters and then the posterior variance of the incremental net benefit is calculated, giving $\sigma^2_q$ for $q = 1, \dots, Q$.

However, to then estimate the EVSI across sample sizes, we develop an alternative strategy to estimate the variance of the posterior mean from the $\sigma^2_q$. This is because we are now interested in estimating $\sigma^2(N)$, i.e. the variance of the expected posterior mean $\mu_{\boldsymbol X}^N$ for samples of size $N$. In a similar manner to Müller and Parmigiani [29], we use regression methods to estimate a function $g(N)$ using the values $\tilde\sigma^2_q = \sigma^2_{\mathrm{INB}} - \sigma^2_q$. The estimated values $\sigma^2_q$ are generated conditional on a specific future dataset $\boldsymbol X_q$ of size $N_q$. This dataset represents one possible future, which is subject to variation due to random sampling and dependence on the specific value of $\boldsymbol\phi_q$. In the standard moment matching method, we marginalise out this dependence on the specific $\boldsymbol\phi_q$ value by taking the mean of the $\sigma^2_q$. However, in this extended setting, we use regression to marginalise out this dependence across the different sample sizes $N_q$. Therefore, we use the values $\tilde\sigma^2_q$, obtained by simulation, to estimate the function $g$ as

$$\tilde\sigma^2_q = g(N_q) + \varepsilon_q,$$

where $\varepsilon_q$ is an error term that captures the variation in $\tilde\sigma^2_q$ due to using simulated datasets with sample size $N_q$.

4.1 A Bayesian non-linear model

To estimate $g$, we define a Bayesian non-linear model which allows us to encode our knowledge about the underlying behaviour of $\sigma^2(N)$. It also allows us to propagate uncertainty from our estimation procedure into the EVSI estimate, so researchers can assess the accuracy of the moment matching procedure.

Our model is defined by assuming a functional form for the regression function,

$$g(N) = \sigma^2_{\boldsymbol\phi}\,\frac{N}{N + \beta},$$

where $\beta > 0$ is a parameter to be estimated, and by modelling the residual error as a normal distribution centred on 0, $\varepsilon_q \sim \mathrm{Normal}(0, \sigma^2_\varepsilon)$ for $q = 1, \dots, Q$. This functional form for $g$ reflects the underlying knowledge about $\sigma^2(N)$. Specifically, we know that $\sigma^2(N)$ increases as $N$ increases, and $g$ is an increasing function of $N$. In addition to this, a study with an infinite sample size would be equivalent to gaining perfect information about the model parameters $\boldsymbol\phi$. This means that $\mu_{\boldsymbol X}^N$ tends to $\mathrm{INB}_{\boldsymbol\phi}$ as $N \to \infty$, so for an infinite sample size we have $\sigma^2(\infty) = \sigma^2_{\boldsymbol\phi}$, which is matched by $g(N) \to \sigma^2_{\boldsymbol\phi}$. Finally, this functional form for $g$ is the exact relationship between $\sigma^2(N)$ and $N$ in normal-normal conjugate settings with only one underlying model parameter. So, while realistic health economic models are not based on normal-normal conjugate models and have multiple model parameters, this function is approximately correct in many settings.

To define the full Bayesian model, priors must be defined for $\beta$, the regression parameter, and $\sigma^2_\varepsilon$, the residual variance. Throughout, we have used a non-central half-Cauchy prior for $\sigma_\varepsilon$, as suggested by Gelman [15], with parameters dependent on the data. This is because the scale of the $\tilde\sigma^2_q$ changes significantly across health economic models. Therefore, the prior location is set to the standard deviation of the $\tilde\sigma^2_q$ divided by 2 and the prior scale to the standard deviation of the $\tilde\sigma^2_q$. Finally, a data-dependent normal prior, truncated at 0, is set for $\beta$, with the prior mean and variance also chosen on the scale of the data, giving vague priors for both $\beta$ and $\sigma_\varepsilon$.
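A minimal JAGS sketch of this model, continuing the toy example of §3 and written in the write.model() style of Appendix D, is given below. The half-Cauchy prior is coded as a truncated t distribution with 1 degree of freedom; the hyperparameters for $\beta$ are our own illustrative placeholders, as the text only requires them to be data dependent and vague.

library(R2OpenBUGS)  # write.model()
library(R2jags)

# Toy data: one dataset per sample size; the jitter mimics the Monte
# Carlo error that the residual term epsilon_q captures
N.q <- trunc(seq(sqrt(5), sqrt(200), length.out = 50)^2)
sigma.q <- sapply(N.q, function(n) 1 / (1 / s0^2 + n / s^2))
sigma.q <- sigma.q * exp(rnorm(length(N.q), 0, 0.05))
y <- var(INB.sims) - sigma.q   # observations sigma2_INB - sigma2_q

nonlinear.model <- function(){
  for (q in 1:Q) {
    y[q] ~ dnorm(g[q], tau)
    g[q] <- var.phi * N[q] / (N[q] + beta)  # regression function g(N)
  }
  beta ~ dnorm(beta.mean, beta.prec)%_%T(0,)    # truncated normal prior
  sigma.eps ~ dt(eps.loc, eps.prec, 1)%_%T(0,)  # half-Cauchy prior
  tau <- pow(sigma.eps, -2)
}
write.model(nonlinear.model, "nonlinear.txt")

nl.data <- list(Q = length(N.q), N = N.q, y = y, var.phi = var(INB.phi),
                eps.loc = sd(y) / 2, eps.prec = 1 / sd(y),  # per the text
                beta.mean = max(N.q) / 2,    # illustrative vague choice
                beta.prec = 1 / max(N.q)^2)  # illustrative vague choice
fit <- jags(data = nl.data, parameters.to.save = "beta",
            model.file = "nonlinear.txt", n.chains = 3,
            n.iter = 20000, n.burnin = 5000, n.thin = 3, DIC = FALSE)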

4.2 Calculating the EVSI for different sample sizes

To calculate the EVSI, we fit this non-linear model using the estimated posterior variances from the health economic model, $\sigma^2_q$ for $q = 1, \dots, Q$. Fitting this model requires Markov Chain Monte Carlo (MCMC) methods to estimate the posterior for $\beta$ by simulation. The posterior distribution for $\sigma^2(N)$, irrespective of whether $N$ is in $\{N_1, \dots, N_Q\}$, can then be estimated by calculating

$$\sigma^2(N) = \sigma^2_{\boldsymbol\phi}\,\frac{N}{N + \beta}$$

for each posterior simulation of $\beta$. In theory, each simulation from the posterior distribution of $\sigma^2(N)$ can be used to rescale $\mathrm{INB}_{\boldsymbol\phi}$, equation (3), and estimate the EVSI using equation (4).

However, this process is computationally intensive if there is a large number of posterior simulations for $\beta$. Therefore, it is more efficient to calculate the EVSI for a small number of samples from the posterior of $\sigma^2(N)$. We advise that the posterior of $\sigma^2(N)$ is summarised by finding credible intervals for a small number of credible levels. The EVSI is then calculated for each estimated variance in this low-dimensional summary. For example, in §5 the posteriors for $\sigma^2(N)$ are summarised using the median and the 75% and 95% credible intervals. Note that the values calculated using this method are not posterior credible intervals for the EVSI, as the relationship between $\sigma^2(N)$ and the EVSI is highly non-linear. Code to calculate the EVSI using this method is provided in the supplementary material.
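Continuing the sketch above, the low-dimensional summary maps monotonically to quantiles of $\beta$ (large $\beta$ corresponds to small $\sigma^2(N)$), so one EVSI curve per summary level can be computed as follows:

beta.post <- fit$BUGSoutput$sims.list$beta
mu <- mean(INB.sims)
# Quantiles of beta giving the posterior median and the limits of the
# 75% and 95% intervals of sigma^2(N); the ordering is inverted because
# sigma^2(N) decreases as beta increases
beta.sum <- quantile(beta.post, c(0.5, 0.875, 0.125, 0.975, 0.025))
evsi.curves <- sapply(5:200, function(n) {
  s2 <- var(INB.phi) * n / (n + beta.sum)  # sigma^2(N), one per summary
  sapply(s2, function(v) {                 # equations (3) and (4)
    eta <- sqrt(v / var(INB.phi)) * (INB.phi - mu) + mu
    mean(pmax(eta, 0)) - max(mu, 0)
  })
})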

Notation Definition

$p(\cdot)$: The distribution of a random variable.
$\boldsymbol\theta$: The underlying model parameters for the health economic model.
$\boldsymbol\phi$: The model parameters that are informed by the trial under consideration.
$S$: The number of PSA simulations taken from $p(\boldsymbol\theta)$.
$\boldsymbol X$: The potential data arising from the future study.
$\mathrm{INB}(\boldsymbol\theta)$: The incremental net benefit, where uncertainty is induced by the model parameters.
$\mathrm{INB}_{\boldsymbol\phi}$: The conditional expectation of the incremental net benefit conditional on $\boldsymbol\phi$.
$\mu$: The expected value of INB across all values of $\boldsymbol\theta$.
$\sigma^2_{\mathrm{INB}}$: The variance of $\mathrm{INB}(\boldsymbol\theta)$.
$\sigma^2_{\boldsymbol\phi}$: The variance of $\mathrm{INB}_{\boldsymbol\phi}$.
$\mu_{\boldsymbol X}$: The posterior expectation of INB conditional on the potential datasets.
$\mu_{\boldsymbol X}^N$: The posterior expectation of the incremental net benefit across potential future datasets with a given sample size of $N$.
$\sigma^2$: The variance of $\mu_{\boldsymbol X}$.
$\sigma^2(N)$: The variance of $\mu_{\boldsymbol X}^N$ for potential datasets of a given sample size $N$.
$Q$: The number of nested simulations taken to estimate the EVSI using moment matching.
$\boldsymbol\phi_q$: The $q$-th row of a matrix that contains the quantiles of the PSA simulations for $\boldsymbol\phi$, for $q = 1, \dots, Q$.
$N_q$: The $q$-th element in a vector of sample sizes between the minimum and maximum sample sizes, for $q = 1, \dots, Q$.
$\boldsymbol X_q$: The $q$-th simulated potential dataset, for $q = 1, \dots, Q$. Datasets are simulated conditional on $\boldsymbol\phi_q$ only in the standard method and on the pair $(\boldsymbol\phi_q, N_q)$ in the extension.
$\sigma^2_q$: The posterior variance of the incremental net benefit conditional on the dataset $\boldsymbol X_q$, for $q = 1, \dots, Q$.
$\eta_s$: The simulated values of $\mathrm{INB}_{\boldsymbol\phi}$ that have been rescaled to approximate simulations from the distribution of $\mu_{\boldsymbol X}$. The subscript defines the $s$-th simulated value, for $s = 1, \dots, S$.
$\sigma^2_\varepsilon$: The residual variance for the Bayesian non-linear regression used to estimate $\sigma^2(N)$.
Table 1: A list of the key notation used to demonstrate the moment matching method and the extension using non-linear regression.

4.3 Practical Considerations

The first practical consideration is that the function $g(N)$ changes quickly for small sample sizes, meaning that smaller sample sizes give more information about $g$. Therefore, we suggest that more future samples are generated for small sample sizes. Practically, this is achieved by choosing $N_q$ values that are evenly spaced on the square-root scale between $\sqrt{N_{\min}}$ and $\sqrt{N_{\max}}$. This has a computational advantage, as Bayesian updating typically requires more computing time to estimate the posterior distribution conditional on larger datasets. Therefore, choosing the $N_q$ in this manner improves fit and reduces computation time.

Next, this non-linear regression method requires that $N_q$ and $\boldsymbol\phi_q$ are uncorrelated. This is because there is normally a relationship between $\sigma^2_q$ and $\boldsymbol\phi_q$, through the simulated dataset $\boldsymbol X_q$. If $N_q$ and $\boldsymbol\phi_q$ are also correlated, then this can mask or inflate the relationship between $\sigma^2_q$ and $N_q$. Thus far, $\boldsymbol\phi_q$ has been defined as a row of the sample quantiles for each element of $\boldsymbol\phi$, and $N_q$ as increasing values between $N_{\min}$ and $N_{\max}$. This specification naturally induces a correlation between $N_q$ and $\boldsymbol\phi_q$, which can lead to biased results. Therefore, it is suggested that each column of the quantile matrix is randomly reordered separately before generating the $\boldsymbol X_q$. Reordering each column separately also improves accuracy by inducing greater variation in the samples. The simulation method for $\boldsymbol X_q$ is therefore represented in Figure 2, where the quantiles for $\boldsymbol\phi$ are reordered separately for each column and then combined with ordered sample sizes, spaced on the square-root scale, to generate one potential sample for each sample size; a short sketch of both steps follows Figure 2.

Figure 2: A pictorial representation of the sampling procedure for the future samples in the extended moment matching method. As in Figure 1, darker lines represent higher values for the parameters.
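Both practical steps appear in the Appendix D code; in brief, and continuing the earlier sketches (phi1 and phi2 are the toy PSA simulations from §3.2):

Q <- 50; N.min <- 5; N.max <- 200
# Sample sizes evenly spaced on the square-root scale
N.q <- trunc(seq(sqrt(N.min), sqrt(N.max), length.out = Q)^2)
# Quantile matrix for the parameters of interest, as in §3.3
phi.q <- cbind(quantile(phi1, (1:Q) / (Q + 1)),
               quantile(phi2, (1:Q) / (Q + 1)))
# Permute each column separately to break the correlation with N.q;
# the Appendix D code additionally re-draws until |cor| < 0.001
phi.q <- apply(phi.q, 2, sample)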

Finally, as this EVSI estimation method is based on regression, its accuracy can be checked using standard model checking procedures such as residual or Q-Q plots. Specifically, the residuals should exhibit no clear unmodelled structure, as in standard regression modelling.

5 Implementation of the Extended Moment Matching Method

This section implements our extended moment matching method to calculate the EVSI across different sample sizes. The supplementary material demonstrates that our method is in line with other EVSI calculation methods [27, 35] using a hypothetical health economic model developed in [6]. To demonstrate the power of the extended moment matching method, we determine the optimal trial design for a practical health-economic model that evaluates different treatments for chronic pain [36]. In this analysis, we compare our method with the nested Monte Carlo simulation method to demonstrate accuracy and compare computation time.

5.1 A Health Economic Model for Chronic Pain

Sullivan et al. [36] developed a cost-effectiveness model for evaluating treatment options for chronic pain. The model is based on a Markov structure with 10 states, where each state has an associated utility score and cost. The disease progression for chronic pain is as follows: the treatment is administered and a patient can either experience adverse effects of the treatment (AE) or not. The patient can then withdraw from treatment, either due to the AE or otherwise. They can then continue to another treatment option or withdraw completely from treatment. After the second line of treatment, they either have subsequent treatment or discontinue (these are absorbing states).

In the standard model, a patient can be offered either morphine or an innovative treatment in the first line of treatment. If they withdraw and receive a second treatment, they are offered oxycodone, meaning the only difference between the two treatment arms occurs when the first treatment is administered. The innovative treatment is more effective and reduces the probability of AE but is more expensive.

The model uses a willingness-to-pay threshold at the level below which treatments are likely to receive a recommendation in the UK [30]. For a more in-depth presentation of all the model parameters, see [36]. The PSA is based on gamma distributions for costs and beta distributions for probabilities and utilities. The parameters of these distributions are chosen such that the mean is informed by a literature review and the standard deviation is taken as 10% of the underlying mean estimate. For illustrative purposes, the per-person lifetime EVSI is considered throughout this section. This assumes a discount factor of 0.03 over 15 years of the treatment being available.

5.2 Analysis for the Chronic Pain Example

A full VoI analysis, as set out by Tuffaha et al. [37], is undertaken for the chronic pain model. To begin, 100 000 simulations are taken from the prior distribution of the model parameters. These are then fed through the Markov model to give 100 000 simulations for the incremental net benefit. Calculating the EVPPI [8, 35, 19] based on these simulations indicated that the most valuable parameters are those relating to the utility of the different health states in the first line of treatment: specifically, the utility of not having any AE on treatment and the utility of withdrawing from treatment without experiencing AE.

In response to this, we designed an experiment to learn about the utility of these two health states, which, if known with certainty, would account for about 79% of the uncertainty in the decision. Questionnaires are a standard method for determining QALYs and therefore we designed a trial in which questionnaires are sent to participants. We assume, in a simplistic manner, that a participant is sent the questionnaire if they withdrew from the first treatment without any AE. They are then questioned about both health states, which they can accurately recall. This means that each questionnaire directly updates information about both these key utility parameters.

Responses from these questionnaires will be translated into a utility score for each health state. These utility scores are modelled as independent beta distributions with their means conditional on the utilities of interest, i.e. the utility of not having AE and of withdrawing without AE, and standard deviations equal to 0.3 and 0.31 respectively [23]. We consider sample sizes between $N_{\min}$ and $N_{\max}$. It is then assumed that only a proportion of the questionnaires are returned, leading to missingness in the data.

Within this context, two different designs are considered, based on the results of a trial investigating whether financial incentives improve the rate of return of questionnaires in clinical trials [14]; we assume that the questionnaire could be accompanied by a small monetary incentive to complete. This incentive study demonstrated that sending the financial reward improved the response rate from 68.7% to 75.7% while increasing the cost per questionnaire. The difference in cost is not equal to the incentive alone because non-respondents are chased up by telephone, which costs staff time and other resources, and sending the reward reduces the number of chase-up calls needed.
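To illustrate the sampling distribution just described, one future questionnaire study can be simulated as below (our own sketch; the function name and the utility values passed in the final line are hypothetical, and the beta parameters are matched to the stated means and standard deviations):

# Simulate one future questionnaire study: n questionnaires sent, each
# returned with probability rate; returned responses are beta-distributed
# around the true utilities with the stated standard deviations
sim.questionnaires <- function(n, u1, u2, rate, s1 = 0.3, s2 = 0.31) {
  beta.pars <- function(m, s) {   # match Beta(a, b) to a mean and sd;
    ab <- m * (1 - m) / s^2 - 1   # requires s^2 < m * (1 - m)
    c(a = m * ab, b = (1 - m) * ab)
  }
  returned <- rbinom(1, n, rate)  # questionnaires actually returned
  p1 <- beta.pars(u1, s1); p2 <- beta.pars(u2, s2)
  list(y1 = rbeta(returned, p1["a"], p1["b"]),
       y2 = rbeta(returned, p2["a"], p2["b"]))
}
X <- sim.questionnaires(n = 100, u1 = 0.7, u2 = 0.6, rate = 0.687)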

Therefore, the EVSI will be used to answer two key questions regarding the design of this study: firstly, should the incentive be used, and secondly, how many questionnaires should be sent? The EVSI is estimated for both response rates using $Q = 50$ future samples, with 10 000 posterior simulations fed through the Markov model and used to estimate $\sigma^2_q$ for each $N_q$. The EVSI results using the moment matching method are compared with nested Monte Carlo simulations for sample sizes 10, 25, 50, 100 and 150. These nested MC estimators were calculated using 100 000 PSA simulations to generate the future samples and then 100 000 posterior simulations per PSA simulation. So, over 1 billion simulations were used for each sample size to estimate the EVSI using nested Monte Carlo simulations, as opposed to 500 000 simulations for our novel method.

5.3 Results for the Chronic Pain Model

Figure 3 displays the net economic value for both designs; the grey curves are the economic value of the study with no incentive and the black curves are the value of the incentive study, estimated using the moment matching method. The economic value of a study is the difference between the EVSI and its cost, taken from [14]. Clearly, the no-incentive study is economically more valuable. This is because the reduced missingness in the questionnaire responses is not valuable enough to warrant the cost of the incentive. Note, however, that the dominance of the no-incentive study is uncertain for very small sample sizes as the EVSI "distributions" overlap.

Figure 3 also shows the nested Monte Carlo estimates for the economic study value; the black crosses give the value for the no incentive study and the grey crosses for the incentive study. Using these estimates, we demonstrate that the moment matching method with non-linear regression is in line with the nested Monte Carlo simulation. This means that, in this example, our novel EVSI calculation method is accurate, despite the non-linear and non-Gaussian model structure of the health economic model, and significantly reduces the required number of simulations for this analysis.

Figure 3: The economic value (difference between EVSI and the cost of undertaking the study) of the two alternative questionnaire collection strategies for different sample sizes in the chronic pain example. The grey curves indicate the potential value of the study without incentives and the black curves the value of the study with incentives estimated using the moment matching method. The black crosses represent the no-incentive Monte Carlo simulation estimates and the grey crosses represent the same estimates for the incentive study.

Analysing the economic value for different sample sizes determines that the optimal sample size in terms of economic benefit is 16 for the incentive study and 27 for the no incentive study. Therefore, overall, the optimal study is a no incentive study with 27 participants; the information gained from any additional patients is not worth the cost of sending the questionnaire and chasing up the non-respondents.

The total computational time to calculate the EVSI for both studies using our novel calculation method is 331 seconds, or around five and a half minutes, on a standard desktop computer with an Intel i7 processor using R version 3.2.1. This includes fitting the regression model to calculate the EVSI across different sample sizes for both study designs, which takes around 5 seconds per study. On the other hand, the nested Monte Carlo simulations required between 7 and 68 days for each EVSI estimate, with a total computation time of around 311 days for all 10 estimates.

6 Discussion

The EVSI has frequently been touted as a method for study design, including determining the optimal study size. However, its application in this area has been limited by the computational time required to obtain EVSI estimates across different designs. Several approximation methods have reduced the computational time per EVSI estimate to allow for this type of analysis. However, even with these calculation methods, the computational time increases linearly with the number of sample sizes considered.

Therefore, in this paper we present an extension to the moment matching method for EVSI calculation [21] that performs the analysis over different sample sizes for a minimal fixed additional computational cost. This implies that the cost of estimating the EVSI for a single design is approximately equal to that of evaluating the EVSI across different sample sizes. This method augments the moment matching method with Bayesian non-linear regression, estimating the EVSI by calculating the posterior variance of the incremental net benefit for around 50 different sample sizes. In general, the posterior variance is easy to calculate provided a sampling distribution can be defined for the data and a Bayesian model can be specified for the model parameters.

The augmented moment matching method determined the optimal study design to update the information underpinning a real-life health economic model that evaluates interventions for the treatment of chronic pain. We also demonstrated that our novel method accurately estimates the EVSI by comparing with nested Monte Carlo simulations.

This method allows researchers to determine the optimal sample size for their trials using the EVSI. The minimal computational time also allows them to compare a number of other design features. This method is implemented in the R package EVSI [16], making it relatively simple to apply to general health economic models. The EVSI package also extends the methodology to calculate the EVSI across different net benefit functions at a minimal extra cost. Finally, the package includes a suite of graphics and an online tool so EVSI results can be presented to key stakeholders [18].

Acknowledgments

Financial support for this study was provided in part by grants from Engineering and Physical Sciences Research Council (EPSRC) [Anna Heath] and Mapi [Dr. Gianluca Baio]. The funding agreement ensured the authors’ independence in designing the study, interpreting the data, writing, and publishing the report. The authors would also like to thank Nick Menzies for providing the simulations for the BK example included in the supplementary material.

References

  • [1] A. Ades, G. Lu, and K. Claxton. Expected Value of Sample Information Calculations in Medical Decision Modeling. Medical Decision Making, 24:207–227, 2004.
  • [2] L. Andronis and P. Barton. Adjusting estimates of the expected value of information for implementation: theoretical framework and practical application. Medical Decision Making, 36(3):296–307, 2016.
  • [3] G. Baio, A. Berardi, and A. Heath. BCEA: Bayesian Cost Effectiveness Analysis, 2016. R package version 2.2-3.
  • [4] G. Baio, A. Berardi, and A. Heath. Bayesian Cost Effectiveness Analysis with the R package BCEA. Springer, 2017.
  • [5] G. Baio and P. Dawid. Probabilistic sensitivity analysis in health economics. Statistical methods in medical research, Sep 2011.
  • [6] A. Brennan and S. Kharroubi. Efficient computation of partial expected value of sample information using bayesian approximation. Journal of health economics, 26(1):122–148, 2007.
  • [7] A. Brennan and S. Kharroubi. Expected value of sample information for weibull survival data. Health economics, 16(11):1205–1225, 2007.
  • [8] A. Brennan, S. Kharroubi, A. O’Hagan, and J. Chilcott. Calculating Partial Expected Value of Perfect Information via Monte Carlo Sampling Algorithms. Medical Decision Making, 27:448–470, 2007.
  • [9] Canadian Agency for Drugs and Technologies in Health. Guidelines for the economic evaluation of health technologies: Canada [3rd Edition]., 2006.
  • [10] S. Chow, J. Shao, H. Wang, and Y. Lokhnygina. Sample size calculations in clinical research. Chapman and Hall/CRC, 2017.
  • [11] K. Claxton, M. Sculpher, C. McCabe, A. Briggs, R. Akehurst, M. Buxton, J. Brazier, and A. O’Hagan. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Economics, 14:339–347, 2005.
  • [12] Department of Health and Ageing. Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee: Version 4.3, 2008.
  • [13] EUnetHTA. Methods for health economic evaluations: A guideline based on current practices in Europe - second draft, 29th September 2014.
  • [14] S. Gates, M. Williams, E. Withers, E. Williamson, S. Mt-Isa, and S. Lamb. Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? the mint incentive study. Trials, 10(1):44, 2009.
  • [15] A. Gelman. Prior distributions for variance parameters in hierarchical models. Bayesian analysis, 1(3):515–534, 2006.
  • [16] A. Heath and G. Baio. EVSI: A suite of functions for the calculation and presentation of the EVSI. GitHub repository, 2017.
  • [17] A. Heath and G. Baio. An efficient calculation method for the expected value of sample information: Can we do it? Yes, we can. Value in Health, 2018.
  • [18] A. Heath, G. Baio, and R. Hunter. Development of a new software tool to compute the expected value of sample information: An application to the homehealth intervention. HESG2018, 2018.
  • [19] A. Heath, I. Manolopoulou, and G. Baio. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested laplace approximation. Statistics in medicine, 2016.
  • [20] A. Heath, I. Manolopoulou, and G. Baio. A review of methods for analysis of the expected value of information. Medical Decision Making, 37(7):747–758, 2017.
  • [21] A. Heath, I. Manolopoulou, and G. Baio. Efficient monte carlo estimation of the expected value of sample information using moment matching. Medical Decision Making, 38(2):163–173, 2018.
  • [22] R. Howard. Information Value Theory. IEEE Transactions on Systems Science and Cybernetics, SSC-2(1):22–26, 1966.
  • [23] R. Ikenberg, N. Hertel, A. Moore, M. Obradovic, G. Baxter, P. Conway, and H. Liedgens. Cost-effectiveness of tapentadol prolonged release compared with oxycodone controlled release in the UK in patients with severe non-malignant chronic pain who failed 1st line treatment with morphine. Journal of Medical Economics, 15(4):724–736, 2012.
  • [24] H. Jalal and F. Alarid-Escudero. A gaussian approximation approach for value of information analysis. Medical Decision Making, 38(2):174–188, 2018.
  • [25] H. Jalal, J. Goldhaber-Fiebert, and K. Kuntz. Computing expected value of partial sample information from probabilistic sensitivity analysis using linear regression metamodeling. Medical Decision Making, 35(5):584–595, 2015.
  • [26] C. McKenna and K. Claxton. Addressing adoption and research design decisions simultaneously: the role of value of sample information analysis. Medical Decision Making, 31(6):853–865, 2011.
  • [27] N. Menzies. An efficient estimator for the expected value of sample information. Medical Decision Making, 36(3):308–320, 2016.
  • [28] A. Miquel-Cases, V. Retèl, W. van Harten, and L. Steuten. Decisions on further research for predictive biomarkers of high-dose alkylating chemotherapy in triple-negative breast cancer: a value of information analysis. Value in health, 19(4):419–430, 2016.
  • [29] P. Müller and G. Parmigiani. Optimal design via curve fitting of monte carlo experiments. Journal of the American Statistical Association, 90(432):1322–1330, 1995.
  • [30] National Institute for Health and Care Excellence. Guide to the methods of technology appraisal 2013, 2013.
  • [31] A. Pandor, P. Thokala, S. Goodacre, E. Poku, J. Stevens, S. Ren, A. Cantrell, G. Perkins, M. Ward, and J. Penn-Ashman. Pre-hospital non-invasive ventilation for acute respiratory failure: a systematic review and cost-effectiveness evaluation. Health Technology Assessment, 19(42):1–102, 2015.
  • [32] H. Raiffa and H. Schlaifer. Applied Statistical Decision Theory. Harvard University Press, Boston, MA, 1961.
  • [33] L. Steuten, G. van de Wetering, K. Groothuis-Oudshoorn, and V. Retèl. A systematic and critical review of the evolving methods and applications of value of information in academia and practice. Pharmacoeconomics, 31(1):25–48, 2013.
  • [34] M. Strong. Partial EVPPI Functions. http://www.shef.ac.uk/polopoly_fs/1.305039!/file/R_functions.txt, 2012.
  • [35] M. Strong, J. Oakley, A. Brennan, and P. Breeze. Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast nonparametric regression-based method. Medical Decision Making, 35(5):570–583, 2015.
  • [36] W. Sullivan, M. Hirst, S. Beard, D. Gladwell, F. Fagnani, J. Bastida, C. Phillips, and W. Dunlop. Economic evaluation in chronic pain: a systematic review and de novo flexible economic model. The European Journal of Health Economics, 17(6):755–770, 2016.
  • [37] H. Tuffaha, L. Gordon, and P. Scuffham. Value of information analysis informing adoption and research decisions in a portfolio of health care interventions. MDM Policy & Practice, 1(1):2381468316642238, 2016.
  • [38] N. Welton, J. Madan, and A. Ades. Are head-to-head trials of biologics needed? the role of value of information methods in arthritis research. Rheumatology, 50(suppl_4):iv19–iv25, 2011.
  • [39] N. Welton, J. Madan, D. Caldwell, T. Peters, and A. Ades. Expected value of sample information for multi-arm cluster randomized trials with binary outcomes. Medical Decision Making, 34(3):352–365, 2014.
  • [40] A. Willan and E. Pinto. The value of information and optimal clinical trial design. Statistics in medicine, 24(12):1791–1806, 2005.

Appendix A Multi-Decision Models

In general, there is little theoretical difference between the moment matching method for multi-decision models and the moment matching method in the dual-decision setting presented in the paper. The method still proceeds by calculating the posterior variance of the incremental net benefit across different simulated potential datasets. These calculated variances are then used to rescale the simulated values of the conditional expectation of the INB. However, as we have more than two decisions, the incremental net benefit is defined in a slightly different manner.

In the multi-decision setting, we choose a reference treatment, say treatment $T$, and then calculate the incremental net benefit of all the treatments with respect to this reference treatment, i.e.

$$\mathrm{INB}_t(\boldsymbol\theta) = \mathrm{NB}_t(\boldsymbol\theta) - \mathrm{NB}_T(\boldsymbol\theta), \quad t = 1, \dots, T - 1.$$

Therefore, the incremental net benefit can be thought of as a multivariate random vector:

$$\mathbf{INB}(\boldsymbol\theta) = \left(\mathrm{INB}_1(\boldsymbol\theta), \dots, \mathrm{INB}_{T-1}(\boldsymbol\theta)\right)^\top. \qquad (5)$$

This means that the variance of $\mathbf{INB}$ is a variance-covariance matrix rather than a scalar value, and so we denote it $\Sigma_{\mathrm{INB}}$. If the EVSI calculation is being performed in R, then this variance-covariance matrix is computed directly by the var() function, so this extension adds no complexity to the moment matching procedure.

To find $\mathbf{INB}_{\boldsymbol\phi}$ using non-parametric regression, a regression curve should be fitted for each incremental net benefit to give a multivariate vector:

$$\mathbf{INB}_{\boldsymbol\phi} = \left(\mathbb{E}[\mathrm{INB}_1 \mid \boldsymbol\phi], \dots, \mathbb{E}[\mathrm{INB}_{T-1} \mid \boldsymbol\phi]\right)^\top, \qquad (6)$$

with a variance-covariance matrix denoted $\Sigma_{\boldsymbol\phi}$. The standard moment matching method would proceed as normal, i.e. for each potential sample $\boldsymbol X_q$ we would calculate the variance of $\mathbf{INB}$; the only difference in the multi-decision setting is that this variance is a variance-covariance matrix, which we denote $\Sigma_q$.

In the multi-decision setting, as before, we are aiming to estimate the distribution of $\boldsymbol\mu_{\boldsymbol X}$. However, this is now a multivariate distribution with a variance-covariance matrix, denoted $\Sigma$. Each element of this matrix, $\Sigma^{(i,j)}$ for $i = 1, \dots, T-1$ and $j = 1, \dots, T-1$, is calculated in a similar manner to the dual-decision setting,

$$\Sigma^{(i,j)} = \Sigma_{\mathrm{INB}}^{(i,j)} - \frac{1}{Q}\sum_{q=1}^{Q}\Sigma_q^{(i,j)},$$

where $\Sigma_{\mathrm{INB}}^{(i,j)}$ is the element in the $i$-th row and $j$-th column of the matrix $\Sigma_{\mathrm{INB}}$ and $\Sigma_q^{(i,j)}$ is the same element of the matrices $\Sigma_q$. This is, therefore, the same formula as in the dual-decision setting, applied separately to each element of the variance-covariance matrix.

To rescale the distribution of $\mathbf{INB}_{\boldsymbol\phi}$, we use the same formula as in the standard method but, rather than dividing by the standard deviation, we multiply by the inverse matrix square root. Matrix square roots and inverses are well defined and can easily be found in R using the expm package. Therefore, to rescale the simulated PSA vectors for $\mathbf{INB}_{\boldsymbol\phi}$, denoted $\mathbf{INB}_{\boldsymbol\phi}^s$ for $s = 1, \dots, S$, we use the following formula:

$$\boldsymbol\eta_s = \Sigma^{1/2}\,\Sigma_{\boldsymbol\phi}^{-1/2}\left(\mathbf{INB}_{\boldsymbol\phi}^s - \boldsymbol\mu\right) + \boldsymbol\mu.$$

The EVSI is then calculated by taking the row-wise maximum of each of the $\boldsymbol\eta_s$ vectors and 0 and then taking the mean of these maxima.
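A sketch of this rescaling using the expm package is given below, with toy inputs standing in for the estimated matrices; here Sigma plays the role of $\Sigma$, the variance-covariance matrix of $\boldsymbol\mu_{\boldsymbol X}$:

library(MASS)  # mvrnorm()
library(expm)  # sqrtm()
set.seed(1)
S <- 10000
Sigma.phi <- matrix(c(4, 1, 1, 2), 2)               # toy Var(INB_phi)
INB.phi.mat <- mvrnorm(S, c(0.3, -0.2), Sigma.phi)  # S x (T-1) simulations
Sigma <- 0.6 * Sigma.phi                            # toy target variance
mu <- colMeans(INB.phi.mat)
# eta_s = Sigma^{1/2} Sigma_phi^{-1/2} (INB_phi^s - mu) + mu, row by row
A <- sqrtm(Sigma) %*% solve(sqrtm(var(INB.phi.mat)))
eta <- sweep(sweep(INB.phi.mat, 2, mu) %*% t(A), 2, mu, "+")
# Mean of the row-wise maximum over the rescaled INBs and 0, minus the
# value of the best option under current information
EVSI.mv <- mean(pmax(apply(eta, 1, max), 0)) - max(c(mu, 0))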

Using non-linear regression to calculate the EVSI in multi-decision problems involves an extension to the standard method. The non-linear model defined in the main paper is a scalar function, and we must estimate a variance-covariance matrix across different sample sizes. Therefore, we must extend this regression model to a matrix function. In practice, we suggest that the non-linear regression is extended by fitting the non-linear model from the main paper separately for each unique element of the variance-covariance matrix. Essentially, this involves estimating the element in the $i$-th row and $j$-th column of the variance matrix for a sample size $N$, denoted $\Sigma(N)^{(i,j)}$, as

$$\Sigma(N)^{(i,j)} = \Sigma_{\boldsymbol\phi}^{(i,j)}\,\frac{N}{N + \beta^{(i,j)}},$$

where $\Sigma_{\boldsymbol\phi}^{(i,j)}$ is the $i$-th, $j$-th element of the matrix $\Sigma_{\boldsymbol\phi}$. It is possible to demonstrate, in a decision model with three treatment options, that these functions approximately estimate the variance of both incremental net benefits and their covariance in normal-normal conjugate settings.

As the covariance matrix is symmetric, we fit $T(T-1)/2$ regression models separately to calculate the EVSI across different sample sizes, where $T$ is the number of possible interventions. Each curve produces a posterior for its parameter $\beta^{(i,j)}$, and these distributions can be combined to find the posterior distribution of the variance-covariance matrix $\Sigma(N)$ for each sample size under consideration.

Finally, recall that in the main paper the distribution of the variance of $\mu_{\boldsymbol X}^N$ was approximated by a low-dimensional summary of the posterior distribution of $\sigma^2(N)$. Clearly, this is more challenging when multiple curves are being fitted. Nonetheless, we suggest that each posterior is summarised by a low-dimensional summary and these summaries are then used to create a small number of variance-covariance matrices. These matrices are then used to rescale $\mathbf{INB}_{\boldsymbol\phi}$ using the formula above, and the EVSI is finally calculated with the same method as in the standard moment matching method. This does not give credible intervals for the EVSI, but it is a computationally efficient way to calculate a low-dimensional summary of the possible EVSI values.

Appendix B Calculating the EVSI using Moment Matching for the Brennan and Kharroubi Example

This model has frequently been used to assess calculation methods for the EVSI. It was first developed by Brennan and Kharroubi [6] and modified by Menzies [27] to compare two treatments used to treat a hypothetical disease. For each drug, a patient can respond to the treatment, experience side effects or visit hospital for a certain length of time. A utility value is assigned to each of these possible outcomes and costs are associated with the drugs and hospital stays.

All the parameters are assumed to be normally distributed with the means and standard deviations given in Table 2. The data arising from the studies are also assumed to be normally distributed, with the standard deviations given in Table 2. In this example, it is assumed that $\theta_5$, $\theta_7$, $\theta_{14}$ and $\theta_{16}$ are correlated with correlation coefficient 0.6, and the parameters $\theta_6$ and $\theta_{15}$ are also correlated with correlation coefficient 0.6 and independent of the other set of parameters.

Parameter | Mean | Standard Deviation (SD) | Data SD
Drug Cost ($\theta_1$, $\theta_{11}$) | 10 000, 15 000 | 10, 10 | -, -
Probability of Hospitalisation ($\theta_2$, $\theta_{12}$) | 0.1, 0.08 | 0.02, 0.02 | -, -
Days in Hospital ($\theta_3$, $\theta_{13}$) | 5.2, 6.1 | 1, 1 | -, -
Hospital Cost per Day ($\theta_4$) | 4000 | 2000 | -
Probability of Responding ($\theta_5$, $\theta_{14}$) | 0.7, 0.8 | 0.1, 0.1 | 0.2, 0.2
Utility Change due to Response ($\theta_6$, $\theta_{15}$) | 0.3, 0.3 | 0.1, 0.05 | 0.2, 0.2
Duration of Response (years) ($\theta_7$, $\theta_{16}$) | 3, 3 | 0.5, 1 | 1, 2
Probability of Side Effects ($\theta_8$, $\theta_{17}$) | 0.25, 0.2 | 0.1, 0.05 | -, -
Utility Change due to Side Effects ($\theta_9$, $\theta_{18}$) | -0.1, -0.1 | 0.02, 0.02 | -, -
Duration of Side Effects (years) ($\theta_{10}$, $\theta_{19}$) | 0.5, 0.5 | 0.2, 0.2 | -, -
Table 2: The parameters for the Brennan and Kharroubi example, with each cell giving the values for treatment 1 and treatment 2 in turn. The means and standard deviations of the parameter distributions are given, along with the standard deviation of the data collection exercise aimed at reducing uncertainty in that parameter, where applicable.

The net benefits for each treatment are calculated as a deterministic function of these parameters, $\mathrm{NB}_t = k\,e_t - c_t$, where $e_t$ and $c_t$ are the effects and costs of treatment $t$, with a willingness-to-pay of $k = 100\,000$. Five alternative data collection exercises are proposed by Menzies and are also considered in this exploration:

  1. A clinical trial collecting information on the probability that a patient responds to the two treatment options, which informs parameters $\theta_5$ and $\theta_{14}$.

  2. A study looking at the utility improvement for responding to the different treatments, which informs parameters $\theta_6$ and $\theta_{15}$.

  3. A study investigating the duration of response to the therapy (for those who do respond), informing parameters $\theta_7$ and $\theta_{16}$.

  4. A study combining the first two studies, i.e. informing $\theta_5$, $\theta_6$, $\theta_{14}$ and $\theta_{15}$.

  5. A study combining all the previous studies and therefore informing $\theta_5$, $\theta_6$, $\theta_7$, $\theta_{14}$, $\theta_{15}$ and $\theta_{16}$.

B.1 Analysis for the BK example

To estimate the EVSI for different sample sizes using the moment matching method, the PSA distribution for the incremental net benefit is estimated using 1 million simulations from the parameter distributions. This implies that $\sigma^2_{\mathrm{INB}}$ and $\mu$, the variance and mean of the incremental net benefit respectively, are estimated using this full sample. The fitted values $\mathrm{INB}_{\boldsymbol\phi}^s$ are also found using these 1 million simulations, except for exercise 5, which is based on 6 underlying parameters, meaning that the computational demand of estimating $\mathrm{INB}_{\boldsymbol\phi}$ with the full sample was too high. These fitted values are, therefore, based on 20 000 simulations and obtained using the R package BCEA [3, 4].

In line with Menzies [27], sample sizes between 5 and 200 are considered for each of the different exercises outlined above. Throughout the analysis, we set $Q = 50$, which implies that 10 000 simulations are taken from 50 different posterior distributions to calculate the variance of the posterior incremental net benefit for 50 different sample sizes. The distribution for the EVSI is then determined using the method described in §4.2 of the main paper.

The results determined using our method are compared with the nested Monte Carlo approach for calculating the EVSI and with Menzies' approach, which also reweights the PSA simulations for the INB but with an alternative method. These results are taken directly from Menzies [27] and are the most accurate estimates available. The conventional approach required 1 billion model evaluations per sample size, compared with 500 000 model evaluations for the moment matching method to estimate the EVSI across all the different sample sizes.

B.2 Results for the BK example

Figure 4 shows the EVSI estimates for the BK example. The solid line gives the EVSI calculated with the median of the posterior distribution of $\sigma^2(N)$, whereas the dashed line is the 75% credible interval and the dotted line the 95% credible interval. The EVSI estimates from the nested Monte Carlo estimator and the Menzies estimator are given by the red dots and the blue crosses respectively. The nested Monte Carlo estimator (representative of the "true" EVSI) is within the 95% credible interval for all exercises except exercise 5 (bottom), where the EVSI is slightly overestimated for small values of $N$. This small overestimation may be due to inaccuracies introduced by estimating $\mathrm{INB}_{\boldsymbol\phi}$ using only 20 000 simulations, as opposed to the full PSA sample used for the other exercises.

Figure 4: The EVSI estimate calculated with the posterior median of $\sigma^2(N)$ (solid line) along with 75% (dashed line) and 95% (dotted line) posterior credible intervals for the BK example; Study 1 (Top Left), Study 2 (Top Right), Study 3 (Middle Left), Study 4 (Middle Right) and Study 5 (Bottom). These are compared with the nested Monte Carlo estimates (red dots) and the Menzies estimates (blue crosses). The EVPPI for the parameters targeted by the study is shown as the horizontal red line.

Figure 4 demonstrates that the EVSI is estimated with more relative precision as the EVSI estimate increases. This is because, for small values of the EVSI, the difference between $\sigma^2_{\mathrm{INB}}$ and the expected posterior variance is small, and the estimates of the two variances need to be very accurate in order to estimate their difference. Therefore, this method for EVSI calculation should be reserved for situations where the underlying parameters have significant value. If the EVSI estimate is too variable to aid decision making, as seen from the credible bands, then more simulation should be undertaken. In general, extra simulations should be gained by increasing $Q$, provided the number of posterior simulations is sufficient to characterise the distribution of the "posterior" incremental net benefit [17].

Appendix C Non-Linear Regression Code

The function calc.evsi.N calculates the EVSI across sample sizes using the non-linear regression method presented in this paper; the full version, available in the EVSI package [16], also allows users to plot the credible bands for the EVSI, as suggested in the paper, when plot=TRUE.
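A minimal sketch of calc.evsi.N, matching the interface of the call in Appendix D, is given below. For brevity it fits $g(N)$ by non-linear least squares with nls(), giving a point estimate of $\beta$ rather than the full Bayesian posterior of §4.1, so it returns a single EVSI curve without credible bands and omits the plotting option.

calc.evsi.N <- function(INB.phi, INB.theta, sigma.q, N.q, N.eval = N.q) {
  mu <- mean(INB.theta)          # expected INB under current evidence
  var.phi <- var(INB.phi)        # sigma^2_phi
  y <- var(INB.theta) - sigma.q  # preposterior variance estimates
  # Fit g(N) = var.phi * N / (N + beta) by non-linear least squares
  fit <- nls(y ~ var.phi * N.q / (N.q + beta),
             start = list(beta = median(N.q)))
  beta.hat <- coef(fit)[["beta"]]
  # One EVSI value per requested sample size, via equations (3) and (4)
  sapply(N.eval, function(n) {
    s2 <- var.phi * n / (n + beta.hat)
    eta <- sqrt(s2 / var.phi) * (INB.phi - mu) + mu
    mean(pmax(eta, 0)) - max(mu, 0)
  })
}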

Appendix D Code for the Ades et al. Example

To give an example of the variance calculation, R code is given below to calculate the vector of posterior variances $\sigma^2_q$ for Exercise 1 in the Ades et al. example. The final line uses the function above to calculate the EVSI using the estimated values. The total computation time for this code is around 7 minutes.

##Ades et al. Model
library(R2OpenBUGS)
library(R2jags)
library(mgcv)
library(MASS)
Adesetal.model<-function(){
  for(i in 1:N){
    X[,i]~dmnorm(theta.5.cor[1:2],Tau.5.1[1:2,1:2])
  }

  theta.1~dnorm(10000,1/10^2)
  theta.11~dnorm(15000,1/10^2)
  theta.2~dnorm(0.1,1/.02^2)
  theta.12~dnorm(.08,1/.02^2)
  theta.3~dnorm(5.2,1/1^2)
  theta.13~dnorm(6.1,1/1^2)
  theta.4~dnorm(4000,1/2000^2)
  theta.8~dnorm(.25,1/.1^2)
  theta.17~dnorm(.20,1/.05^2)
  theta.9~dnorm(-.1,1/0.02^2)
  theta.18~dnorm(-.1,1/0.02^2)
  theta.10~dnorm(.5,1/0.2^2)
  theta.19~dnorm(.5,1/0.2^2)

  #To give correlated parameters, use multivariate normal dist
  #theta.5,theta.14,theta.7,theta.16
  theta.5.cor[1:4]~dmnorm(mu[],Tau.5[,])
  #theta.6,theta.15
  theta.6.cor[1:2]~dmnorm(mu.6[],Tau.6[,])

  #Health Economic Model - effects and costs
  e[1]<-(theta.5.cor[1]*theta.6.cor[1]*theta.5.cor[3])+(theta.8*theta.9*theta.10)
  e[2]<-(theta.5.cor[2]*theta.6.cor[2]*theta.5.cor[4])+(theta.17*theta.18*theta.19)

  c[2]<-(theta.11+theta.12*theta.13*theta.4)
  c[1]<-(theta.1+theta.2*theta.3*theta.4)

  #Net Benefit with willingness to pay of 100000
  NB[1]<-100000*e[1]-c[1]
  NB[2]<-100000*e[2]-c[2]
}

#Write Model File
filein.model <- "Adesetal.txt"
write.model(Adesetal.model,filein.model)

#Precision Matrices
sig.5<-0.1
sig.14<-0.1
sig.7<-0.5
sig.16<-1
rho<-0.6

S.5<-matrix(c(sig.5^2,rho*sig.14*sig.5,rho*sig.5*sig.7,rho*sig.5*sig.16,
              rho*sig.5*sig.14,sig.14^2,rho*sig.14*sig.7,rho*sig.14*sig.16,
              rho*sig.5*sig.7,sig.14*rho*sig.7,sig.7^2,rho*sig.16*sig.7,
              rho*sig.5*sig.16,sig.14*rho*sig.16,rho*sig.7*sig.16,sig.16^2),
            nrow=4)
sig.6<-0.1
sig.15<-0.05

S.6<-matrix(c(sig.6^2,rho*sig.6*sig.15,
              rho*sig.6*sig.15,sig.15^2),
            nrow=2)
Tau.5<-solve(S.5)
Tau.6<-solve(S.6)

###Data Precision Matrices
sig.5.X<-0.2
sig.14.X<-0.2
sig.7.X<-1
sig.16.X<-2

S.5.X<-matrix(c(sig.5.X^2,rho*sig.5.X*sig.14.X,rho*sig.5.X*sig.7.X,rho*sig.5.X*sig.16.X,
                rho*sig.5.X*sig.14.X,sig.14.X^2,rho*sig.14.X*sig.7.X,rho*sig.14.X*sig.16.X,
                rho*sig.5.X*sig.7.X,rho*sig.7.X*sig.14.X,sig.7.X^2,rho*sig.7.X*sig.16.X,
                sig.5.X*sig.16.X*rho,rho*sig.16.X*sig.14.X,rho*sig.16.X*sig.7.X,sig.16.X^2),
              nrow=4)
Tau.5.1<-solve(S.5.X[1:2,1:2])

#Data list for the prior-only run: with N=0 the likelihood loop is skipped
data<-list(mu=c(.7,.8,3,3),
           Tau.5=Tau.5,
           Tau.5.1=Tau.5.1,
           mu.6=c(.3,.3),
           Tau.6=Tau.6,
           X=NA,
           N=0)

#Global MCMC parameters
n.chains<-3        #Number of chains for MCMC
n.burnin <- 10000  # Number of burn in iterations
n.thin<-20         #Thinning
#1 million simulations without the data
n.iter <- ceiling(1000000*n.thin/n.chains) + n.burnin

# Choose the parameters in the model to monitor
parameters.to.save <- c("theta.5.cor","NB")

#Use jags to sample from the parameters and find INB.theta
model.theta <- jags(
  data =  data,
  parameters.to.save = parameters.to.save,
  model.file = filein.model,
  n.chains = n.chains,
  n.iter = n.iter,
  n.thin = n.thin,
  n.burnin = n.burnin,DIC=F)

#Calculate INB.theta
INB.theta<-model.theta$BUGSoutput$sims.list$NB[,2]-model.theta$BUGSoutput$sims.list$NB[,1]
var(INB.theta)
#Generate N.q and phi.q
Q<-50
N.min<-5
N.max<-200
#Sample sizes evenly spaced on the square-root scale
N.q<-trunc(seq(sqrt(N.min),sqrt(N.max),length.out=Q)^2)

#phi.q is two dimensional
theta.5<-model.theta$BUGSoutput$sims.list$theta.5.cor[,1]
theta.5.q<-quantile(theta.5,probs=(1:Q/(Q+1)))
theta.14<-model.theta$BUGSoutput$sims.list$theta.5.cor[,2]
theta.14.q<-quantile(theta.14,probs=(1:Q/(Q+1)))

#Reorder phi.q randomly until it is essentially uncorrelated with N.q
while(abs(cor(N.q,theta.5.q))>0.001){theta.5.q<-sample(theta.5.q,Q,replace=F)}
while(abs(cor(N.q,theta.14.q))>0.001){theta.14.q<-sample(theta.14.q,Q,replace=F)}

#PHI FOR THIS EXAMPLE
phi.q<-cbind(theta.5.q,theta.14.q)

sigma.q<-array()
for(i in 1:Q){
  #For this loop set the sample size and phi value for generating the data.
  N<-N.q[i]
  mu<-phi.q[i,]
  #Generate X conditional on the specific N.q and phi.q
  X<-mvrnorm(N,mu,S.5.X[1:2,1:2])
  #Transpose so each observation is one column, matching X[,i] in the model
  X<-t(as.matrix(X))

  #Use X as data in the model
  data<-list(mu=c(.7,.8,3,3),
             Tau.5=Tau.5,
             Tau.5.1=Tau.5.1,
             mu.6=c(.3,.3),
             Tau.6=Tau.6,
             X=X,
             N=N)

  #Monitor the net benefit to calculate its posterior variance
  parameters.to.save <- c("NB")

  #Use fewer simulations to estimate the posterior variance.
  n.iter <- ceiling(10000*n.thin/n.chains) + n.burnin

  #Use jags to perform MCMC
  model.posterior <- jags(
    data =  data,
    parameters.to.save = parameters.to.save,
    model.file = filein.model,
    n.chains = n.chains,
    n.iter = n.iter,
    n.thin = n.thin,
    n.burnin = n.burnin,DIC=F)
  #Find posterior distribution for the INB
  INB.X<-model.posterior$BUGSoutput$sims.list$NB[,2]-model.posterior$BUGSoutput$sims.list$NB[,1]
  #Calculate the posterior variance
  sigma.q[i]<-var(INB.X)
}

#Find INB.phi
INB.phi<-gam(INB.theta~te(theta.5,theta.14))$fitted.values

####Calculate the EVSI####
EVSI_theta5_theta14<-calc.evsi.N(INB.phi,INB.theta,sigma.q,N.q)