Computer models, or simulators, are increasingly popular for gaining insights into physical processes and phenomena that are too expensive or infeasible to observe directly. For example, Greenberg (1979) developed a finite volume community ocean model (FVCOM) for simulating the flow of water in the Bay of Fundy; Bower et al. (2006) discussed the formation of galaxies using a simulator called GALFORM; and Bayarri et al. (2009) used a simulator called TITAN2D for modelling the maximum volcanic eruption flow height. Realistic computer simulators of complex processes can themselves be computationally expensive to run, and thus statistical surrogates trained on a handful of simulator runs are often used for a deeper understanding of the underlying phenomena. Sacks et al. (1989) proposed using a realization of a Gaussian process (GP) model as a surrogate for such processes.
The popular objectives of such computer experiments include global fitting, variable screening, and estimation of process features like the maximum, a pre-specified contour, or a tail quantile region. Assuming the simulator under consideration is expensive to run, the number of simulator runs is limited and one must be careful in choosing the inputs. Over the last two decades, several innovative methodologies and algorithms have been developed to address these concerns. See Santner, Williams and Notz (2003), Fang, Li and Sudjianto (2005) and Rasmussen and Williams (2006) for details.
We focus on efficient designs for global fitting. In the computer experiments literature, a popular technique is to use Latin hypercube designs (McKay et al., 1979) with some space-filling properties like maximin interpoint distance (Johnson et al., 1990, Morris and Mitchell, 1995), minimum pairwise coordinate correlation (Iman and Conover, 1982, Joseph and Hung, 2008), orthogonal array-based structure (Owen, 1992, Tang, 1993), projection properties (Joseph, Gul and Ba, 2015), etc. Such designs aim at filling the input space as evenly as possible, but do not consider the complexity of the response surface. On the other hand, D-optimal designs (Johnson et al., 1990), integrated mean squared prediction error (IMSPE)-optimal designs (Sacks, Schiller and Welch, 1989) and maximum mean squared prediction error (MMSPE)-optimal designs (Sacks and Schiller, 1988) use the process response information in finding a design for global fitting.
Most of these designs follow a one-shot approach, i.e., all design points are obtained at the same time. However, over the past decade, a few sequential designs with higher prediction accuracy have also been proposed for global fitting of the response surface, for instance, the D-optimal design (Gramacy and Lee, 2009), the expected improvement (EI) criterion-based design (Lam and Notz, 2008), and the minimum potential energy-based design (Joseph et al., 2015). In this paper, we propose two new sequential design approaches for global fitting.
The main idea behind our proposed approaches comes from the fact that the estimation of a response surface can be approximated by the estimation of a large number of contours over the range of the responses. This motivated us to generalize the contour estimation idea of Ranjan et al. (2008) for our objective. In this paper, we propose two generalizations. First, we recommend splitting the range of simulator outputs into equi-spaced contour levels and then develop a new EI criterion for the simultaneous estimation of these pre-specified multiple contours. Second, we propose a new adaptive approach that chooses the contour level for selecting the follow-up trial by maximizing the EI criterion for contour estimation. The performance of the proposed approaches is compared with several state-of-the-art designs for global fitting.
The remainder of the article is organized as follows. Section 2 presents a quick review of the Gaussian process (GP) model for building a surrogate of the computer model output, popular sequential design approaches for global fitting (Lam and Notz, 2008, Joseph et al., 2015) and the EI criterion for contour estimation (Ranjan et al., 2008). Section 3 presents the new multiple contour estimation-based EI method for constructing designs for global fitting of the response surface. In Section 4, we propose the new adaptive method of estimating the contour levels for choosing follow-up design points in the sequential framework. The performance comparison of the proposed methods and the existing approaches is discussed in Section 5. Finally, Section 6 presents the key findings and concluding remarks.
2 Background Review
This section reviews the necessary background and the existing relevant work for later development. More specifically, we briefly review the Gaussian process models used throughout, the existing sequential design approaches for global fitting, and the contour estimation approach of Ranjan et al. (2008). Although these topics are easily accessed in the literature, we include them here to keep this article self-contained.
2.1 Gaussian Process Models
Gaussian process models are the most widely used models in computer experiments for emulating outputs from computer codes (e.g., Sacks et al., 1989). Their popularity is due to their simplicity, flexibility, and ability to quantify predictive uncertainty. Here we cover the key concepts of GP models and refer the reader to Santner, Williams and Notz (2003) and Rasmussen and Williams (2006) for details. For training data of size $n$, let the $i$th input and output of a computer code be a $d$-dimensional vector $x_i = (x_{i1}, \ldots, x_{id})'$ and a scalar $y_i = y(x_i)$, for $i = 1, \ldots, n$. Typically, without loss of generality, the design domain is assumed to be a unit hypercube, $[0,1]^d$. A GP model assumes

$$y(x_i) = f'(x_i)\beta + z(x_i), \quad i = 1, \ldots, n, \qquad (1)$$

where $f(x_i)$ is a vector of regression functions, $\beta$ is the vector of regression parameters, and $z(x)$ is a stationary stochastic process with mean zero, constant variance $\sigma^2$, and the correlation between two outputs $z(x_i)$ and $z(x_j)$ denoted by $R(x_i, x_j)$. In this article, we focus on Gaussian process models with a constant mean, that is, $f'(x_i)\beta = \mu$. Let $y = (y_1, \ldots, y_n)'$ be the vector of responses for the training data and $R$ be an $n \times n$ spatial correlation matrix with the $(i,j)$th element $R(x_i, x_j)$. The GP model in (1) is then equivalent to assuming that $y$ follows a multivariate normal distribution with mean vector $\mu 1_n$ and covariance matrix $\Sigma = \sigma^2 R$, where $1_n$ is an $n$-dimensional column vector of all 1's. Notationally, we denote $y \sim N(\mu 1_n, \sigma^2 R)$. There are many choices of valid correlation functions. One popular choice is the Gaussian correlation function,

$$R(x_i, x_j) = \prod_{k=1}^{d} \exp\{-\theta_k (x_{ik} - x_{jk})^2\}, \qquad (2)$$

where $\theta_k$ is the correlation parameter for the $k$th input variable. The unknown parameters in the model include the mean $\mu$, the variance $\sigma^2$, and the correlation parameters $\theta = (\theta_1, \ldots, \theta_d)$. They can be estimated via the maximum likelihood approach or a Bayesian approach such as Markov chain Monte Carlo (MCMC) (Santner et al., 2003, Fang et al., 2005, Currin et al., 1988, Linkletter et al., 2006). For the maximum likelihood approach, if the correlation parameters are known, the estimates of $\mu$ and $\sigma^2$ in (1) are

$$\hat\mu = \frac{1_n' R^{-1} y}{1_n' R^{-1} 1_n} \qquad (3) \qquad \text{and} \qquad \hat\sigma^2 = \frac{(y - 1_n\hat\mu)' R^{-1} (y - 1_n\hat\mu)}{n}. \qquad (4)$$

The best linear unbiased predictor (BLUP) at an input $x$ is given by

$$\hat{y}(x) = \hat\mu + r'(x) R^{-1} (y - 1_n\hat\mu), \qquad (5)$$

where $r(x) = (R(x, x_1), \ldots, R(x, x_n))'$. Moreover, the predictive variance of $\hat{y}(x)$ is

$$s^2(x) = \sigma^2 \left(1 - r'(x) R^{-1} r(x) + \frac{(1 - 1_n' R^{-1} r(x))^2}{1_n' R^{-1} 1_n}\right). \qquad (6)$$

In practice, the unknown correlation parameters in (3) and (4) are replaced with their estimates. Thus, $\mu$, $\sigma^2$, $R$ and $r(x)$ in (5) and (6) are replaced by $\hat\mu$, $\hat\sigma^2$, $\hat{R}$ and $\hat{r}(x)$, respectively. A number of R packages provide GP model fitting, including mlegp, GPfit, DiceKriging, tgp, RobustGaSP and SAVE (Dancik, 2018, MacDonald, Ranjan and Chipman, 2015, Roustant, Ginsbourger and Deville, 2018, Gramacy and Taddy, 2016, Gu, Palomo and Berger, 2018, Palomo, Paulo and Garcia-Donato, 2015). These packages differ in computational efficiency and stability, but in general they provide similar results. For stability, we use the R package GPfit (MacDonald et al., 2015) in this article.
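The constant-mean GP prediction equations (5) and (6) can be sketched in a few lines of Python. This is an illustrative implementation only (the article itself uses the R package GPfit), with the correlation parameters treated as known and a small nugget added for numerical stability:

```python
import numpy as np

def gp_fit_predict(X, y, theta, Xnew):
    """Constant-mean GP with Gaussian correlation: returns (yhat, s2) at Xnew.

    Minimal sketch of (3)-(6); `theta` is assumed known here, whereas in
    practice it is estimated (e.g., by maximum likelihood).
    """
    def corr(A, B):
        # R_ij = prod_k exp(-theta_k * (a_ik - b_jk)^2), the Gaussian correlation (2)
        d2 = (A[:, None, :] - B[None, :, :]) ** 2
        return np.exp(-(d2 * theta).sum(axis=2))

    n = len(y)
    R = corr(X, X) + 1e-8 * np.eye(n)          # small nugget for stability
    Rinv = np.linalg.inv(R)
    one = np.ones(n)
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)           # (3)
    sigma2 = (y - mu) @ Rinv @ (y - mu) / n              # (4)
    r = corr(Xnew, X)                                     # cross-correlations
    yhat = mu + r @ Rinv @ (y - mu)                       # BLUP (5)
    s2 = sigma2 * (1.0 - np.sum((r @ Rinv) * r, axis=1)
                   + (1.0 - r @ Rinv @ one) ** 2 / (one @ Rinv @ one))  # (6)
    return yhat, np.maximum(s2, 0.0)
```

As a sanity check, the predictor interpolates the training data (zero predictive variance at sampled points), which is the property exploited by the sequential criteria below.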
2.2 Existing Sequential Design Approaches for Global Fitting
The general setup of a sequential design approach starts with an initial design and adds one point or a batch of points at a time sequentially. We focus on sequential approaches that add one point at a time. The next follow-up point should be chosen based on the information gathered from the existing data and should be the most informative among the candidate points. The process of adding points is repeated until a tolerance-based stopping criterion is met or a pre-specified budget is exhausted. The step-by-step process of the sequential design approach is as follows.
Step 1. Choose an initial design $\{x_1, \ldots, x_{n_0}\}$ of run size $n_0$. Set $n = n_0$.
Step 2. Build a statistical surrogate model using the available data $\{(x_i, y_i), i = 1, \ldots, n\}$.
Step 3. Choose the next design point $x_{n+1}$ based on a criterion. Run the computer code at the new input and obtain the corresponding response $y_{n+1}$.
Step 4. Set $n = n + 1$ and repeat Steps 2 and 3 until the run size budget is exhausted or the stopping criterion is satisfied.
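The steps above can be sketched as a generic one-point-at-a-time loop. This is an illustrative Python skeleton, not code from the article; `fit`, `criterion`, and the candidate set `cand` are placeholders for the concrete choices discussed in the following subsections:

```python
import numpy as np

def sequential_design(simulator, init_design, budget, fit, criterion, cand):
    """Generic one-point-at-a-time sequential design (Steps 1-4).

    `fit` builds a surrogate from (X, y); `criterion` scores each candidate
    given the surrogate; the best-scoring candidate is run next.
    """
    X = np.asarray(init_design, dtype=float)          # Step 1: initial design
    y = np.array([simulator(x) for x in X])
    while len(y) < budget:
        surrogate = fit(X, y)                          # Step 2: build surrogate
        scores = np.array([criterion(surrogate, x) for x in cand])
        x_new = np.asarray(cand[int(np.argmax(scores))], dtype=float)  # Step 3
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(x_new))             # run the computer code
    return X, y                                        # Step 4: repeat to budget
```

Any of the criteria below (EIGF, SMED, the contour-based EI criteria) can be plugged in as `criterion`.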
A few remarks are in order. First, the initial design typically comes with some space-filling property like maximin interpoint distance, minimum pairwise coordinate correlation, etc. If the initial run size $n_0$ is too small, the resulting surrogate model could be wildly inaccurate and mislead the choice of follow-up points. On the other hand, if $n_0$ is relatively large, the approach may not take full advantage of the sequential design criterion in Step 3. The notion of the expected improvement (EI) criterion has become extremely popular for choosing follow-up design points since Jones et al. (1998) developed an EI criterion for finding the global minimum of a computer simulator response. One recommendation, given by Ranjan et al. (2008) based on their sequential design approach for contour estimation, is to take $n_0$ to be a fixed fraction of the ultimate run size budget. Second, the run size budget certainly depends on the computer code of interest. Loeppky et al. (2009) provided a rule of thumb for selecting a sample size, namely, 10 times the number of input variables; in our illustrative examples, the total run size is at least that large. Third, in principle, any modelling method such as GP, treed GP (TGP), or Bayesian additive regression trees (BART) (Gramacy and Lee, 2008, Chipman et al., 2012) can be used as a surrogate in Step 2. We focus on GP modelling in the examples.
2.2.1 Expected improvement criterion by Lam and Notz (2008)
Lam and Notz (2008) introduced a sequential design approach based on an expected improvement for global fit (EIGF) criterion, which chooses the next input point that maximizes the expected improvement

$$E[I(x)], \qquad (7)$$

where the improvement function is defined as

$$I(x) = \left(y(x) - y(x_{j^*})\right)^2, \qquad (8)$$

with $y(x_{j^*})$ being the observed output at the sampled point $x_{j^*}$ that is closest in distance to the candidate point $x$. They use the Euclidean distance to determine this nearest sampled design point. The expectation in (7) is taken with respect to the predictive distribution of $y(x)$ under the GP model, i.e., $y(x) \sim N(\hat{y}(x), s^2(x))$, which yields the closed form $E[I(x)] = (\hat{y}(x) - y(x_{j^*}))^2 + s^2(x)$. The EIGF criterion in (7) balances the local and global search of the next potential design input, guiding the search toward "informative" regions with significant variation in the response values.
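The closed form above is straightforward to evaluate given a fitted GP. A minimal Python sketch (not the authors' code), where `yhat` and `s2` are hypothetical names for the GP prediction and predictive variance at a candidate `x`:

```python
import numpy as np

def eigf(yhat, s2, x, X_obs, y_obs):
    """EIGF criterion of Lam and Notz (2008) at candidate x:
    (yhat(x) - y(x_j*))^2 + s^2(x), where x_j* is the sampled point
    nearest to x in Euclidean distance."""
    j = int(np.argmin(np.linalg.norm(np.asarray(X_obs) - np.asarray(x), axis=1)))
    return (yhat - y_obs[j]) ** 2 + s2
```

The first term rewards candidates whose predicted response differs sharply from the nearest observation (local search); the variance term rewards unexplored regions (global search).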
2.2.2 Sequential minimum energy designs by Joseph et al. (2015)
Motivated by the fact in physics that charged particles in a box repel and try to remain as far away from each other as possible, Joseph et al. (2015) viewed a space-filling design in the experimental region as the positions occupied by charged particles in a box, with the charge of each particle representing the experimental response. A minimum energy design is obtained by minimizing the potential energy. Let $q(x)$ be the charge of the particle at the design input $x$ and $d(x_i, x_j)$ denote the Euclidean distance between the $i$th and the $j$th inputs. Joseph et al. (2015) defined the potential energy of an $n$-point design as

$$E_n = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{q(x_i)\, q(x_j)}{d(x_i, x_j)}, \qquad (9)$$

where the charge function $q(\cdot)$ is positive over the range of the response. They further proposed a sequential minimum energy design approach which works as follows. Let $q(x) = 1/y(x)^{1/(2p)}$, where $p$ is the dimensionality of the input $x$. Then the proposed one-point-at-a-time greedy algorithm finds the next follow-up design point given by

$$x_{n+1} = \arg\min_{x} \max_{1 \le i \le n} \frac{q(x_i)\, q(x)}{d(x_i, x)}.$$
The design generated by this algorithm is called a sequential minimum energy design (SMED).
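One greedy SMED step can be sketched as follows. This is only an illustrative sketch under stated assumptions: charges $q(x) = y(x)^{-1/(2p)}$ with positive responses (shift or rescale them first otherwise), and predicted responses `yhat_cand` standing in for the unobserved charges at candidate points:

```python
import numpy as np

def smed_next_index(X_obs, y_obs, cand, yhat_cand, p):
    """Greedy SMED step: return the index of the candidate that minimizes
    the largest pairwise potential q(x_i) * q(x) / d(x_i, x) against the
    current design."""
    q_obs = np.asarray(y_obs, dtype=float) ** (-1.0 / (2 * p))
    q_cand = np.asarray(yhat_cand, dtype=float) ** (-1.0 / (2 * p))
    vals = []
    for x, qx in zip(cand, q_cand):
        d = np.linalg.norm(np.asarray(X_obs) - np.asarray(x), axis=1)
        vals.append(np.max(q_obs * qx / d))   # worst-case potential vs. design
    return int(np.argmin(vals))
```

With equal charges the rule reduces to a maximin-distance choice; unequal charges pull points toward regions with small (i.e., highly charged) responses.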
2.3 Contour Estimation via EI Criterion
The contour at level "$a$" of a simulator response surface consists of all the inputs $x$ that yield the same response $a$, that is,

$$S(a) = \{x : y(x) = a\}. \qquad (10)$$

Ranjan et al. (2008) developed an expected improvement criterion under the sequential design methodology for estimating a contour from an expensive-to-evaluate computer simulator with scalar responses. The proposed improvement function is

$$I(x) = \epsilon^2(x) - \min\left\{(y(x) - a)^2, \ \epsilon^2(x)\right\}, \qquad (11)$$

where $y(x)$ has a normal predictive distribution, i.e., $y(x) \sim N(\hat{y}(x), s^2(x))$, and $\epsilon(x) = \alpha s(x)$ for a positive constant $\alpha$. A suggested value for $\alpha$ is 1.96, as this value defines a region of interest around $\hat{y}(x)$ corresponding to a 95% confidence interval under the normality assumption of the responses. Letting $u_1 = (a - \hat{y}(x) - \epsilon(x))/s(x)$ and $u_2 = (a - \hat{y}(x) + \epsilon(x))/s(x)$, the closed form of the expectation of the improvement function with respect to the predictive distribution of $y(x)$ is given by

$$E[I(x)] = \left(\epsilon^2(x) - (\hat{y}(x) - a)^2 - s^2(x)\right)\left(\Phi(u_2) - \Phi(u_1)\right) + 2\left(\hat{y}(x) - a\right)s(x)\left(\phi(u_2) - \phi(u_1)\right) + s^2(x)\left(u_2\phi(u_2) - u_1\phi(u_1)\right), \qquad (12)$$

where $\phi$ and $\Phi$ are the probability density function and the cumulative distribution function of a standard normal random variable, respectively. See Ranjan et al. (2008) and the associated Errata for the derivation of (12). The first term in (12) favours inputs with large predictive uncertainty in the neighbourhood of the predicted contour, while the last term assigns weight to points that are far away from the predicted contour with large uncertainties. The second term is often dominated by the other two terms in (12). Maximizing the EI criterion in (12) selects inputs with high uncertainty near the predicted contour as well as those far away, achieving both local search and global exploration.
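The closed form (12) translates directly to code. A minimal Python sketch (not the authors' implementation), assuming `scipy` for the standard normal density and CDF, with `yhat` and `s` the GP prediction and predictive standard deviation at a candidate point:

```python
import numpy as np
from scipy.stats import norm

def contour_ei(yhat, s, a, alpha=1.96):
    """EI criterion of Ranjan et al. (2008) for the contour y(x) = a,
    following the closed form in (12)."""
    eps = alpha * s
    u1 = (a - yhat - eps) / s
    u2 = (a - yhat + eps) / s
    term1 = (eps**2 - (yhat - a)**2 - s**2) * (norm.cdf(u2) - norm.cdf(u1))
    term2 = 2.0 * (yhat - a) * s * (norm.pdf(u2) - norm.pdf(u1))
    term3 = s**2 * (u2 * norm.pdf(u2) - u1 * norm.pdf(u1))
    return term1 + term2 + term3
```

A quick Monte Carlo check against $E[\max\{\epsilon^2 - (y - a)^2, 0\}]$, which equals the improvement (11) in expectation, confirms the formula.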
3 Global Fitting by Estimating Multiple Contours
This section proposes a new method for constructing a sequential design that achieves higher prediction accuracy of the overall global fit. The basic sequential framework remains the same as in Section 2.2: start with a good initial design (e.g., a maximin Latin hypercube) of size $n_0$ and then sequentially add the remaining points using some method that serves the objective of global fitting. Instead of the conventional approach of trying to evenly fill the input space, the proposed idea is to slice the response surface into multiple contours and then use the sequential design approach to simultaneously estimate those contours. Next, we generalize the EI criterion for contour estimation (Ranjan et al., 2008) to the simultaneous estimation of multiple contours.
For a given integer $k$ and a set of scalar values $a_1, \ldots, a_k$, suppose that we are interested in estimating the $k$ contours $S(a_1), \ldots, S(a_k)$, where $a_j \in [y_{\min}, y_{\max}]$, $[y_{\min}, y_{\max}]$ represents the range of the true simulator response, and $S(a)$ is defined in (10). Without loss of generality, assume $a_1 < a_2 < \cdots < a_k$. For choosing the follow-up trial, we propose the improvement function at input $x$ as

$$I(x) = \epsilon^2(x) - \min\left\{\min_{1 \le j \le k}(y(x) - a_j)^2, \ \epsilon^2(x)\right\}, \qquad (13)$$

where $\epsilon(x) = \alpha s(x)$ and $s^2(x)$ is the predictive variance in (6), for some positive constant $\alpha$. This improvement function will be non-zero only if $(y(x) - a_j)^2 < \epsilon^2(x)$ for some $j \in \{1, \ldots, k\}$. Therefore, when the $\epsilon$-bands around distinct contour levels do not overlap, the improvement function can be re-written as

$$I(x) = \sum_{j=1}^{k} \left(\epsilon^2(x) - (y(x) - a_j)^2\right) 1\left\{|y(x) - a_j| < \epsilon(x)\right\}.$$

Since $a_1 < a_2 < \cdots < a_k$, at most one term of the sum is non-zero at any input, and the improvement function further simplifies to the contribution from the contour level closest to $y(x)$.
The quantity $\epsilon(x) = \alpha s(x)$ defines an uncertainty band around each contour that is a function of the predictive standard deviation. For the design points already chosen, the radius of the band is exactly zero. In addition, the criterion will tend to be large for samples from the sets $\{x : |y(x) - a_j| < \epsilon(x)\}$ where $s(x)$ is large.
Similar to other sequential design approaches, we suggest choosing follow-up design points by maximizing the corresponding expected improvement, where the expectation is taken with respect to the predictive distribution $y(x) \sim N(\hat{y}(x), s^2(x))$. For $j = 1, \ldots, k$, let $u_{1j}$ and $u_{2j}$ be defined as follows,

$$u_{1j} = \frac{a_j - \hat{y}(x) - \epsilon(x)}{s(x)} \qquad \text{and} \qquad u_{2j} = \frac{a_j - \hat{y}(x) + \epsilon(x)}{s(x)}.$$

Then, the expectation of the improvement function in (13) is simply the sum of the individual contour estimation EI criteria of Ranjan et al. (2008) over the $k$ cases, i.e.,

$$E[I(x)] = \sum_{j=1}^{k}\Big[\left(\epsilon^2(x) - (\hat{y}(x) - a_j)^2 - s^2(x)\right)\left(\Phi(u_{2j}) - \Phi(u_{1j})\right) + 2\left(\hat{y}(x) - a_j\right)s(x)\left(\phi(u_{2j}) - \phi(u_{1j})\right) + s^2(x)\left(u_{2j}\phi(u_{2j}) - u_{1j}\phi(u_{1j})\right)\Big], \qquad (14)$$

where $\phi$ and $\Phi$ are the probability density function and the cumulative distribution function of a standard normal random variable, respectively. The formulation in (14) reduces to (12) when the number of contour levels is $k = 1$.
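Because the criterion is a sum of single-contour EI terms, it is straightforward to evaluate. A minimal Python sketch (assuming `scipy`; this is an illustration, not the authors' code), with the default $\alpha = 2$ used later in the simulations:

```python
import numpy as np
from scipy.stats import norm

def multi_contour_ei(yhat, s, levels, alpha=2.0):
    """MC criterion: sum of the single-contour EI of Ranjan et al. (2008)
    over the pre-specified levels a_1 < ... < a_k."""
    total = 0.0
    for a in levels:
        eps = alpha * s
        u1 = (a - yhat - eps) / s
        u2 = (a - yhat + eps) / s
        total += ((eps**2 - (yhat - a)**2 - s**2) * (norm.cdf(u2) - norm.cdf(u1))
                  + 2.0 * (yhat - a) * s * (norm.pdf(u2) - norm.pdf(u1))
                  + s**2 * (u2 * norm.pdf(u2) - u1 * norm.pdf(u1)))
    return total
```

With a single level the function returns the ordinary contour EI; levels placed symmetrically around the prediction contribute equally.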
Note that the maximization of $E[I(x)]$ over the input space has two advantages. First, the true value of $y(x)$ (and hence $I(x)$) is unknown for any unsampled design point, and the expectation accounts for this. Second, some regions of the design space may not yet have been sufficiently explored, where the predictive variance of $\hat{y}(x)$ is relatively high. For such an unsampled design point, even though the predicted response is not within the $\epsilon$-band of any of the contours, the true contours may still lie in the unexplored region. As a result, the EI approach facilitates a balance between local exploitation and global exploration.
A naive way to choose $k$, the number of contours, and the contour levels $a_1, \ldots, a_k$ for global fitting is to use equi-spaced contours in the simulator output range $[y_{\min}, y_{\max}]$; finding their optimal values appears to be a challenging task. We now present two illustrations of the proposed multiple contour estimation EI criterion (referred to as the MC criterion) for global fitting with different values of $k$.
Example 1. Consider the computer model (Gramacy and Lee, 2012) that relates the one-dimensional input $x \in [0.5, 2.5]$ and the output $y$ as

$$y(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4.$$
The true relationship between the input and the output is displayed as the blue solid curve in Figure 1. Five initial design points are shown by black empty circles. We then sequentially add 15 design points using the proposed MC criterion. The numerical labels represent the order in which the new design points are added. Figures 1(a), 1(b), and 1(c) illustrate the sequential design scheme with the MC criterion for 5, 20, and 50 equally spaced contour levels within the range of the fitted surface.
Figure 1 clearly reveals that the proposed MC criterion-based sequential design approach chooses inputs around the areas where the function changes direction, and locates most of the points in the areas where the computer model is more complex. The results show that the final design is in general space-filling but with some nearby points, for example, points 10 and 15 in Figure 1(c).
Example 2. Consider a computer model with two-dimensional input $x = (x_1, x_2)$ and the output given by
Suppose a maximin Latin hypercube design of 10 points is generated and the corresponding responses are collected from the computer model. First, we consider searching for the next follow-up design point for estimating only one contour at a given level $a$. Figure 2 shows these 10 design points, the inputs that yield non-zero improvement, and the maximizer of the EI criterion for contour estimation in (12), for a candidate set on a regular rectangular grid.
Next, we consider the simultaneous estimation of three contours at three pre-specified levels using the proposed MC criterion. Figure 3 shows the inputs from the same grid candidate set that achieve non-zero improvement in (13), and the point that maximizes the MC criterion. This point, shown in red in Figure 3, is in fact from the set of points that yield non-zero improvement around one of the three contour levels.
Figure 4 illustrates the complete sequential design scheme with the initial design points and the follow-up design points for simultaneously estimating the three contours. The red points are the new follow-up points, and each label corresponds to the order in which the point was added. The last panel displays the squared distance between the estimated contours and the true contours at each stage. It can be observed from Figure 4 that the estimated uncertainty bands around the three contours become narrower and more accurate as points are added. It can also be seen that more points are added to estimate one of the contour levels than the other two, and some points, such as the second and third, lie away from the contour bands.
It is clear from the two examples that the resulting designs do not have the conventional space-filling property. This is desirable, as the objective is an overall good fit of the response surface rather than mere exploration of the input space. However, as illustrated in Figure 4, a significant fraction of the design points tend to line up on the pre-specified contours, which could lead to biased designs if $a_1, \ldots, a_k$ are not chosen appropriately. Next, we propose an efficient method of selecting contour levels.
4 Sequential Estimation of Contours for Global Fitting
In this section we propose a new approach for choosing the follow-up design points. In contrast to the previous section, where the simultaneous estimation of multiple contours was used for global fitting, we adopt the EI criterion for estimating only one contour at each stage. More importantly, we propose using different contour levels at different stages. That is, we sequentially find the design points that maximize the criterion (12) with a different value of $a$ at each stage. The important issue is how to choose the contour level at each stage, and we propose the following automatic way of doing so.
Suppose at stage $t$, the training data are $\{(x_i, y_i), i = 1, \ldots, n_t\}$ and the corresponding emulator gives the predictive distribution $y(x) \sim N(\hat{y}_t(x), s_t^2(x))$ for any input $x$. Let the candidate set for the next follow-up point be $X_{cand}$ and

$$x_t^* = \arg\max_{x \in X_{cand}} s_t^2(x).$$

Then, we choose the contour level at stage $t$ as $a_t = \hat{y}_t(x_t^*)$. In other words, at each stage, we set the contour level to be the fitted response that has the maximum predictive variance. This encourages exploring the area with maximum uncertainty.
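The contour-level rule amounts to one line of code. A sketch, where `yhat_cand` and `s2_cand` are the GP predictions and predictive variances over the candidate set at the current stage:

```python
import numpy as np

def next_contour_level(yhat_cand, s2_cand):
    """SCvar rule: the contour level for the next stage is the fitted
    response at the candidate with maximum predictive variance."""
    return float(np.asarray(yhat_cand)[int(np.argmax(s2_cand))])
```

The returned level is then plugged into the single-contour EI criterion (12) to select the follow-up point.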
Example 2 (contd.) Consider finding a design for global fitting of the computer simulator in Example 2. The procedure starts with an initial design obtained via maximin Latin hypercube sampling, and the follow-up points are chosen as per the proposed sequential strategy. Figure 5 displays the follow-up design points found by the proposed method, i.e., the sequential contour estimation-based EI criterion.
Note that the resulting design is more randomized compared to the systematic layout of points on the contour lines shown in Figure 4. Again, the design is not completely space-filling and has some pairs of close-by points.
5 Simulated Examples
In this section, we conduct a simulation study to demonstrate the effectiveness of the proposed sequential design approaches. Specifically, we compare the proposed approaches with the following methods:
a one-shot maximin Latin hypercube design;
the sequential D-optimal design in the R package tgp;
the sequential approach by Lam and Notz (2008);
the sequential minimum energy design in Joseph et al. (2015);
the proposed multiple contours estimation-based criterion in Section 3;
the proposed sequential contour estimation-based criterion in Section 4.
For approach (e), we use 10 contour levels, that is, $k$ is set to be 10. These methods are denoted by 'maximinLHD', 'tgp', 'EIGF', 'SMED', 'MC10', and 'SCvar', respectively.
Several criteria can be used to evaluate the performance of the different design approaches in comparison. We adopt the root mean squared prediction error (RMSPE) given by

$$\text{RMSPE} = \sqrt{\frac{1}{N}\sum_{x \in X_{test}} \left(\hat{y}(x) - y(x)\right)^2},$$

where $\hat{y}(x)$ and $y(x)$ are the predicted response and the true response at the new input $x$ in the hold-out set $X_{test}$ of size $N$. Another criterion we use is the maximum error given by

$$\max_{x \in X_{test}} \left|\hat{y}(x) - y(x)\right|.$$
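Both accuracy measures are trivial to compute over the hold-out set; a short Python sketch:

```python
import numpy as np

def rmspe(y_pred, y_true):
    """Root mean squared prediction error over the hold-out set."""
    return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)))

def max_error(y_pred, y_true):
    """Maximum absolute prediction error over the hold-out set."""
    return float(np.max(np.abs(np.asarray(y_pred) - np.asarray(y_true))))
```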
For each example below, the initial design for the sequential approaches is a maximin Latin hypercube design of $n_0$ runs generated using the R package SLHD (Ba, 2015). The model fitting is implemented using the default settings of the function GP_fit in the R package GPfit. The test data is a random Latin hypercube design of $N$ points, where $N$ is chosen in proportion to $d$, the number of input variables. The parameter $\alpha$ in 'MC10' and 'SCvar' is set to be 2.
Example 3. We consider a computer model with two input variables $x_1$ and $x_2$,

$$y(x) = \left(x_2 - \frac{5.1 x_1^2}{4\pi^2} + \frac{5 x_1}{\pi} - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10,$$

with $x_1 \in [-5, 10]$ and $x_2 \in [0, 15]$. This model is known as the Branin function (Dixon and Szego, 1978). We use an initial design of $n_0$ points; the total run size budget is 30. Figure 6 displays the boxplots of RMSPEs and maximum errors of the different design approaches over 50 simulations. The results show that in this example the one-shot approach 'maximinLHD' is the worst, while 'MC10' and 'SCvar' are comparable and better than the others.
Example 4. We consider a computer model with three input variables $x_1$, $x_2$ and $x_3$,
We use an initial design of $n_0$ points; the total run size budget is 60. Figure 7 displays the boxplots of RMSPEs and maximum errors of the different design approaches over 50 simulations. Again, the one-shot approach 'maximinLHD' is the worst. The approaches 'tgp', 'MC10' and 'SCvar' are comparable and better than the others in terms of RMSPE, and the proposed approaches 'MC10' and 'SCvar' are significantly better than the others in terms of maximum error.
Example 5. We consider a computer model with four input variables $x_1$, $x_2$, $x_3$ and $x_4$,
We use an initial design of $n_0$ points; the total run size budget is 80. Figure 8 displays the boxplots of RMSPEs and maximum errors of the different design approaches over 50 simulations. Here, the approach 'EIGF' is the worst, followed by 'maximinLHD', in terms of both criteria. The performance of the other four approaches is similar based on RMSPE. However, based on the maximum error, the proposed approaches give more accurate predictions.
6 Concluding Remarks
In this article we have developed two sequential design approaches for accurately predicting the output of a complex computer code. The approaches are based on expected improvement criteria for simultaneously or sequentially estimating contours. We used a Gaussian process (GP) model as a surrogate for the computer simulator, which is an integral component of the proposed criteria for identifying the follow-up trials. Numerical examples demonstrate that the proposed approaches can significantly outperform the existing approaches.
Note that if some other surrogate is used instead of the GP model, the key ideas, such as the formulation of the improvement function and the sequential estimation of contour levels, can still be retained. Of course, the resulting expected improvement criteria would change, and one may not even obtain a closed form expression for the final design criterion for selecting follow-up points. Future work also includes the application of the proposed contour estimation-based sequential design approaches to global fitting for computer experiments with both qualitative and quantitative factors (Deng et al., 2017) and dynamic computer experiments (Zhang, Lin and Ranjan, 2018).
Ba, S. (2015). SLHD: Maximin-Distance (Sliced) Latin Hypercube Designs. R package version 2.1-1.
Bayarri, M. J., Berger, J. O., Calder, E. S., Dalbey, K., Lunagomez, S., Patra, A. K., Pitman, E. B., Spiller, E. T., and Wolpert, R. L. (2009). “Using statistical and computer models to quantify volcanic hazards,” Technometrics, 51, 402–413.
Bower, R. G., Benson, A. J., et al. (2006). “The broken hierarchy of galaxy formation,” Monthly Notices of the Royal Astronomical Society, 370, 645–655.
Chipman, H., Ranjan, P., and Wang, W. (2012). “Sequential design for computer experiments with a flexible Bayesian additive model,”Canadian Journal of Statistics, 40(4), 663-678.
Dancik, G. (2018). mlegp: Maximum Likelihood Estimates of Gaussian Processes. R package version 3.1.7.
Deng, X., Lin, C.D., Liu, K.W., and Rowe, R.K. (2017). “Additive Gaussian process for computer models with qualitative and quantitative factors,” Technometrics, 59, 283-292.
Dixon, L. C. W. and Szego, G. P. (1978). “The global optimization problem: an introduction, ”Towards Global Optimization, 2, 1-15.
Fang, K. T., Li, R. and Sudjianto, A. (2005). Design and Modeling for Computer Experiments, New York: Chapman & Hall/CRC Press.
Gramacy, R.B. and Lee, H.K.H. (2008). “Bayesian treed Gaussian process models with an application to computer modeling,”Journal of the American Statistical Association, 103(483), 1119–1130.
Gramacy, R.B. and Lee, H.K.H. (2009), “Adaptive design and analysis of supercomputer experiments,” Technometrics, 51(2), 130–145.
Gramacy, R. B. and Lee, H.K. (2012). “Cases for the nugget in modeling computer experiments,” Statistics and Computing, 22(3), 713–722.
Gramacy, R. B. and Taddy, M.A. (2016). tgp: Bayesian Treed Gaussian Process Models. R package version 2-4-14.
Greenberg, D. (1979). “A numerical model investigation of tidal phenomena in the Bay of Fundy and Gulf of Maine,” Marine Geodesy, 2, 161–187.
Gu, M. Palomo, J., and Berger, J. (2018). RobustGaSP: Robust Gaussian Stochastic Process Emulation. R package version 0.5.6.
Han, G., Santner, T. J., Notz, W. I., and Bartel, D. L. (2009). “Prediction for computer experiments having quantitative and qualitative input variables,” Technometrics, 51, 278–288.
Iman, R. L. and Conover, W. J. (1982). “A distribution-free approach to inducing rank correlation among input variables,” Communication in Statistics Part B-Simulation and Computing, 11, 311–334.
Jesus, P., Garcia-Donato, G., Paulo, R., Berger, J., Bayarri, M., and Sacks, J. ( 2019). SAVE: Bayesian Emulation, Calibration and Validation of Computer Models. R package version 1.0.
Johnson, M., Moore, L. and Ylvisaker, D. (1990). “Minimax and maximin distance design,” Journal of Statistical Planning and Inference, 26, 131–148.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998). “Efficient global optimization of expensive black-box functions, ”Journal of Global Optimization, 13(4), 455–492.
Joseph, V.R., Dasgupta, T., Tuo, R., and Wu, C.J. (2015). “Sequential exploration of complex surfaces using minimum energy designs,”Technometrics, 57, 64–74.
Joseph, V.R., Gul, E. and Ba, S., (2015). “Maximum projection designs for computer experiments,” Biometrika, 102, 371–380.
Joseph, V.R., and Hung, Y. (2008). “Orthogonal-maximin Latin hypercube designs,”Statistica Sinica, 18, 171–186.
Lam, C.Q. and Notz, W.I. (2008). “Sequential adaptive designs in computer experiments for response surface model fit,” Statistics and Applications, 6, 207–233.
Loeppky, J.L., Sacks, J., and Welch, W.J. (2009). “Choosing the sample size of a computer experiment: a practical guide,”Technometrics, 51(4), 366–376.
MacDonald B, Ranjan P, and Chipman H. (2015). GPfit: an R Package for Gaussian Process Model Fitting Using a New Optimization Algorithm. R package version 1.0-1.
McKay, M. D., Beckman, R. J., and Conover, W. J. (1979). “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code,” Technometrics, 21, 239–45.
Morris, M. D. and Mitchell, T. J. (1995). “Exploratory designs for computer experiments, ” Journal of Statistical Planning and Inference, 43, 381–402.
Owen, A.B., (1992). “Orthogonal arrays for computer experiments, integration and visualization,” Statistica Sinica, 2, 439–452.
Palomo, J., Paulo, R., and Garcia-Donato, G., (2015). “SAVE: An r package for the statistical analysis of computer models”. Journal of Statistical Software, 64(13), 1–23.
Roustant, O., Ginsbourger, D., and Deville, Y. (2018). DiceKriging: Kriging Methods for Computer Experiments. R package version 1.5.6.
Ranjan, P., Bingham, D., and Michailidis, G. (2008). “Sequential experiment design for contour estimation from complex computer codes,”Technometrics, 50(4), 527–541. Errata (2011), Technometrics, 53:1, 109-110.
Rasmussen, C.E. and Williams, C.K.I. (2006). Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA.
Santner, T.J., Williams, B. J., and Notz, W. I. (2003). The Design and Analysis of Computer Experiments. New York: Springer.
Sacks, J., Schiller, S. B., and Welch W. J. (1989). “Designs for computer experiments,” Technometrics, 31, 41–47.
Sacks, J. and Schiller, S. (1988). Spatial designs. In Statistical Decision Theory and Related Topics IV, Vol. 2 (Gupta and Berger, eds.), 385–399, Springer-Verlag, New York.
Sacks, J., Welch, W. J., Mitchell, T.J., and Wynn, H.P. (1989). “Design and analysis of computer experiments,” Statistical Science, 409–423.
Shewry, M.C. and Wynn, H.P. (1987). “Maximum entropy sampling,” Journal of Applied Statistics, 14(2), 165-170.
Tang, B. (1993). “Orthogonal array-based Latin hypercubes,” Journal of the American Statistical Association, 88, 1392–1397.
Zhang, R., Lin, C.D., and Ranjan, P. (2018). “Local approximate Gaussian process model for large-scale dynamic computer experiments,” Journal of Computational and Graphical Statistics, 27, 798–807.