Abstract
We consider parallel global optimization of derivative-free, expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian optimization algorithm proposed by Ginsbourger et al. (2007). To accomplish this, we use infinitesimal perturbation analysis (IPA) to construct a stochastic gradient estimator and show that this estimator is unbiased. We also show that the stochastic gradient ascent algorithm using the constructed gradient estimator converges to a stationary point of the q-EI surface, and therefore, as the number of multiple starts of the gradient ascent algorithm and the number of steps for each start grow large, the one-step Bayes-optimal set of points is recovered. We show in numerical experiments that our method for maximizing the q-EI is faster than methods based on closed-form evaluation using high-dimensional integration, when considering many parallel function evaluations, and is comparable in speed when considering few. We also show that the resulting one-step Bayes-optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI. A high-quality implementation of this algorithm is available in the open-source Metrics Optimization Engine (MOE).
1 Introduction
We consider derivative-free global optimization of expensive functions, in which (1) our objective function is time-consuming to evaluate, limiting the number of function evaluations we can perform; (2) evaluating the objective function provides only the value of the objective, and not the gradient or Hessian; and (3) we seek a global, rather than a local, optimum. Such problems arise when the objective function is evaluated by running a complex computer code (see, e.g., Sacks et al. 1989), performing a laboratory experiment, or building a prototype system to be evaluated in the real world. In this paper we assume our function evaluations are deterministic, i.e., free from noise.
Bayesian Global Optimization (BGO) methods are one class of methods for solving such problems. They were initially proposed by Kushner (1964), with early work pursued in Mockus et al. (1978) and Mockus (1989), and more recent work including improved algorithms (Boender and Kan 1987, Jones et al. 1998, Huang et al. 2006), convergence analysis (Calvin 1997, Calvin and Žilinskas 2002, Vazquez and Bect 2010), and allowing noisy function evaluations (Calvin et al. 2005, Villemonteix et al. 2009, Frazier et al. 2009, Huang et al. 2006).
The most well-known BGO method is Efficient Global Optimization (EGO) from Jones et al. (1998), which chooses each point at which to evaluate the expensive objective function in the “outer” expensive global optimization problem by solving an “inner” optimization problem: maximize the “expected improvement”. Expected improvement is the value of information (Howard 1966) from a single function evaluation, and quantifies the benefit that this evaluation provides in terms of revealing a point with a better objective function value than previously known. If this is the final point that will be evaluated in the outer optimization problem, and if additional conditions are satisfied (the evaluations are free from noise, and the implementation decision, i.e., the solution that will be implemented in practice after the optimization is complete, is restricted to be a previously evaluated point), then the point with largest expected improvement is the Bayes-optimal point to evaluate, in the sense of providing the best possible average-case performance in the outer expensive global optimization problem (Frazier and Wang 2016).
Solving EGO’s inner optimization problem is facilitated by an easy-to-compute and differentiate expression for the expected improvement in terms of the scalar normal cumulative distribution function. Fast evaluation of the expected improvement and its gradient make it possible in many applications to solve the inner optimization problem in significantly less time than the time required per evaluation of the expensive outer objective, which is critical to EGO’s usefulness as an optimization algorithm.
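For reference, this scalar expected improvement has the well-known closed form EI(x) = (μ − f*) Φ(z) + σ φ(z) with z = (μ − f*)/σ, where μ and σ are the posterior mean and standard deviation at x and f* is the best value observed so far. A minimal sketch for the maximization setting (the function name is ours, not from any particular package):

```python
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement at a single candidate point.

    mu, sigma: posterior mean and standard deviation of f at the point.
    f_best: best (largest) objective value observed so far (maximization).
    """
    if sigma <= 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    nd = NormalDist()
    return (mu - f_best) * nd.cdf(z) + sigma * nd.pdf(z)
```

The gradient with respect to the candidate point is similarly available in closed form via the chain rule, which is what makes the inner problem cheap in the single-point case.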
The inner optimization problem at the heart of EGO and its objective, the expected improvement, was generalized by Ginsbourger et al. (2007) to the parallel setting, in which the expensive objective can be evaluated at several points simultaneously. This generalization, called the “multi-points expected improvement” or the q-EI, is consistent with the decision-theoretic derivation of expected improvement and quantifies the expected utility that will result from the evaluation of a set of points. This work also provided an analytical formula for the q-EI when q = 2.
If this generalized inner optimization problem, which is to find the set of points to evaluate next that jointly maximize the q-EI, could be solved efficiently, then this would provide the one-step Bayes-optimal set of points to evaluate in the outer problem, and would create a one-step Bayes-optimal algorithm for global optimization of expensive functions able to fully utilize parallelism.
This generalized inner optimization problem is challenging, however, because unlike the scalar expected improvement used by EGO, the q-EI lacks an easy-to-compute and differentiate expression, and is calculable only through Monte Carlo simulation, high-dimensional numerical integration, or expressions involving high-dimensional multivariate normal cumulative distribution functions (CDFs). This significantly restricts the set of applications in which a naive implementation can solve the inner problem faster than a single evaluation of the outer optimization problem. Stymied by this difficulty, Ginsbourger et al. (2007) and later work (Chevalier and Ginsbourger 2013) propose heuristic methods that are motivated by the one-step optimal algorithm of evaluating the set of points that jointly maximize the q-EI, but that do not actually achieve this gold standard.
Contributions
The main contribution of this work is to provide a method that solves the inner optimization problem of maximizing the q-EI efficiently, creating a practical and broadly applicable one-step Bayes-optimal algorithm for parallel global optimization of expensive functions. To accomplish this we use infinitesimal perturbation analysis (IPA) (Ho 1987) to construct a stochastic estimator of the gradient of the q-EI surface, and show that this estimator is unbiased, with a bounded second moment. Our method uses this estimator within a stochastic gradient ascent algorithm, which we show converges to the set of stationary points of the q-EI surface. We use multiple restarts to identify multiple stationary points, and then select the best stationary point found. As the number of restarts and the number of iterations of stochastic gradient ascent within each restart both grow large, the one-step optimal set of points to evaluate is recovered. Our method can be implemented in both synchronous environments, in which function evaluations are performed in batches and finish at the same time, and asynchronous ones, in which a function evaluation may finish before others are done.
In addition to our methodological contribution, we have developed a high-quality open source software package, the “Metrics Optimization Engine (MOE)” (Clark et al. 2014), implementing our method for solving the inner optimization problem and the resulting algorithm for parallel global optimization of expensive functions. To further enhance computational speed, the implementation takes advantage of parallel computing and achieves 100X speedup over single-threaded computation when deployed on a graphical processing unit (GPU). This software package has been used by Yelp and Netflix to solve global optimization problems arising in their businesses (Clark 2014, Amatriain 2014). For the rest of the paper, we refer to our method as “MOE-qEI” because it is implemented in MOE.
We compare MOE-qEI against several benchmark methods. We show that MOE-qEI provides high-quality solutions to the outer optimization problem using fewer function evaluations than the heuristic CL-mix policy proposed by Chevalier and Ginsbourger (2013), which is motivated by the inner optimization problem. We also show that MOE-qEI provides a substantial parallel speedup over the single-threaded EGO algorithm, which is one-step optimal when parallel resources are unavailable. We also compare our simulation-based method for solving the inner optimization problem against methods based on exact evaluation of the q-EI from Chevalier and Ginsbourger (2013) and Marmin et al. (2015) (discussed in more detail below) and show that our simulation-based approach provides solutions to both the inner and outer optimization problems that are comparable in quality and speed when q is small, and superior when q is large.
Related Work
Developed independently and in parallel with our work are Chevalier and Ginsbourger (2013), which provides a closed-form formula for computing the q-EI, and the book chapter Marmin et al. (2015), which provides a closed-form expression for its gradient. Both require multiple calls to high-dimensional multivariate normal CDFs. These expressions can be used within an existing continuous optimization algorithm to solve the inner optimization problem that we consider.
While attractive in that they provide closed-form expressions, calculating these expressions when q is even moderately large is slow and numerically challenging. This is because calculating the multivariate normal CDF in moderately large dimension is itself challenging, with state-of-the-art methods relying on numerical integration or Monte Carlo sampling as described in Genz (1992). Indeed, the method for evaluating the q-EI from Chevalier and Ginsbourger (2013) requires many evaluations of multivariate normal CDFs whose dimension grows with q, and the method for evaluating its gradient requires still more such calls. In our numerical experiments, we demonstrate that our method for solving the inner optimization problem requires less computation time and parallelizes more easily than do these competing methods when q is large, and performs comparably when q is smaller. We also demonstrate that MOE-qEI’s improved performance in the inner optimization problem for large q translates to improved performance in the outer optimization problem.
Other related work includes the previously proposed heuristic CL-mix from Chevalier and Ginsbourger (2013), which does not solve the inner maximization of the q-EI, instead using an approximation. While solving the inner maximization of the q-EI as we do makes it more expensive to compute the set of points to evaluate next, we show in our numerical experiments that it results in a substantial savings in the number of evaluations required to find a point with a desired quality. When function evaluations are expensive, this results in a substantial reduction in overall time to reach an approximately optimal solution.
In other related work on parallel Bayesian optimization, Frazier et al. (2011) and Xie et al. (2016) proposed a Bayesian optimization algorithm that evaluates pairs of points in parallel, and is one-step Bayes-optimal in the noisy setting under the assumption that one can only observe noisy function values for single points, or noisy function value differences between pairs of points. This algorithm, however, is limited to evaluating pairs of points, and does not extend to a higher level of parallelism.
There are also other non-Bayesian algorithms for derivative-free global optimization of expensive functions with parallel function evaluations from Dennis and Torczon (1991), Kennedy (2010) and Holland (1992). These are quite different in spirit from the algorithm we develop, not being derived from a decision-theoretic foundation.
Outline
We begin in Section 2 by describing the mathematical setting in which Bayesian global optimization is performed, and then defining the q-EI and the one-step optimal algorithm. We construct our stochastic gradient estimator in Section 3.2, and use it within stochastic gradient ascent to define a one-step optimal method for parallel Bayesian global optimization in Section 3.3. Then in Section 4.1 we show that the constructed gradient estimator of the q-EI surface is unbiased under mild regularity conditions, and in Section 4.2 we provide convergence analysis of the stochastic gradient ascent algorithm. Finally, in Section 5 we present numerical experiments: we compare MOE-qEI against previously proposed heuristics from the literature; we demonstrate that MOE-qEI provides a speedup over single-threaded EGO; we show that MOE-qEI is more efficient than optimizing evaluations of the q-EI using the closed-form formula provided in Chevalier and Ginsbourger (2013) when q is large; and we show that MOE-qEI computes the gradient of the q-EI faster than evaluating the closed-form expression proposed in Marmin et al. (2015).
2 Problem formulation and background
In this section, we describe a decision-theoretic approach to Bayesian global optimization in parallel computing environments, previously proposed by Ginsbourger et al. (2007). This approach was considered to be purely conceptual as it contains a difficult-to-solve optimization subproblem (our so-called “inner” optimization problem). In this section, we present this inner optimization problem as background, and present a novel method in the subsequent section that solves it efficiently.
2.1 Bayesian Global Optimization
Bayesian global optimization considers optimization of a function f with compact domain A ⊂ R^d. The overarching goal is to find an approximate solution to

max_{x ∈ A} f(x).
We suppose that evaluating f is expensive or time-consuming, and that these evaluations provide only the value of f at the evaluated point and not its gradient or Hessian. We assume that the function defining the domain A is easy to evaluate and that projection from R^d onto the nearest point in A can be performed quickly.
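For example, when the domain is a box, this nearest-point projection is an exact coordinate-wise clip; a sketch under that assumption (the function name is ours):

```python
import numpy as np

def project_to_box(x, lower, upper):
    """Euclidean nearest-point projection onto the box {z : lower <= z <= upper}.

    For a box domain the projection decomposes coordinate-wise into a clip.
    """
    return np.minimum(np.maximum(np.asarray(x, dtype=float), lower), upper)
```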
Rather than focusing on asymptotic performance as the number of function evaluations grows large, we wish to find an algorithm that performs well, on average, given a limited budget of function evaluations. To formalize this, we model our prior beliefs on f with a Bayesian prior distribution, and we suppose that f was drawn at random by nature from this prior distribution, before any evaluations were performed. We then seek to develop an optimization algorithm that will perform well, on average, when applied to a function drawn at random in this way.
2.2 Gaussian process priors
For our Bayesian prior distribution on f, we adopt a Gaussian process prior (see Rasmussen and Williams 2006), which is specified by its mean function μ_0 and positive semidefinite covariance function Σ_0. We write the Gaussian process as f ∼ GP(μ_0, Σ_0).
Then for a collection of points x_{1:k} = (x_1, …, x_k), the prior distribution of f at x_{1:k} is

f(x_{1:k}) ∼ N(μ_0(x_{1:k}), Σ_0(x_{1:k}, x_{1:k})),   (1)

where μ_0(x_{1:k}) = (μ_0(x_1), …, μ_0(x_k)) and Σ_0(x_{1:k}, x_{1:k}) is the k × k matrix with (i, j)th entry Σ_0(x_i, x_j).
Our proposed method for choosing the points to evaluate next additionally requires that μ_0 and Σ_0 satisfy some mild regularity assumptions discussed below, but otherwise adds no additional requirements. In practice, μ_0 and Σ_0 are typically chosen using an empirical Bayes approach discussed in Brochu et al. (2010), in which first, a parameterized functional form for μ_0 and Σ_0 is assumed; second, a first stage of data is collected in which f is evaluated at points chosen according to a Latin hypercube or uniform design; and third, maximum likelihood estimates for the parameters specifying μ_0 and Σ_0 are obtained. In some implementations (Jones et al. 1998, Snoek et al. 2012), these estimates are updated iteratively as more evaluations of f are obtained, which provides more accurate inference and tends to reduce the number of function evaluations required to find good solutions but increases the computational overhead per evaluation. We adopt this method below in our numerical experiments in Section 5. However, the specific contribution of this paper, a new method for solving an optimization subproblem arising in the choice of design points, works with any choice of mean function and covariance function, as long as they satisfy the mild regularity conditions discussed below.
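As a concrete illustration of one common choice, a squared exponential covariance with constant mean, together with the standard Gaussian process posterior update (Rasmussen and Williams 2006), might be sketched as follows (helper names and hyperparameter defaults are ours):

```python
import numpy as np

def sq_exp_cov(X1, X2, alpha=1.0, ell=1.0):
    """Squared exponential covariance matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return alpha * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X_new, X_obs, y_obs, mean=0.0, alpha=1.0, ell=1.0):
    """Posterior mean vector and covariance matrix at X_new given (X_obs, y_obs)."""
    K_oo = sq_exp_cov(X_obs, X_obs, alpha, ell)
    K_no = sq_exp_cov(X_new, X_obs, alpha, ell)
    K_nn = sq_exp_cov(X_new, X_new, alpha, ell)
    mu_n = mean + K_no @ np.linalg.solve(K_oo, y_obs - mean)
    Sigma_n = K_nn - K_no @ np.linalg.solve(K_oo, K_no.T)
    return mu_n, Sigma_n
```

With noise-free observations, the posterior interpolates the data: the posterior mean at an observed point equals its observed value, and the posterior variance there is zero.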
In addition to the prior distribution specified in (1), we may also have some previously observed function values y_i = f(x̃_i), for i = 1, …, n. These might have been obtained through the previously mentioned first stage of sampling, running the second stage sampling method we are about to describe, or from some additional runs of the expensive objective function performed by another party outside of the control of our algorithm. If no additional function values are available, we set n = 0. We define notation x̃_{1:n} = (x̃_1, …, x̃_n) and y_{1:n} = (y_1, …, y_n). We require that all points in x̃_{1:n} be distinct.
We then combine these previously observed function values with our prior to obtain a posterior distribution on f. This posterior distribution is still a multivariate normal (e.g., see Eq. (A.6) on pp. 200 in Rasmussen and Williams 2006)

f(x_{1:q}) | f(x̃_{1:n}) = y_{1:n} ∼ N(μ_n(x_{1:q}), Σ_n(x_{1:q}, x_{1:q})),   (2)

with

μ_n(x_{1:q}) = μ_0(x_{1:q}) + Σ_0(x_{1:q}, x̃_{1:n}) Σ_0(x̃_{1:n}, x̃_{1:n})^{-1} (y_{1:n} − μ_0(x̃_{1:n})),
Σ_n(x_{1:q}, x_{1:q}) = Σ_0(x_{1:q}, x_{1:q}) − Σ_0(x_{1:q}, x̃_{1:n}) Σ_0(x̃_{1:n}, x̃_{1:n})^{-1} Σ_0(x̃_{1:n}, x_{1:q}),   (3)

where μ_0(x̃_{1:n}) is the vector obtained by evaluating the prior mean function at each point in x̃_{1:n}, Σ_0(x_{1:q}, x̃_{1:n}) is a q × n matrix with (i, j)th entry Σ_0(x_i, x̃_j), and similarly for Σ_0(x̃_{1:n}, x̃_{1:n}), Σ_0(x̃_{1:n}, x_{1:q}), and Σ_0(x_{1:q}, x_{1:q}).

2.3 Multi-points expected improvement (q-EI)
In a parallel computing environment, we wish to use this posterior distribution to choose the set of points to evaluate next. Ginsbourger et al. (2007) proposed making this choice using a decision-theoretic approach that considers the utility provided by evaluating a particular candidate set of points in terms of their ability to reveal better solutions than previously known. We review this decision-theoretic approach here, and then present a new algorithm for implementing this choice in the next section.
Let q be the number of function evaluations that we may perform in parallel, and let x_{1:q} = (x_1, …, x_q) be a candidate set of points that we are considering evaluating next. Let f*_n indicate the value of the best point evaluated, before beginning these new function evaluations. The value of the best point evaluated after all q function evaluations are complete will be max(f*_n, f(x_1), …, f(x_q)). The difference between these two values (the values of the best point evaluated, before and after these new function evaluations) is called the improvement, and is equal to (max_{i=1,…,q} f(x_i) − f*_n)^+, where a^+ = max(a, 0) for a ∈ R.
We then compute the expectation of this improvement over the joint posterior distribution of f(x_1), …, f(x_q), and we refer to this quantity as the multi-points expected improvement or q-EI from Ginsbourger et al. (2007). This multi-points expected improvement can be written as

q-EI(x_{1:q}) = E_n[(max_{i=1,…,q} f(x_i) − f*_n)^+],   (4)

where E_n is the expectation taken with respect to the posterior distribution.
Ginsbourger et al. (2007) then proposes evaluating next the set of points that maximizes the multi-points expected improvement,

max_{x_{1:q} ∈ A_δ} q-EI(x_{1:q}),   (5)

where A_δ denotes the set of candidate batches x_{1:q} whose points lie in the domain A and are at distance at least δ from each other and from the previously evaluated points x̃_{1:n}.
This formulation generalizes Ginsbourger et al. (2007) slightly by allowing an optional requirement that new evaluation points be a distance of at least δ from each other and from previously evaluated points. Ginsbourger et al. (2007) implicitly set δ = 0. Our convergence proof requires δ > 0, which provides a compact feasible domain over which the stochastic gradient estimator has bounded variance. Setting a strictly positive δ can also improve numerical stability in inference (see, e.g., Ababou et al. 1994), and evaluating a point extremely close to a previously evaluated point is typically unlikely to provide substantial improvement in the revealed objective value. In our experiments we set δ to a small strictly positive value.

In the special case q = 1, which occurs when we are operating without parallelism, the multi-points expected improvement reduces to the expected improvement (Mockus 1989, Jones et al. 1998), which can be evaluated in closed form in terms of the normal density and CDF as discussed in Section 1. Ginsbourger et al. (2007) provided an analytical expression for the q-EI when q = 2, but in the same paper the authors commented that computing the q-EI for larger q involves expensive-to-compute multivariate Gaussian cumulative distribution functions relying on multivariate integral approximation, which makes solving (5) difficult. Ginsbourger (2009) writes “directly optimizing the q-EI becomes extremely expensive as q and d (the dimension of inputs) grow."
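Although closed-form evaluation is expensive for larger q, the expectation in (4) is straightforward to estimate by Monte Carlo, sampling f(x_1), …, f(x_q) jointly from the posterior. A sketch (names are ours; for q = 1 the estimate can be checked against the closed-form expected improvement):

```python
import numpy as np

def qei_monte_carlo(mu_n, Sigma_n, f_best, n_samples=10000, seed=0):
    """Monte Carlo estimate of the multi-points expected improvement.

    mu_n: posterior mean vector at the candidate batch, length q.
    Sigma_n: posterior covariance matrix at the candidate batch, q x q.
    f_best: best objective value observed so far.
    """
    rng = np.random.default_rng(seed)
    q = len(mu_n)
    # A small jitter keeps the Cholesky factorization numerically stable.
    C = np.linalg.cholesky(Sigma_n + 1e-12 * np.eye(q))
    Z = rng.standard_normal((n_samples, q))
    samples = mu_n + Z @ C.T                     # joint posterior draws
    improvement = np.maximum(samples.max(axis=1) - f_best, 0.0)
    return float(improvement.mean())
```

For q = 1 with zero mean, unit variance, and best value 0, the true expected improvement is φ(0) ≈ 0.3989, which the estimate recovers to within Monte Carlo error.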
3 Algorithm
In this section we present a new algorithm for solving the inner optimization problem (5) of maximizing the q-EI. This algorithm uses a novel estimator of the gradient of the q-EI presented in Section 3.2, used within a multistart stochastic gradient ascent framework as described in Section 3.3. We additionally generalize this technique from synchronous to asynchronous parallel optimization in Section 3.4. We begin by introducing additional notation used to describe our algorithm.
3.1 Notation
In this section we define additional notation to better support construction of the gradient estimator. Justified by (2), we write f(x_{1:q}) as

f(x_{1:q}) = μ_n(x_{1:q}) + C(x_{1:q}) Z,   (6)

where C(x_{1:q}) is the lower triangular matrix obtained from the Cholesky decomposition of the posterior covariance matrix in (2), μ_n(x_{1:q}) is the posterior mean (identical to that in (2), but rewritten here to emphasize the dependence on x_{1:q}), and Z is a q-dimensional standard normal random vector. We will use this representation in place of (2) in our analysis.
3.2 Constructing the gradient estimator
We now construct our estimator of the gradient ∇q-EI(x_{1:q}). Let

h(x_{1:q}, Z) = (max_{i=1,…,q} (μ_n(x_{1:q}) + C(x_{1:q}) Z)_i − f*_n)^+.   (10)

Then

q-EI(x_{1:q}) = E_n[h(x_{1:q}, Z)].   (11)

If gradient and expectation in (11) are interchangeable, the gradient would be

∇q-EI(x_{1:q}) = E_n[g(x_{1:q}, Z)],   (12)

where

g(x_{1:q}, Z) = ∇h(x_{1:q}, Z),   (13)

in which the derivative ∇C(x_{1:q}) appearing in ∇h can be computed using results on differentiation of the Cholesky decomposition from Smith (1995).
We use the quantity defined in (13) as our estimator of the gradient of the q-EI, and will discuss interchangeability of gradient and expectation, which implies unbiasedness of our gradient estimator, in Section 4.1. As will be discussed in Section 4.2, unbiasedness of the gradient estimator is one of the sufficient conditions for convergence of the stochastic gradient ascent algorithm proposed in Section 3.3.
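To make the structure of this estimator concrete: given the derivatives of the posterior mean and of the Cholesky factor with respect to a generic parameter vector, the estimator differentiates the affine map inside the positive part at the maximizing index, and is zero when there is no improvement. A self-contained sketch under these assumptions (the caller supplies the derivative tensors, e.g., obtained via Smith (1995) for the Cholesky factor; all names are ours):

```python
import numpy as np

def ipa_gradient(mu, C, dmu, dC, Z, f_best):
    """IPA (pathwise) estimate of the gradient of the improvement h.

    mu: posterior mean at the batch, shape (q,).
    C: lower-triangular Cholesky factor of the posterior covariance, (q, q).
    dmu: derivative of mu w.r.t. a parameter vector theta, shape (q, p).
    dC: derivative of C w.r.t. theta, shape (q, q, p).
    Z: standard normal sample, shape (q,).
    f_best: best objective value observed so far.
    Returns the gradient of (max_i (mu + C @ Z)_i - f_best)^+ w.r.t. theta.
    """
    values = mu + C @ Z
    i_star = int(np.argmax(values))
    if values[i_star] <= f_best:
        return np.zeros(dmu.shape[1])      # no improvement: zero gradient
    # Differentiate theta -> mu_{i*}(theta) + (C(theta) @ Z)_{i*}.
    return dmu[i_star] + np.einsum('kp,k->p', dC[i_star], Z)
```

For the q-EI itself, theta would be the flattened batch x_{1:q}, with dmu and dC obtained by differentiating (3) and its Cholesky factorization.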
3.3 Optimization of q-EI
Our stochastic gradient ascent algorithm begins with some initial point x_{1:q}^{(0)}, and generates a sequence (x_{1:q}^{(t)} : t = 1, 2, …) using

x_{1:q}^{(t+1)} = Π(x_{1:q}^{(t)} + ε_{t+1} G(x_{1:q}^{(t)})),   (14)

where Π(z) denotes the closest point in the feasible domain to z, and if the closest point is not unique, a closest point chosen such that the function Π is measurable. G(x_{1:q}^{(t)}) is an estimate of the gradient of the q-EI at x_{1:q}^{(t)}, obtained by averaging M replicates of our stochastic gradient estimator,

G(x_{1:q}^{(t)}) = (1/M) ∑_{m=1}^{M} g(x_{1:q}^{(t)}, Z_m),   (15)

where {Z_m : m = 1, …, M} are i.i.d. samples generated from the multivariate standard normal distribution and g is defined in (13). (ε_t : t = 1, 2, …) is a stochastic gradient stepsize sequence (Kushner and Yin 2003), typically chosen to be equal to a/t^γ for some scalar a > 0 and γ ∈ (0, 1]. Because we use Polyak-Ruppert averaging as described below, we set γ strictly smaller than 1. Analysis in Section 4 shows that, under certain mild conditions, this stochastic gradient algorithm converges almost surely to the set of stationary points.

After running T iterations of stochastic gradient ascent using (14), we obtain the sequence x_{1:q}^{(1)}, …, x_{1:q}^{(T)}. From this sequence we extract the average (1/T) ∑_{t=1}^{T} x_{1:q}^{(t)} and use it as an estimated stationary point. This Polyak-Ruppert averaging approach (Polyak 1990, Ruppert 1988) is more robust to misspecification of the stepsize sequence than using the final iterate x_{1:q}^{(T)} directly.
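The recursion (14)-(15) with Polyak-Ruppert averaging can be sketched on a generic noisy gradient oracle; here a toy concave objective on a box stands in for the q-EI surface, and all names and parameter values are illustrative only:

```python
import numpy as np

def sga_polyak_ruppert(grad_estimate, x0, lower, upper,
                       a=0.5, gamma=0.7, T=2000, seed=0):
    """Projected stochastic gradient ascent with Polyak-Ruppert averaging.

    grad_estimate(x, rng) returns a noisy, unbiased estimate of the
    gradient at x; the iterate is projected onto the box [lower, upper]
    after each step, and the average of all iterates is returned.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    running_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        x = x + (a / t**gamma) * grad_estimate(x, rng)
        x = np.clip(x, lower, upper)   # projection onto the box domain
        running_sum += x
    return running_sum / T             # Polyak-Ruppert average

# Toy stand-in for the q-EI surface: concave objective -|x - c|^2 whose
# gradient estimate is corrupted by standard normal noise.
c = np.array([0.3, 0.7])
noisy_grad = lambda x, rng: -2.0 * (x - c) + rng.standard_normal(x.shape)
x_bar = sga_polyak_ruppert(noisy_grad, [0.0, 0.0], 0.0, 1.0)
```

Despite the gradient noise, the averaged iterate lands near the maximizer c, illustrating why averaging tolerates a slowly decreasing stepsize.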
To find the global maximum of the q-EI, we use multiple restarts of the algorithm from a set of starting points, drawn from a Latin hypercube design (McKay et al. 2000), to find multiple stationary points, and then use simulation to evaluate the q-EI at these stationary points and select the point for which it is largest. For simplicity we present our approach using a fixed sample size to perform this evaluation and selection (Step 8 in Algorithm 1 below), but one could also use a more sophisticated ranking and selection algorithm with adaptive sample sizes (see, e.g., Kim and Nelson 2007), or evaluate the q-EI using the closed-form formula in Chevalier and Ginsbourger (2013). We summarize our procedure for selecting the set of points to sample next, which we call MOE-qEI, in Algorithm 1.
The MOE software package (Clark et al. 2014) implements Algorithm 1, and supplies the following additional optional fallback logic. If multistart stochastic gradient ascent fails to find a point whose estimated expected improvement exceeds a threshold, then it generates additional candidate solutions from a Latin hypercube on the domain, estimates the q-EI at each of these using the same Monte Carlo approach as in Step 8, and selects the one with the largest estimated q-EI. This logic takes two additional parameters: a strictly positive real number (the threshold) and an integer (the number of additional candidates). We turn this logic off in our experiments.
3.4 Asynchronous parallel optimization
So far we have assumed synchronous parallel optimization, in which we wait for all points to finish before choosing a new set of points. However, in some applications, we may wish to generate a new partial batch of points to evaluate next while points are still being evaluated, before we have their values. This is common in expensive computer evaluations, which do not necessarily finish at the same time.
We can extend Algorithm 1 to solve an extension of (5) proposed by Ginsbourger et al. (2010) for asynchronous parallel optimization: suppose parallelization allows a batch of q points to be evaluated simultaneously; the first p points are still under evaluation, while the remaining q − p points have finished evaluation and the resources used to evaluate them are free to evaluate new points. We let x_{1:p} be the p points still under evaluation, and let x_{p+1:q} be the q − p points ready for new evaluations. Computation of the q-EI for these q points remains the same as in (4), but we use an alternative notation, q-EI(x_{1:p}, x_{p+1:q}), to explicitly indicate that x_{1:p} are the points still being evaluated and x_{p+1:q} are the new points to evaluate. Keeping x_{1:p} fixed, we optimize the q-EI over x_{p+1:q} by solving this alternative problem

max_{x_{p+1:q}} q-EI(x_{1:p}, x_{p+1:q}),   (16)

where x_{p+1:q} ranges over the feasible domain of (5). As we did in the algorithm for synchronous parallel optimization in Section 3.3, we estimate the gradient of the objective function with respect to x_{p+1:q}. The gradient estimator is essentially the same as that in Section 3.2, except that we only differentiate with respect to x_{p+1:q}. Then we proceed according to Algorithm 1.
In practice, one typically solves (16) as soon as resources free up. This is because Bayesian optimization procedures are used most frequently when function evaluation times are large, and asynchronous computing environments typically have a time between evaluation completions that increases with the evaluation time. When this inter-completion time is large relative to the time required to solve (16), it is typically better to solve (16) each time an evaluation completes, i.e., to generate one new point at a time, rather than waiting and letting a machine sit idle. If the time to perform a function evaluation is small enough, or if the computing environment is especially homogeneous, or if q is large (shortening the inter-completion time), then the time between completions might be smaller than the time to solve (16) and one might wish to wait for several completions before generating new points.
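In an implementation, reusing the synchronous machinery for the asynchronous case amounts to holding the coordinates of points still under evaluation fixed, i.e., masking them out of the ascent step. A minimal sketch (names and shapes are ours):

```python
import numpy as np

def masked_ascent_step(x_batch, grad_batch, free, step):
    """One ascent step that moves only the points free for re-evaluation.

    x_batch: current batch of q points, shape (q, d).
    grad_batch: gradient estimate for the whole batch, shape (q, d).
    free: boolean array of length q; True marks points whose evaluation
          has finished and which may therefore be moved.
    """
    x_new = x_batch.copy()
    x_new[free] += step * grad_batch[free]   # points under evaluation stay fixed
    return x_new
```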
4 Theoretical analysis
In Section 3, when we constructed our gradient estimator and described the use of stochastic gradient ascent to optimize the q-EI, we alluded to conditions under which this gradient estimator is unbiased and this stochastic gradient ascent algorithm converges to the set of stationary points of the q-EI surface. In this section, we describe these conditions and state these results.
4.1 Unbiasedness of the gradient estimator
We now state our main theorem showing unbiasedness of the gradient estimator. Proofs of all results including supporting lemmas are available as supplemental material.
Theorem 1
If μ_n(·) and C(·) are continuously differentiable in a neighborhood of x_{1:q}, and x_{1:q} has no duplicate rows, then g(x_{1:q}, Z) exists almost surely and E_n[g(x_{1:q}, Z)] = ∇q-EI(x_{1:q}).
Theorem 1 requires continuous differentiability of C(·), which may seem difficult to verify. However, Smith (1995) shows that kth-order differentiability of a symmetric and nonnegative definite matrix implies kth-order differentiability of the lower triangular matrix obtained from its Cholesky factorization, so C(·) has the same order of differentiability as Σ_n, whose order of differentiability can in turn be verified by examination of the prior covariance function Σ_0. In addition, when Σ_n(x_{1:q}, x_{1:q}) is positive definite, x_{1:q} will not have duplicate rows. We will use these facts below in Corollary 1, after first discussing convergence, to provide easy-to-verify conditions under which unbiasedness and convergence to the set of stationary points hold.
4.2 Convergence analysis
In this section, we show almost sure convergence of our proposed stochastic gradient ascent algorithm. We assume that the feasible domain of (5) is compact and can be written as the intersection of a finite collection of smooth inequality constraints. Then (5) can be written in a form more convenient for analysis,

max { q-EI(u) : c_i(u) ≥ 0, i = 1, 2, … },

where u = (u_1, …, u_q) ∈ R^{qd} with u_i being the ith point in x_{1:q}, and the constraint functions c_i encode the domain A and the minimum separation δ present in (5).
The following theorem shows that Algorithm 1 converges to the set of stationary points under conditions that include those of Theorem 1. The proof is available as supplemental material.
Theorem 2
Suppose the following assumptions hold:

1. The constraint functions c_i are continuously differentiable.

2. The stepsizes satisfy ε_t > 0 for all t, ∑_t ε_t = ∞, and ∑_t ε_t^2 < ∞.

3. μ_n and Σ_n are twice continuously differentiable, and Σ_n(x_{1:q}, x_{1:q}) is positive definite.

Then the sequence (x_{1:q}^{(t)} : t = 1, 2, …) and its Polyak-Ruppert average generated by algorithm (14) converge almost surely to a connected set of stationary points of the q-EI surface.
The following corollary of Theorem 2 uses conditions that can be more easily checked prior to running MOE-qEI. It requires that the sampled points are distinct, which can be made true by dropping duplicate samples. Since function evaluations are deterministic, no information is lost in doing so.
Corollary 1
If the sampled points x̃_{1:n} are distinct and

1. the prior covariance function Σ_0 is positive definite and twice differentiable,

2. the prior mean function μ_0 is twice differentiable,

3. conditions 1 and 2 in Theorem 2 are met,

then x_{1:q}^{(t)} and its Polyak-Ruppert average converge to a connected set of stationary points.
Proof of Corollary 1
Since the sampled points x̃_{1:n} are distinct, the points in x_{1:q} are separated by at least δ, and the prior covariance function Σ_0 is positive definite and twice continuously differentiable, the matrices Σ_0(x̃_{1:n}, x̃_{1:n}), Σ_0(x_{1:q}, x̃_{1:n}), and Σ_0(x_{1:q}, x_{1:q}) appearing in (3) are all twice continuously differentiable, and Σ_0(x̃_{1:n}, x̃_{1:n}) is positive definite. Since the prior mean function μ_0 is also twice continuously differentiable, it follows that μ_n and Σ_n defined in (3) are twice continuously differentiable, and in addition, Σ_n is positive definite. Thus the conditions of Theorem 2 are verified, and its conclusion holds.
5 Numerical results
In this section, we present numerical experiments demonstrating the performance of MOE-qEI. The implementation of MOE-qEI follows Algorithm 1, and is available in the open source software package “MOE” (Clark et al. 2014).
We first discuss the choice of constants in Algorithm 1: the number of starting points, the number of steps of stochastic gradient ascent, the stepsize parameters a and γ, the number of Monte Carlo samples M in each gradient estimate, and the number of Monte Carlo samples used to estimate the q-EI at the limiting solutions.

- Number of starting points: this should be larger than, and of the same order as, the number of equivalence classes of stationary points of the q-EI surface, where we identify two stationary points as being in the same class if they can be obtained from each other by permuting x_1, …, x_q. (The q-EI is symmetric, and such permutations do not change its value.) However, we do not know the number of such equivalence classes, and their number tends to grow with q as the surface grows more modes. Setting the number of starting points larger increases our chance of finding the global maximum but increases computation. In our numerical experiments, we chose it to balance this tradeoff between runtime and solution quality. As a diagnostic, one can check whether it is large enough by checking the number of unique solutions obtained; if we obtain the same solution repeatedly from multiple restarts, this suggests the number of starting points is large enough.

- Number of steps of stochastic gradient ascent, T, and stepsize sequence parameters a and γ: For simplicity, we used a single setting of a in all experiments. We set γ significantly below 1, to ensure that the stepsize sequence decreases slowly, as is recommended when using Polyak-Ruppert averaging. We then plotted the q-EI and the norm of the gradient from stochastic gradient ascent versus the iteration number for a few sample problems. Finding that convergence occurred well before the 100th iterate, we set T accordingly. As a diagnostic, one may also assess convergence by evaluating the gradient using a large number of Monte Carlo samples at the final iterate and checking that its norm is close to zero.

- Number of Monte Carlo samples M in each gradient estimate: this determines the accuracy of the gradient estimate and therefore affects stochastic gradient ascent’s convergence. We fixed M, defined in Section 3.3, and performed an experiment to justify this setting. While we ran our experiments on a CPU, except where otherwise stated, to ensure a fair comparison with other competing algorithms, for which a GPU implementation is not available, the “MOE” software package provides a GPU implementation that can be used to increase the amount of parallelism used in MOE-qEI. When using the GPU implementation, we recommend setting M larger, because the GPU’s parallelism makes averaging a large number of independent replicates fast, and the reduction in noise reduces the number of iterates needed for convergence by stochastic gradient ascent.

Number of Monte Carlo samples for estimating qEI: we estimate the qEI at a limiting solution only once per restart, so setting this parameter large introduces little computational overhead. We set it large enough to ensure an essentially noise-free selection of the best of the limiting solutions, and we assess this choice by examining the standard error of our estimates of the qEI.
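Such an estimate and its standard error can be sketched as follows; the function and its parameter names are our own, and we assume a minimization problem, as in the paper's experiments:

```python
import numpy as np

def qei_monte_carlo(mu, Sigma, best_so_far, n_samples=10_000, rng=None):
    """Monte Carlo estimate of qEI with a standard error (sketch).

    mu (q,) and Sigma (q, q) are the posterior mean and covariance of
    the objective at the q candidate points; best_so_far is the smallest
    value observed so far.  Returns (estimate, standard_error).
    """
    rng = rng if rng is not None else np.random.default_rng()
    L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(len(mu)))
    z = rng.standard_normal((n_samples, len(mu)))
    samples = mu + z @ L.T                    # draws from N(mu, Sigma)
    improvement = np.maximum(best_so_far - samples.min(axis=1), 0.0)
    return improvement.mean(), improvement.std(ddof=1) / np.sqrt(n_samples)

# Usage: q = 2 candidate points with correlated posterior values.
mu = np.array([0.0, 0.2])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
est, se = qei_monte_carlo(mu, Sigma, best_so_far=0.0,
                          rng=np.random.default_rng(1))
print(est > 0.0, se < 0.05)  # positive estimate, small standard error
```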
For the outer optimization of the objective function, we begin with a small dataset, typically sampled using a Latin hypercube design, to train the Gaussian process model described in Section 2.2. In our numerical experiments, we use a squared exponential kernel whose hyperparameters are estimated using an empirical Bayes approach: we set them to the values that maximize the log marginal likelihood of the observed data. With the trained Gaussian process model, we perform the inner optimization of MOE-qEI described in Algorithm 1 to find the batch of points to evaluate, and after evaluating them we update the hyperparameters as well as the Gaussian process model. We repeat this process over a number of iterations and report the best solution found at each iteration.

Noise-free function evaluations often lead to ill-conditioned covariance matrices in (2). To resolve this problem, we adopt a standard trick from Gaussian process regression (Rasmussen and Williams 2006, Section 3.4.3): we manually impose a small amount of noise and use Gaussian process regression designed for noisy settings, which is almost identical to (2) except that the covariance matrix is replaced by its sum with a small multiple of the identity matrix (Rasmussen and Williams 2006, Section 2.2).

5.1 Comparison on the outer optimization problem
Constant Liar is a heuristic algorithm motivated by (9), proposed by Ginsbourger et al. (2007), which uses a greedy approach to iteratively construct a batch of points. At each iteration of this greedy approach, the heuristic uses the sequential EGO algorithm to find a point that maximizes the expected improvement. However, since the posterior used by EGO depends on the current batch of points, which have not yet been evaluated, Constant Liar imposes a heuristic response (the “liar”) at this point and updates the Gaussian process model with this “liar” value. The algorithm stops once the batch is full and reports the batch for function evaluation.
Figure 1: regret vs. iteration for MOE-qEI (solid line) and CL-mix (dashed line); error bars show 95% confidence intervals obtained from 100 repeated experiments with different sets of initial points. MOE-qEI converges faster and to better solutions than the heuristic CL-mix at every level of parallelism considered.

There are three variants of Constant Liar (CL), which use three different strategies for choosing the liar value: CL-min sets the liar value to the minimum response observed so far; CL-max sets it to the maximum response observed so far; and CL-mix is a hybrid of the two, computing one set of points using CL-min, another using CL-max, and selecting whichever set has the higher qEI. Among the three, CL-mix was shown by Chevalier and Ginsbourger (2013) to have the best overall performance, and therefore we compare MOE-qEI against CL-mix.
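The Constant Liar loop can be sketched as below, where `fit_model` and `maximize_ei` are hypothetical stand-ins for the Gaussian process refit and the sequential EI maximization performed by EGO:

```python
import numpy as np

def constant_liar(X, y, q, liar_value, fit_model, maximize_ei):
    """Greedy Constant Liar batch construction (sketch).

    liar_value is, e.g., the minimum observed response for CL-min or the
    maximum for CL-max.  fit_model(X, y) -> model and
    maximize_ei(model) -> point are supplied by the caller.
    """
    X, y = list(X), list(y)
    batch = []
    for _ in range(q):
        model = fit_model(np.array(X), np.array(y))
        x_next = maximize_ei(model)
        batch.append(x_next)
        X.append(x_next)            # pretend we evaluated x_next ...
        y.append(liar_value)        # ... and observed the "liar" value
    return batch

# Toy usage with trivial stand-ins: the "EI maximizer" proposes the
# midpoint of the two best observed inputs, just to exercise the loop.
fit = lambda X, y: (X, y)
propose = lambda m: float(np.mean(np.asarray(m[0])[np.argsort(m[1])[:2]]))
batch = constant_liar([0.0, 1.0], [3.0, 1.0], q=3, liar_value=1.0,
                      fit_model=fit, maximize_ei=propose)
print(len(batch))  # 3
```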
We ran MOE-qEI and CL-mix on a range of standard test functions for global optimization (Jamil and Yang 2013): the 2-dimensional Branin2, the 3-dimensional Hartmann3, the 5-dimensional Ackley5, and the 6-dimensional Hartmann6. In each experiment, we first draw an initial set of points in the domain using a Latin hypercube design, with the number of points chosen according to the dimension of the objective function, and fit a Gaussian process model to these initial points. Thereafter, we let MOE-qEI and CL-mix optimize the test functions and report the regret at each iteration, noting that each of the problems considered is a minimization problem. We repeat the experiment 100 times using different initial sets of points and report the average performance of both algorithms in Figure 1. The results show that MOE-qEI consistently finds better solutions than the heuristic method on all four test functions.
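The initial design step can be sketched with SciPy's quasi-Monte Carlo module; the design size `2 * (d + 1)` is an illustrative choice, not necessarily the one used in our experiments:

```python
import numpy as np
from scipy.stats import qmc

d = 6                                         # e.g., Hartmann6
sampler = qmc.LatinHypercube(d=d, seed=0)
unit_design = sampler.random(n=2 * (d + 1))   # points in [0, 1]^d

# Scale to the test function's domain (Hartmann6 uses [0, 1]^6, so the
# scaling is the identity here; other domains need other bounds).
lower, upper = np.zeros(d), np.ones(d)
design = qmc.scale(unit_design, lower, upper)
print(design.shape)  # (14, 6)
```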
Next, we compare MOE-qEI and CL-mix at different levels of parallelism using the same experimental setup as above. The sequential EGO algorithm makes the same decisions as MOE-qEI when the batch size is 1, so this may also be seen as a comparison against EGO. Figure 2 shows that MOE-qEI achieves significant speedup over EGO as the batch size grows, indicating substantial potential time savings from using parallelization and MOE-qEI in Bayesian optimization tasks.
5.2 Comparison on the inner optimization problem
Chevalier and Ginsbourger (2013) provided a closed-form formula for qEI and argued that it computes qEI “very fast for reasonably low values of q (typically less than 10)”. The closed-form formula is provided below for reference, modified to use the notation of this paper. Recall (2), and consider the random vector of objective values at the batch points, with the posterior mean and covariance matrix given there. For each point in the batch, consider the vectors defined as follows:
Let and denote the mean and covariance matrix of , and define the vector by and if . Then the closed-form formula is
(17) 
where is as defined in Chevalier and Ginsbourger (2013).
This formula requires repeated calls to the multivariate normal CDF in dimensions that grow with the batch size. Since multivariate normal CDFs, which are often implemented with numerical integration or Monte Carlo sampling (Genz 1992), are expensive to evaluate even in moderate dimension, calculating this analytical formula quickly becomes slow and numerically challenging as the batch size grows.
While Chevalier and Ginsbourger (2013) did not propose using this closed-form formula to solve the inner optimization problem (5), one can adapt it to this purpose by using it within any derivative-free optimization method. We implemented this approach in the MOE package, using the L-BFGS solver (Liu and Nocedal 1989) from SciPy (Jones et al. 2001) as the derivative-free optimization solver. We call this approach “Benchmark 1”.
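The structure of Benchmark 1 can be sketched as follows, with a smooth toy surrogate standing in for the closed-form qEI formula so that the example runs; SciPy's L-BFGS-B approximates gradients by finite differences when none are supplied:

```python
import numpy as np
from scipy.optimize import minimize

q, d = 2, 2

def toy_qei(flat):
    """Stand-in for the closed-form qEI of a flattened (q, d) batch;
    maximized when every point sits at 0.5 in each coordinate."""
    return np.exp(-np.sum((flat.reshape(q, d) - 0.5) ** 2))

x0 = np.full(q * d, 0.2)                     # initial batch, flattened
res = minimize(lambda flat: -toy_qei(flat), x0, method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * (q * d))
batch = res.x.reshape(q, d)                  # the q points to evaluate
print(np.allclose(batch, 0.5, atol=1e-3))
```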
We compare MOE-qEI, CL-mix, and Benchmark 1 on the inner optimization problem, in terms of both solution quality and runtime, as shown in Figures 3 and 4. Unsurprisingly, MOE-qEI achieves the best solution quality of the three, and its runtime is almost comparable to that of CL-mix, which is expected to be the fastest approach because it sacrifices solution quality for speed. We ran Benchmark 1 with batch sizes going only up to 4 because its runtime grows drastically with the batch size. MOE-qEI’s runtime scales well as the batch size grows, making it feasible in applications with high parallelism. To our surprise, CL-mix achieves competitive solution quality against Benchmark 1 using only a fraction of Benchmark 1’s runtime. Therefore, despite the promise of using the closed-form formula for qEI to fully solve the inner optimization problem, the formula’s long runtime and the slow convergence of L-BFGS without derivative information make Benchmark 1 a less favorable option than CL-mix in practice.
Figure 3 also includes a method called “MOE highMC”. This method runs MOE-qEI on a GPU with the number of Monte Carlo samples for the gradient estimator set much higher than the default setting of 1000. As shown in the figure, the solution quality of “MOE highMC” matches that of MOE-qEI, which confirms that the default number of samples is sufficiently large.
5.3 Comparison on evaluation of the gradient of qEI
A recently published book chapter (Marmin et al. 2015), developed independently of and in parallel to this work, proposed a method for computing the gradient of qEI using a closed-form formula derived from (17), and then proposed using this formula inside a gradient-based optimization routine to solve (5). The formula is complex, and we therefore do not reproduce it here.
This formula faces even more severe computational challenges than (17): it requires many calls to multivariate normal CDFs whose dimension grows with the batch size. Because computing high-dimensional multivariate normal CDFs is itself challenging, this closed-form evaluation becomes extremely time-consuming.
MOE-qEI’s Monte Carlo-based approach to evaluating the gradient offers three advantages over the closed-form formula. First, the numerical experiments below suggest that its computation scales better with the batch size. Second, it is easily parallelized, with significant speedups possible through parallel computing on graphics processing units (GPUs), as implemented in the MOE library. Third, by using a small number of replications to make each iteration run quickly, within a stochastic gradient ascent algorithm that averages noisy gradient information intelligently across iterations, we may allocate effort more intelligently across iterations, spending substantial effort to estimate gradients accurately only late in the process of finding a local maximum.
We first show through numerical experiments that computation of nearly exact gradients using our gradient estimator with many replications on a GPU scales better with the batch size. We compare against closed-form gradient evaluation on a CPU as implemented in the “DiceOptim” package (Ginsbourger et al. 2015), which we call “Benchmark 2”. We computed the gradient at 200 randomly chosen points from a 2-dimensional design space to obtain a confidence interval for the average computation time. To make the gradient evaluation in MOE-qEI close to exact, we increased the number of Monte Carlo samples used in the gradient estimator enough to ensure that the variance of each component of the gradient is negligible for all batch sizes considered in our experiments. Given the large number of Monte Carlo samples, we use the GPU option in the MOE package to speed up computation. This GPU implementation is made possible by the trivial parallelism of our Monte Carlo-based gradient estimator, while a massively parallel GPU implementation of closed-form gradient evaluation would be more challenging.
Figure 5 shows that the computation time of Benchmark 2 increases quickly as the batch size grows, but increases slowly for MOE-qEI’s Monte Carlo estimator, with the Monte Carlo estimator being faster at larger batch sizes. This difference arises because gradient estimation in MOE-qEI focuses Monte Carlo effort on calculating a single high-dimensional integral, while the closed-form formula decomposes this high-dimensional integral into a collection of other high-dimensional integrals that are almost equally difficult to compute, and the size of this collection grows with the batch size.
We may substantially reduce the number of Monte Carlo samples used by MOE-qEI while still providing high-accuracy estimates: reducing the number of samples by a large factor keeps the variance of each component of the gradient small. Since the GPU implementation provides a roughly 100x to 1000x speedup, the CPU-only implementation of our stochastic gradient estimator with this reduced number of samples has runtime comparable to or better than the GPU-based results pictured in Figure 5, showing that even without the hardware advantage offered by a GPU, our stochastic gradient estimator provides high-accuracy estimates faster than Marmin et al. (2015) at larger batch sizes.
Moreover, because stochastic gradient ascent tolerates noisy gradients, we may obtain additional speed improvements by reducing the number of Monte Carlo samples even further. Using fewer Monte Carlo samples in each iteration can increase efficiency by putting effort toward estimating the gradient precisely only when close to a stationary point, which stochastic gradient ascent does automatically through its decreasing stepsize sequence. Thus, our stochastic gradient estimator is both faster when using many samples to produce essentially exact estimates and more flexible in its ability to produce inexact estimates at low computational cost.
6 Conclusions
We proposed an efficient method based on stochastic approximation for implementing a conceptual parallel Bayesian global optimization algorithm proposed by Ginsbourger et al. (2007). To accomplish this, we used infinitesimal perturbation analysis (IPA) to construct a stochastic gradient estimator and showed that this estimator is unbiased. We also provided a convergence analysis of the stochastic gradient ascent algorithm using the constructed gradient estimator. Through numerical experiments, we demonstrated that our method outperforms existing state-of-the-art approximation methods.
References
 Ababou et al. (1994) Ababou, R., Bagtzoglou, A. C., and Wood, E. F. (1994). On the condition number of covariance matrices in kriging, estimation, and simulation of random fields. Mathematical Geology, 26(1):99–133.
 Amatriain (2014) Amatriain, X. (2014). 10 lessons learned from building ML systems. https://www.youtube.com/watch?v=WdzWPuazLA8. Recording of presentation from MLconf 2014. Accessed: 2015-11-26.
 Boender and Kan (1987) Boender, C. G. E. and Kan, A. R. (1987). Bayesian stopping rules for multistart global optimization methods. Mathematical Programming, 37(1):59–80.
 Brochu et al. (2010) Brochu, E., Brochu, T., and de Freitas, N. (2010). A Bayesian interactive optimization approach to procedural animation design. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 103–112. Eurographics Association.
 Calvin et al. (2005) Calvin, J. et al. (2005). Onedimensional global optimization for observations with noise. Computers & Mathematics with Applications, 50(1):157–169.
 Calvin (1997) Calvin, J. M. (1997). Average performance of a class of adaptive algorithms for global optimization. The Annals of Applied Probability, 7(3):711–730.
 Calvin and Žilinskas (2002) Calvin, J. M. and Žilinskas, A. (2002). Onedimensional global optimization based on statistical models. In Stochastic and Global Optimization, pages 49–63. Springer.
 Chevalier and Ginsbourger (2013) Chevalier, C. and Ginsbourger, D. (2013). Fast computation of the multipoints expected improvement with applications in batch selection. In Learning and Intelligent Optimization, pages 59–69. Springer.

 Clark (2014) Clark, S. (2014). Introducing “MOE”: metric optimization engine; a new open source, machine learning service for optimal experiment design. http://engineeringblog.yelp.com/2014/07/introducingmoemetricoptimizationengineanewopensourcemachinelearningserviceforoptimalex.html. Accessed: 2015-11-26.
 Clark et al. (2014) Clark, S. C., Liu, E., Frazier, P. I., Wang, J., Oktay, D., and Vesdapunt, N. (2014). Metrics optimization engine. http://yelp.github.io/MOE/. Accessed: 2017-09-17.
 Dennis and Torczon (1991) Dennis, Jr, J. E. and Torczon, V. (1991). Direct search methods on parallel machines. SIAM Journal on Optimization, 1(4):448–474.
 Frazier et al. (2009) Frazier, P., Powell, W., and Dayanik, S. (2009). The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing, 21(4):599–613.
 Frazier and Wang (2016) Frazier, P. I. and Wang, J. (2016). Bayesian optimization for materials design. In Information Science for Materials Discovery and Design, pages 45–75. Springer.
 Frazier et al. (2011) Frazier, P. I., Xie, J., and Chick, S. E. (2011). Value of information methods for pairwise sampling with correlations. In Proceedings of the Winter Simulation Conference, pages 3979–3991. Winter Simulation Conference.
 Genz (1992) Genz, A. (1992). Numerical computation of multivariate normal probabilities. Journal of computational and graphical statistics, 1(2):141–149.
 Ginsbourger (2009) Ginsbourger, D. (2009). Two advances in gaussian processbased prediction and optimization for computer experiments. In MASCOT09 Meeting.
 Ginsbourger et al. (2007) Ginsbourger, D., Le Riche, R., and Carraro, L. (2007). A multipoints criterion for deterministic parallel global optimization based on kriging. In NCP07.
 Ginsbourger et al. (2010) Ginsbourger, D., Le Riche, R., and Carraro, L. (2010). Kriging is well-suited to parallelize optimization. Computational Intelligence in Expensive Optimization Problems, 2:131–162.
 Ginsbourger et al. (2015) Ginsbourger, D., Picheny, V., Roustant, O., et al. (2015). DiceOptim: Kriging-based optimization for computer experiments. https://cran.r-project.org/web/packages/DiceOptim/index.html. Accessed: 2016-02-13.
 Glasserman (1991) Glasserman, P. (1991). Gradient estimation via perturbation analysis. Springer Science & Business Media.
 Ho (1987) Ho, Y.C. (1987). Performance evaluation and perturbation analysis of discrete event dynamic systems. IEEE Transactions on Automatic Control, 32(7):563–572.

 Holland (1992) Holland, J. H. (1992). Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press.
 Howard (1966) Howard, R. A. (1966). Information value theory. IEEE Transactions on Systems Science and Cybernetics, 2(1):22–26.
 Huang et al. (2006) Huang, D., Allen, T. T., Notz, W. I., and Zeng, N. (2006). Global optimization of stochastic black-box systems via sequential kriging metamodels. Journal of Global Optimization, 34(3):441–466.
 Jamil and Yang (2013) Jamil, M. and Yang, X.S. (2013). A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194.
 Jones et al. (1998) Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
 Jones et al. (2001) Jones, E., Oliphant, T., Peterson, P., et al. (2001). SciPy: Open source scientific tools for Python. [Online; accessed 2014-12-01].
 Kennedy (2010) Kennedy, J. (2010). Particle swarm optimization. In Encyclopedia of Machine Learning, pages 760–766. Springer.
 Kim and Nelson (2007) Kim, S.H. and Nelson, B. L. (2007). Recent advances in ranking and selection. In Proceedings of the 39th conference on Winter simulation: 40 years! The best is yet to come, pages 162–172. IEEE Press.
 Kushner and Yin (2003) Kushner, H. and Yin, G. G. (2003). Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media.
 Kushner (1964) Kushner, H. J. (1964). A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Fluids Engineering, 86(1):97–106.
 Liu and Nocedal (1989) Liu, D. C. and Nocedal, J. (1989). On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1–3):503–528.
 Marmin et al. (2015) Marmin, S., Chevalier, C., and Ginsbourger, D. (2015). Differentiating the multipoint expected improvement for optimal batch design. In Machine Learning, Optimization, and Big Data, pages 37–48. Springer.
 McKay et al. (2000) McKay, M. D., Beckman, R. J., and Conover, W. J. (2000). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42(1):55–61.
 Mockus (1989) Mockus, J. (1989). The Bayesian approach to global optimization. Springer.
 Mockus et al. (1978) Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2:117–129.
 Polyak (1990) Polyak, B. T. (1990). New stochastic approximation type procedures. Automat. i Telemekh., 7:98–107.
 Rasmussen and Williams (2006) Rasmussen, C. and Williams, C. (2006). Gaussian Processes for Machine Learning. MIT Press.
 Ruppert (1988) Ruppert, D. (1988). Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering.
 Sacks et al. (1989) Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, 4(4):409–423.
 Smith (1995) Smith, S. P. (1995). Differentiation of the cholesky algorithm. Journal of Computational and Graphical Statistics, 4(2):134–147.
 Snoek et al. (2012) Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959.
 Vazquez and Bect (2010) Vazquez, E. and Bect, J. (2010). Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. Journal of Statistical Planning and inference, 140(11):3088–3095.
 Villemonteix et al. (2009) Villemonteix, J., Vazquez, E., and Walter, E. (2009). An informational approach to the global optimization of expensivetoevaluate functions. Journal of Global Optimization, 44(4):509–534.
 Xie et al. (2016) Xie, J., Frazier, P. I., and Chick, S. E. (2016). Bayesian optimization via simulation with pairwise sampling and correlated prior beliefs. Operations Research, 64(2):542–559.
Appendix: proofs of results in the main paper
Lemma 1
Suppose the functions $f_1, \dots, f_n$ are continuously differentiable on a compact interval. Let $g = \max_{1 \le i \le n} f_i$, and let $B$ be the set of points where $g$ fails to be differentiable. Then $B$ is countable.
Proof 2
Proof of Lemma 1. We first consider $n = 2$, and later extend to $n > 2$. Let $x_0$ be a point of non-differentiability of $g = \max(f_1, f_2)$. Then $f_1(x_0) = f_2(x_0)$ and $f_1'(x_0) \neq f_2'(x_0)$. (If the first condition were not true, and suppose $f_1(x_0) > f_2(x_0)$, then continuity of $f_1$ and $f_2$ would imply $g = f_1$ in an open neighborhood of $x_0$. If the first condition were true but not the second, then $g'(x_0) = f_1'(x_0) = f_2'(x_0)$.)
By continuity of $f_1'$ and $f_2'$, there exists $\delta > 0$ such that $f_1'(t) \neq f_2'(t)$ for all $t \in (x_0 - \delta, x_0 + \delta)$. Therefore $g$ coincides with one of $f_1, f_2$ on $(x_0, x_0 + \delta)$ and with the other on $(x_0 - \delta, x_0)$. Thus $g$ is differentiable on $(x_0 - \delta, x_0) \cup (x_0, x_0 + \delta)$.
For each $x_0 \in B$, let $m(x_0)$ be the smallest integer such that $g$ is differentiable on $(x_0 - 1/m(x_0), x_0) \cup (x_0, x_0 + 1/m(x_0))$, and let $B_m$ be the set of non-differentiable points $x_0$ such that $m(x_0) = m$. In an interval of length $1/m$, there can be at most two points in $B_m$, so each $B_m$ is finite. Hence the set of all non-differentiable points $B = \cup_m B_m$ is countable.
Now consider $n > 2$. We show that every point of non-differentiability of $g$ is also a point of non-differentiability of $\max(f_i, f_j)$ for at least one pair $i, j$. Let $x_0 \in B$, and let $A = \{i : f_i(x_0) = g(x_0)\}$ be the set of active indices at $x_0$. Using Taylor's theorem, for each $i$,
$f_i(x_0 + t) = f_i(x_0) + f_i'(x_0)\, t + o(t)$,
where $o(t)$ is a function such that $o(t)/t \to 0$ as $t \to 0$. We write the right and left derivatives of $g$ at $x_0$ as
$\max_{i \in A} f_i'(x_0)$
and
$\min_{i \in A} f_i'(x_0)$.
If the left and right derivatives are equal, $g$ is differentiable at $x_0$. If not, let $i \in \arg\max_{k \in A} f_k'(x_0)$ and $j \in \arg\min_{k \in A} f_k'(x_0)$. Then $\max(f_i, f_j)$ fails to be differentiable at $x_0$.
Thus the non-differentiable points of $g$ form a subset of the union of the non-differentiable points of $\max(f_i, f_j)$ over all pairs $i, j$; this is a finite union of countable sets, and hence countable.
Lemma 2
If and are differentiable in a neighborhood of , and there are no duplicated rows in , then for any , and exists almost surely for any .
Proof 3
Proof of Lemma 2. Observe that , where . can fail to exist only if with . Thus,
where is the th row of . Since , is a subspace of with dimension smaller than , and
Hence .
Proof 4
Proof of Theorem 1. Without loss of generality, we consider the partial derivative with respect to a single component of a single point in the batch. We use the following result, Theorem 1.2 of Glasserman (1991), restated here for convenience:
Suppose the following conditions (1), (2), (3), and (4) hold on a compact interval; then the derivative and the expectation may be interchanged on that interval.

(1) For all and , is a.s. differentiable at .
(2) Define to be the subset of on which is continuously differentiable. For all , .
(3) is a.s. continuous and piecewise differentiable throughout .
(4) is countable and , where is the random collection of points in at which fails to be differentiable.
We apply this result with , equal to the random function mapping to the random vector , , and equal to the set of at which does not exist.
Condition (1) is satisfied because and are assumed differentiable.
For condition (2), the set of points at which is continuously differentiable is . Lemma 2 implies that the probability of equality between two components of is 0, and so .
For condition (3), it is clear that the function is a.s. continuous, and Lemma 1 implies that its set of non-differentiable points is countable; therefore it is a.s. piecewise differentiable.
For condition (4), the set of non-differentiable points is countable by Lemma 1. We now show the second part of condition (4). Fix all arguments except the component of interest. Since the interval is compact and the functions involved are continuously differentiable,
Then
where and . Therefore, condition (4) is satisfied.
Thus the conditions of Theorem 1.2 from Glasserman (1991) are satisfied and