1 Introduction
Bayesian optimization (BO), as applied to so-called blackbox objectives, is a modernization of 1970s–80s statistical response surface methodology for sequential design (Mockus et al., 1978; Box and Draper, 1987; Mockus, 1989, 1994). In BO, nonparametric (Gaussian) processes (GPs; Rasmussen and Williams, 2006) provide flexible response surface fits. Sequential design decisions, so-called acquisitions, judiciously balance exploration and exploitation in search of global optima. For a review of GP surrogate modeling and optimization in the context of computer experiments, see Santner et al. (2003), Booker et al. (1999), and Bingham et al. (2014). For a machine learning perspective, see Brochu et al. (2010) and Boyle (2007). Until recently, most work in these literatures has focused on unconstrained optimization. Many interesting problems contain constraints, typically specified as equalities or inequalities:
(1)  $\min_x \{\, f(x) : g(x) \le 0,\; h(x) = 0,\; x \in \mathcal{B} \,\}$

where $f : \mathbb{R}^d \to \mathbb{R}$ is a scalar-valued objective function, and $g : \mathbb{R}^d \to \mathbb{R}^m$ and $h : \mathbb{R}^d \to \mathbb{R}^p$ are vector-valued constraint functions taken componentwise (i.e., $g_j(x) \le 0$, $j = 1, \dots, m$; $h_k(x) = 0$, $k = 1, \dots, p$). We differentiate these constraints from $x \in \mathcal{B}$, where $\mathcal{B}$ is a known (not blackbox) set, typically containing bound constraints; here we assume that $\mathcal{B}$ is a bounded hyperrectangle. (The presumption throughout is that a solution of (1) exists, so that the feasible region is nonempty; however, the algorithms we describe can provide reasonable approximate solutions when the feasible region is empty, so long as the modeling assumptions (e.g., smoothness and stationarity) on $f$, $g$, and $h$, and constraint qualifications (see, e.g., Nocedal and Wright, 2006) are not egregiously violated.)

The typical setup treats $f$, $g$, and $h$ as a “joint” blackbox, meaning that providing $x$ to a single computer code reveals $f(x)$, $g(x)$, and $h(x)$ simultaneously, often at great computational expense. A common special case treats $f$ as known (e.g., linear); however, the problem is still hard when $g$ defines a nonconvex valid region. A tacit goal in this setup is to minimize the number of blackbox runs required to solve the problem, and a common meter of progress tracks the best valid value of the objective (up to a relaxing threshold on the equality constraints $h$) at increasing computational budgets (number of blackbox evaluations, $n$).
Not many algorithms target global solutions to this general, constrained blackbox optimization problem. Statistical methods are acutely few. We know of no methods from the BO literature natively accommodating equality constraints, let alone mixed (equality and inequality) ones. Schonlau et al. (1998) describe how their expected improvement (EI) heuristic can be extended to multiple inequality constraints by multiplying EI by an estimated probability of constraint satisfaction. Here, we call this expected feasible improvement (EFI). EFI has recently been revisited by several authors (Snoek et al., 2012; Gelbart et al., 2014; Gardner et al., 2014). However, the technique has pathological behavior in otherwise idealized setups (Gramacy et al., 2016b), which is related to a so-called “decoupled” pathology (Gelbart et al., 2014). Some recent information-theoretic alternatives have shown promise in the inequality-constrained setting (Hernández-Lobato et al., 2015; Picheny, 2014); tailored approaches have shown promise in more specialized constrained optimization setups (Gramacy and Lee, 2011; Williams et al., 2010; Parr et al., 2012).

We remark that any problem with equality constraints can be “transformed” to inequality constraints only, by applying $h(x) \le 0$ and $-h(x) \le 0$ simultaneously. However, the effect of such a reformulation is rather uncertain. It puts double weight on equalities and violates certain regularity (i.e., constraint qualification; Nocedal and Wright, 2006) conditions. Numerical issues have been reported in empirical evaluations (Sasena, 2002). In our own empirical work [Section 4] we find unfavorable performance.
In this paper we show how a recent technique (Gramacy et al., 2016a) for BO under inequality constraints is naturally enhanced to handle equality constraints, and therefore mixed ones too. The method involves converting inequality constrained problems into a sequence of simpler subproblems via the augmented Lagrangian (AL; Bertsekas, 1982). AL-based solvers can, under certain regularity conditions, be shown to converge to locally optimal solutions that satisfy the constraints, so long as the subsolver converges to local solutions. By deploying modern BO on the subproblems, as opposed to the usual local solvers, the resulting meta-optimizer is able to find better, less local solutions with fewer evaluations of the expensive blackbox, compared to several classical and statistical alternatives. Here we dub that method ALBO.
To extend ALBO to equality constraints, we suggest the opposite transformation to the one described above: we convert inequality constraints into equalities by introducing slack variables. In the context of earlier work with the AL, via conventional solvers, this is rather textbook (Nocedal and Wright, 2006, Ch. 17). Handling the inequalities in this way leads naturally to solutions for mixed constraints and, more importantly, dramatically improves the original inequality-only version. In the original (non-slack) ALBO setup, the density and distribution of an important composite random predictive quantity are not known in closed form. Except in a few particular cases (Picheny et al., 2016), calculating EI and related quantities under the AL required Monte Carlo integration, which means that acquisition function evaluations are computationally expensive, noisy, or both. A reformulated slack-AL version emits a composite that has a known distribution, a so-called weighted noncentral chi-square (WNCS) distribution. We show that, in that setting, EI calculations involve a simple one-dimensional integral via ordinary quadrature. Adding slack variables increases the input dimension of the optimization subproblems, but only artificially so. The effects of that expansion can be mitigated through optimal default settings, which we provide.
The remainder of the paper is organized as follows. Section 2 outlines the components germane to the ALBO approach: the AL, Bayesian surrogate modeling, and acquisition via EI. Section 3 contains the bulk of our methodological contribution, reformulating the AL with slack variables and showing how the EI metric may be calculated in closed form, along with optimal default slack settings and open-source software. Implementation details are provided in our Appendix. Section 4 provides empirical comparisons, and Section 5 concludes.
2 A review of relevant concepts: EI and AL
2.1 Expected improvement
The canonical acquisition function in BO is expected improvement (EI; Jones et al., 1998). Consider a surrogate $f^n$, trained on $n$ pairs $(x_i, y_i)$, emitting Gaussian predictive equations with mean $\mu^n(x)$ and standard deviation $\sigma^n(x)$. Define $f^n_{\min} = \min_i y_i$, the smallest value seen so far, and let $I(x) = \max\{0, f^n_{\min} - Y(x)\}$ be the improvement at $x$, where $Y(x) \sim \mathcal{N}(\mu^n(x), \sigma^n(x)^2)$. $I(x)$ is largest when $Y(x)$ has substantial distribution below $f^n_{\min}$. The expectation of $I(x)$ over $Y(x)$ has a convenient closed form, revealing balance between exploitation (small $\mu^n(x)$ under $f^n_{\min}$) and exploration (large $\sigma^n(x)$):

(2)  $\mathrm{EI}(x) = (f^n_{\min} - \mu^n(x))\, \Phi\!\left(\frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)}\right) + \sigma^n(x)\, \phi\!\left(\frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)}\right)$

where $\Phi$ ($\phi$) is the standard normal cdf (pdf). Accurate, approximately Gaussian predictive equations are provided by many statistical models (GPs; Rasmussen and Williams, 2006).
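As a concrete illustration of (2), the closed form is only a few lines of code. The sketch below is ours (not taken from the packages discussed later); the function name and vectorization choices are assumptions:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI (cf. Eq. (2)) under a Gaussian predictive distribution.

    mu, sigma: predictive mean and standard deviation at a candidate x;
    f_min: smallest objective value observed so far.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    z = (f_min - mu) / sigma
    # first term rewards low predicted mean (exploitation),
    # second rewards high predictive uncertainty (exploration)
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

Exploitation and exploration enter through the two summands: the first grows as the predicted mean drops below $f^n_{\min}$, the second as predictive uncertainty grows.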
When the predictive equations are not Gaussian, Monte Carlo schemes—sampling $Y(x)$ variates from the predictive distribution and averaging the resulting $I(x)$ values—offer a suitable, if computationally intensive, alternative. Yet most of the theory applies only to the Gaussian case. Under certain regularity conditions (e.g., Bull, 2011), one can show that sequential designs derived by repeating the process, generating $(n+1)$-sized designs from $n$-sized ones, contain a point whose objective value is arbitrarily close to the true minimum objective value, that is, the algorithm converges.
2.2 Augmented Lagrangian framework
Although several authors have suggested extensions to EI for constraints, the BO literature has primarily focused on unconstrained problems. The range of constrained BO options was recently extended by borrowing an apparatus from the mathematical optimization literature, the augmented Lagrangian, allowing unconstrained methods to be adapted to constrained problems. The AL, as a device for solving problems with inequality constraints (no $h$ in Eq. (1)), may be defined as

(3)  $L_A(x; \lambda, \rho) = f(x) + \lambda^\top g(x) + \frac{1}{2\rho} \sum_{j=1}^{m} \max\{0, g_j(x)\}^2$

where $\rho > 0$ is a penalty parameter on constraint violation and $\lambda \in \mathbb{R}^m_+$ serves as a Lagrange multiplier. AL methods are iterative, involving a particular sequence of $(\lambda^k, \rho^k)$ values. Given the current values $\rho^{k-1}$ and $\lambda^{k-1}$, one approximately solves the subproblem

(4)  $\min_x \{\, L_A(x; \lambda^{k-1}, \rho^{k-1}) : x \in \mathcal{B} \,\}$
via a conventional (boundconstrained) solver. The parameters are updated depending on the nature of the solution found, and the process repeats. The particulars in our setup are provided in Alg. 1; for more details see (Nocedal and Wright, 2006, Ch. 17). Local convergence is guaranteed under relatively mild conditions involving the choice of subroutine solving (4). Loosely, all that is required is that the solver “makes progress” on the subproblem. In contexts where termination depends more upon computational budget than on a measure of convergence, as in many BO problems, that added flexibility is welcome.
However, the AL does not typically enjoy global scope. The local minima found by the method are sensitive to initialization—to starting choices for $\lambda^0$ and $\rho^0$; local searches in iteration $k$ are usually started from $x^{(k-1)}$. However, this dependence is broken when statistical surrogates drive the search for solutions to the subproblems. Independently fit GP surrogates, $Y_f(x)$ for the objective and $Y_{g_j}(x)$ for the constraints, yield predictive distributions for $f(x)$ and $g_j(x)$. Dropping the $n$ superscripts, the AL composite random variable

(5)  $Y(x) = Y_f(x) + \lambda^\top Y_g(x) + \frac{1}{2\rho} \sum_{j=1}^{m} \max\{0, Y_{g_j}(x)\}^2$

can serve as a surrogate for (3); however, it is difficult to deduce the full distribution of this variable from the components of $Y_f$ and $Y_g$, even when those are independently Gaussian. While its mean is available in closed form, EI requires Monte Carlo.
3 A novel formulation involving slack variables
An equivalent formulation of (1) involves introducing slack variables, $s_j \ge 0$, for $j = 1, \dots, m$ (i.e., one for each inequality constraint $g_j$), and converting the mixed constraint problem (1) into one with only equality constraints (plus bound constraints for $s$):

(6)  $\min_{x, s} \{\, f(x) : g(x) + s = 0,\; h(x) = 0,\; x \in \mathcal{B},\; s \ge 0 \,\}$

The input dimension of the problem is increased from $d$ to $d + m$ with the introduction of the (nonnegative) slack “inputs” $s$.
Reducing a mixed constraint problem to one involving only equality and bound constraints is valuable insofar as one has good solvers for those problems. Indeed, the AL method is an attractive option here, but some adaptation is required. Suppose, for the moment, that the original problem (1) has no equality constraints (i.e., $p = 0$). In this case, a slack variable-based AL method is readily available—as an alternative to the one described in Section 2.2. Although we frame it as an “alternative”, some in the mathematical optimization community would describe this as the standard version (see, e.g., Nocedal and Wright, 2006, Ch. 17). The AL for this setup is given as follows:

(7)  $L_A(x, s; \lambda, \rho) = f(x) + \lambda^\top (g(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m} (g_j(x) + s_j)^2$
This formulation is more convenient than (3) because the “max” is missing, but the extra slack variables mean solving a higher-dimensional, $(d+m)$-variable subproblem compared to (4). In Section 3.3 we address this issue, and further detail adjustments to Alg. 1 for the slack variable case.
The AL can be expanded to handle equality (and thereby mixed) constraints as follows:

(8)  $L_A(x, s; \lambda, \rho) = f(x) + \lambda_g^\top (g(x) + s) + \lambda_h^\top h(x) + \frac{1}{2\rho} \left[ \sum_{j=1}^{m} (g_j(x) + s_j)^2 + \sum_{k=1}^{p} h_k(x)^2 \right]$

Defining $c(x) = (g(x), h(x))$, $\lambda = (\lambda_g, \lambda_h)$, and enlarging the dimension of $s$ with the understanding that $s_j = 0$ for $j = m+1, \dots, m+p$, leads to a streamlined AL for mixed constraints

(9)  $L_A(x, s; \lambda, \rho) = f(x) + \lambda^\top (c(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m+p} (c_j(x) + s_j)^2$

with $s_j = 0$ for $j > m$. A non-slack AL formulation (3) may be written analogously, with $\max\{0, g_j(x)\}$ terms for the inequality constraints and $h_k(x)$ terms for the equalities. Eq. (9), by contrast, is easier to work with because it is a smooth quadratic in the objective ($Y_f$) and constraint ($Y_{c_j}$) surrogates. In what follows, we show that (9) facilitates calculation of important quantities like EI, in the GP-based BO framework, via a library routine. So slack variables not only facilitate mixed constraints in a unified framework, they also lead to a more efficient handling of the original inequality (only) constrained problem.
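To fix notation, evaluating the streamlined AL (9) for given multipliers and penalty might look as follows. The stacking convention (inequality constraints first, then equalities) and all names are our own assumptions, not the packages' API:

```python
import numpy as np

def slack_al(x, s, f, c, lam, rho, m):
    """Illustrative evaluation of the streamlined AL (9).

    c(x) stacks m inequality and p equality constraint values;
    slacks beyond index m (the equalities) are forced to zero."""
    lam = np.asarray(lam, float)
    s = np.asarray(s, float).copy()
    s[m:] = 0.0                          # equality constraints carry no slack
    r = np.asarray(c(x), float) + s      # residuals c_j(x) + s_j
    return f(x) + lam @ r + np.sum(r ** 2) / (2.0 * rho)
```

Note the smooth quadratic dependence on the residuals $c_j(x) + s_j$; there is no "max" as in (3).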
3.1 Distribution of the slackAL composite
If $Y_f(x)$ and $Y_{c_j}(x)$ represent random predictive variables from surrogates fitted to realized objective and constraint evaluations, then the analogous slack-AL random variable is

(10)  $Y(x, s) = Y_f(x) + \lambda^\top (Y_c(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m+p} (Y_{c_j}(x) + s_j)^2$

As in the original AL formulation, the mean of this RV has a simple closed form in terms of the means and variances of the surrogates, regardless of the form of the predictive distribution. In the conditionally Gaussian case, we can derive the full distribution of the slack-AL variate (10) in closed form. Toward that aim, we redevelop the composite by completing the square in each constraint term, decomposing $Y(x,s)$ into a sum of three quantities:

(11)  $Y(x, s) = Y_f(x) - \frac{\rho}{2} \|\lambda\|^2 + \frac{1}{2\rho} \sum_{j=1}^{m+p} \left( Y_{c_j}(x) + s_j + \rho \lambda_j \right)^2$
Using $Y_{c_j}(x) \sim \mathcal{N}(\mu_{c_j}(x), \sigma^2_{c_j}(x))$, i.e., leveraging Gaussianity, the final sum in (11) can be written as

(12)  $\sum_{j=1}^{m+p} \frac{\sigma^2_{c_j}(x)}{2\rho}\, X_j, \quad X_j \sim \chi^2(1, \delta_j(x, s))$

The line above is the expression of a weighted sum of noncentral chi-square (WNCS) variates. Each of the $X_j$ variates involves a unit degrees-of-freedom (dof) parameter, and a noncentrality parameter $\delta_j(x, s) = (\mu_{c_j}(x) + s_j + \rho\lambda_j)^2 / \sigma^2_{c_j}(x)$. A number of efficient methods exist for evaluating the density, distribution, and quantile functions of WNCS random variables (e.g., Davies, 1980; Duchesne and Lafaye de Micheaux, 2010). The R packages CompQuadForm (Lafaye de Micheaux, 2013) and sadists (Pav, 2015) provide library implementations. Details and code are provided in Appendix C.

Some constrained optimization problems involve an a priori known objective $f$. In that case, referring back to (11), we are done: the distribution of $Y(x, s)$ is WNCS (as in (12)), shifted by a known quantity $f(x) - \frac{\rho}{2}\|\lambda\|^2$, determined by particular choices of inputs $x$ and slacks $s$. When $Y_f(x)$ is conditionally Gaussian, we have that $Y(x, s)$ is the weighted sum of a Gaussian and WNCS variates, a problem that is again well-studied. The same methods cited above are easily augmented to handle that case—see Appendix C.
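The cited R routines evaluate WNCS distribution functions exactly (e.g., via Davies' algorithm). For illustration only, a simple Monte Carlo stand-in conveys the structure, using the representation $X_j = (Z_j + \sqrt{\delta_j})^2$ with $Z_j$ standard normal; this sketch and its names are ours, and an exact method would replace it in practice:

```python
import numpy as np

def wncs_cdf(y, weights, noncent, n=200_000, seed=0):
    """Monte Carlo estimate of P(W <= y) for W = sum_j w_j X_j,
    X_j ~ chi-square with 1 dof and noncentrality delta_j.

    Exact methods (Davies/Imhof, as in R's CompQuadForm) are
    preferable; this sketch only illustrates the distribution."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, float)
    a = np.sqrt(np.asarray(noncent, float))   # X_j = (Z_j + sqrt(delta_j))^2
    z = rng.standard_normal((n, w.size)) + a
    samples = (z ** 2) @ w                    # weighted sum of NCS variates
    return float(np.mean(samples <= y))
```

With a single unit weight and zero noncentrality, this recovers the ordinary one-dof chi-square distribution, a quick sanity check on the representation.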
3.2 SlackAL expected improvement
Evaluating EI at candidate locations under the AL composite involves working with $I(x, s) = \max\{0, y_{\min} - Y(x, s)\}$, given $y_{\min}$, the current minimum of the AL over all runs. When $f$ is known, let $v(x, s) = f(x) - \frac{\rho}{2}\|\lambda\|^2$ absorb all of the nonrandom quantities involved in the EI calculation, so that $Y(x, s) = v(x, s) + W(x, s)$ for the WNCS part $W$. Then, with $F_W$ denoting the distribution function of $W(x, s)$,

(13)  $\mathrm{EI}(x, s) = \int_0^{y_{\min} - v(x, s)} F_W(w)\, dw$

if $y_{\min} > v(x, s)$, and zero otherwise. That is, the EI boils down to integrating the distribution function of $W$ between 0 (since $W$ is positive) and $y_{\min} - v(x, s)$. This is a one-dimensional definite integral that is easy to approximate via quadrature; details are in Appendix C. Since $W$ is quadratic in the $Y_{c_j}$ values, it is often the case, especially for smaller $\rho$ values in later AL iterations, that the EI is zero over most of the input space, simplifying the numerical integration. However, this has deleterious impacts on search over the EI surface. We discuss a useful adaptation for that case in Appendix B.2.
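In code, the known-objective EI of (13) reduces to a one-dimensional quadrature of a distribution function. The sketch below (names ours) accepts any CDF callable for the positive part $W$ of the composite:

```python
from scipy.integrate import quad

def slack_al_ei(cdf_w, y_min, v):
    """EI for a composite Y = v + W with known shift v and W >= 0
    (cf. Eq. (13)): E[max(0, y_min - Y)] = integral of F_W over
    [0, y_min - v], and zero when y_min <= v.

    cdf_w: distribution function of W (in practice a WNCS evaluator)."""
    a = y_min - v
    if a <= 0:
        return 0.0
    val, _ = quad(cdf_w, 0.0, a)   # ordinary 1-d quadrature
    return val
```

In practice `cdf_w` would wrap a WNCS routine such as those in CompQuadForm or sadists; any smooth CDF (e.g., a uniform one, for testing) exercises the same code path.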
When $f$ is unknown and $Y_f(x)$ is conditionally normal, let $Y(x, s) = Y_f(x) - \frac{\rho}{2}\|\lambda\|^2 + W(x, s)$. Then,

$\mathrm{EI}(x, s) = \int_{-\infty}^{y_{\min}} F_Y(y)\, dy$,

where $F_Y$ denotes the distribution function of $Y(x, s)$. Here the lower bound of the definite integral cannot be zero, since $Y_f(x)$ may be negative, and thus $Y(x, s)$ may have nonzero distribution for negative values. This presents some challenges when it comes to numerical integration via quadrature, although many library functions allow indefinite bounds. We obtain better performance by supplying a conservative finite lower bound, for example three standard deviations in $Y_f(x)$, in units of the penalty ($\rho$), below zero. Detailed example implementations are provided in Appendix C.
3.3 AL updates, optimal slacks, and other implementation notes
The new slack-AL method is completed by describing when the subproblem (9) is deemed to be “solved” (step 2 in Alg. 1) and how $\lambda$ and $\rho$ are updated (steps 3–4). We terminate the BO search subsolver after a single iteration, as this matches the spirit of EI-based search, whose choice of next location can be shown to be optimal, in a certain sense, if it is the final point being selected (Bull, 2011). It also meshes well with an updating scheme analogous to that in steps 3–4: updating $\rho$ only when no actual improvement (in terms of constraint violation) is realized by that choice. For those updates, the analog for the mixed constraint setup is
step 2: Let $(x^{(k)}, s^{(k)})$ approx. solve $\min_{x, s} \{\, L_A(x, s; \lambda^{k-1}, \rho^{k-1}) : x \in \mathcal{B},\; s \ge 0 \,\}$
step 3: $\lambda_j^{k} = \lambda_j^{k-1} + \frac{1}{\rho^{k-1}} \left( c_j(x^{(k)}) + s_j^{(k)} \right)$, for $j = 1, \dots, m+p$
step 4: If $g(x^{(k)}) \le 0$ and $|h_k(x^{(k)})| \le \epsilon$ for all $k$, set $\rho^k = \rho^{k-1}$; else $\rho^k = \rho^{k-1}/2$
Above, step 3 is the same as in Alg. 1 except without the “max”, and with slacks augmenting the constraint values. The “if” statement in step 4 checks for validity at $x^{(k)}$, deploying a threshold $\epsilon$ on the equality constraints; further discussion of the threshold is deferred to Section 4, where we discuss progress metrics under mixed constraints. If validity holds at $x^{(k)}$, the current AL iteration is deemed to have “made progress” and the penalty remains unchanged; otherwise it is doubled. An alternate formulation may instead check the combined residuals $c(x^{(k)}) + s^{(k)}$. We find that the version in step 4, above, is cleaner because it limits sensitivity to the choice of threshold $\epsilon$. In Appendix B.1 we recommend initial values which are analogous to the original, non-slack AL settings.
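The updates in steps 3–4 can be sketched as follows. This is an illustrative transcription with our own names and an assumed default threshold; note that "doubling the penalty" means halving $\rho$ in the $1/(2\rho)$ parametrization:

```python
import numpy as np

def slack_al_update(c_x, s, lam, rho, m, eps=1e-4):
    """Sketch of steps 3-4 for the mixed-constraint slack AL.

    c_x: constraint values c(x^(k)), inequalities first (m of them);
    s: slacks at the subproblem solution; eps: equality threshold."""
    c = np.asarray(c_x, float)
    r = c + np.asarray(s, float)
    lam_new = np.asarray(lam, float) + r / rho             # step 3: no "max"
    valid = bool(np.all(c[:m] <= 0) and np.all(np.abs(c[m:]) <= eps))
    rho_new = rho if valid else rho / 2.0                  # step 4: double penalty
    return lam_new, rho_new
```

A valid iterate leaves $\rho$ untouched; an invalid one halves it, doubling the weight on constraint violation in the next subproblem.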
Optimal choice of slacks: The biggest difference between the original AL (3) and the slack-AL (9) is that the latter requires searching over both $x$ and $s$, whereas the former involves only $x$ values. In what follows we show that there are automatic choices for the $s$ values as a function of the corresponding $x$'s, keeping the search space $d$-dimensional, rather than $(d+m)$-dimensional.
For an observed $c_j(x)$ value, associated slack variables minimizing the AL (9) can be obtained analytically. Using the form of (11), observe that minimizing over $s_j$ is equivalent to minimizing $(c_j(x) + s_j + \rho\lambda_j)^2$. For fixed $x$, this is strictly convex in $s_j$. Therefore, its unconstrained minimum can only be its stationary point, which satisfies $c_j(x) + s_j + \rho\lambda_j = 0$, for $j = 1, \dots, m$. Accounting for the nonnegativity constraint, we obtain the following optimal slack as a function of $x$:

(14)  $s_j^*(x) = \max\{0,\, -\rho\lambda_j - c_j(x)\}, \quad j = 1, \dots, m$

Above we write $s_j^*$ as a function of $x$ to convey that $x$ remains a “free” quantity in (14). Recall that slacks on equality constraints are zero: $s_j = 0$ for $j = m+1, \dots, m+p$, for all $x$.
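Eq. (14) is a one-liner in code; the sketch below (names ours) also zeroes the equality-constraint slacks:

```python
import numpy as np

def optimal_slacks(c_x, lam, rho, m):
    """Optimal slacks per Eq. (14) (sketch): s_j* = max(0, -rho*lam_j - c_j)
    for the m inequality constraints; equality slacks are fixed at zero.
    At unobserved x, a surrogate prediction can stand in for c(x)."""
    s = np.maximum(0.0, -rho * np.asarray(lam, float) - np.asarray(c_x, float))
    s[m:] = 0.0   # slacks on equality constraints are identically zero
    return s
```

A quick check on a single inequality constraint confirms the stationary point: with $\lambda_j = 1$, $\rho = 1$, and $c_j(x) = -2$, the per-constraint AL term $\lambda_j(c_j + s_j) + (c_j + s_j)^2/(2\rho)$ is minimized at $s_j = 1$, exactly what (14) returns.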
In the blackbox setting, $c(x)$ is only directly accessible at the data locations $x_i$. At other $x$ values, however, the surrogates provide a useful approximation. When $Y_{c_j}(x)$ is (approximately) Gaussian, it is straightforward to show that the optimal settings of the slack variables, solving $\min_s \mathbb{E}[L_A(x, s; \lambda, \rho)]$, are $s_j^*(x) = \max\{0, -\rho\lambda_j - \mu_{c_j}(x)\}$, i.e., the same as (14) with the prediction $\mu_{c_j}(x)$ standing in for the unknown $c_j(x)$ value. Again, slacks on the equality constraints are set to zero.
Other criteria can be used to choose slack variables. Instead of minimizing the mean of the composite, one could maximize the EI. In Appendix A we explain how this is of dubious practical value. Compared to the settings described above, searching over EI is both more computationally intensive and provides near identical results in practice.
Implementation notes: Code supporting all methods in this manuscript is provided in two open-source R packages: laGP (Gramacy, 2015) and DiceOptim (Ginsbourger et al., 2015), both on CRAN (R Development Core Team, 2004). Implementation details vary somewhat across those packages, due primarily to particulars of their surrogate modeling capability and how they search the EI surface. For example, laGP can accommodate a smaller initial design size because it learns fewer parameters (i.e., has fewer degrees of freedom). DiceOptim uses a multistart search procedure for EI, whereas laGP deploys a random candidate grid, which may optionally be “finished” with an L-BFGS-B search. Nevertheless, their qualitative behavior exhibits strong similarity. Both packages also implement the original AL scheme (i.e., without slack variables), updated via (8) for mixed constraints. Further details are provided in Appendix B.2.
4 Empirical comparison
Here we describe three test problems, each mixing challenging elements from traditional unconstrained blackbox optimization benchmarks, but in a constrained optimization format. We run our optimizers on these problems 100 times under random initializations. In the case of our GP surrogate comparators, this initialization involves choosing random space-filling designs. Our primary means of comparison is an averaged (over the 100 runs) measure of progress, defined by the best valid value of the objective for increasing budgets (number of blackbox evaluations, $n$).

In the presence of equality constraints it is necessary to relax this definition somewhat, as the valid set may be of measure zero. In such cases we choose a tolerance $\epsilon$ and declare a solution to be “valid” when the inequality constraints are all satisfied, and when $|h_k(x)| \le \epsilon$ for all $k$. In our figures we use a fixed choice of $\epsilon$; however, the results are similar under stronger thresholds, with higher variability over initializations. As finding a valid solution is, in itself, sometimes a difficult task, we additionally report the proportion of runs that find valid and optimal solutions as a function of budget, $n$, for problems with equality (and mixed) constraints.
4.1 An inequality constrained problem
We first revisit the “toy” problem from Gramacy et al. (2016a), having a 2d input space limited to the unit cube, a (known) linear objective, and sinusoidal and quadratic inequality constraints (henceforth the LSQ problem; see Appendix D for details). Figure 1 shows progress over repeated solves with a maximum budget of 40 blackbox evaluations.
The left-hand plot in Figure 1 tracks the average best valid value of the objective found over the iterations, using the progress metric described above. Random space-filling initial designs were used, with the initial design size indicated by the vertical-dashed gray line. The solid gray lines are extracted from a similar plot in Gramacy et al. (2016a), containing both AL-based comparators and several from the derivative-free optimization and BO literatures. The details are omitted here. Our new ALBO comparators are shown in thicker colored lines; the solid black line is the original ALBO EI comparator, under a revised (compared to Gramacy et al., 2016a) initialization and updating scheme. The two red lines are variations on the slack-AL algorithm under EI: with (dashed) and without (solid) L-BFGS-B optimization of the EI acquisition at each iteration. Finally, the blue line is PESC (Hernández-Lobato et al., 2015), using the Python library available at https://github.com/HIPS/Spearmint/tree/PESC. The take-home message from the plot is that all four new methods outperform those considered by the original ALBO paper (Gramacy et al., 2016a).

Focusing on the new comparators only, observe that their progress is nearly statistically equivalent during the first 20 iterations. At iteration 10, for example, the ordering from best to worst is PESC (0.861), Slack-AL (0.902), Orig-AL (0.942), and Slack-AL+optim (0.957). Combining the 100 repetitions of each run, only the best (PESC) versus worst (Slack-AL+optim) comparison is statistically significant in a one-sided test, with a p-value of 0.0011. Apparently, aggressively optimizing the EI in the slack formulation in early iterations hurts more than it helps. The situation is more nuanced, however, for later iterations. At iteration 30, for example, the ordering is Slack-AL+optim (0.6002), PESC (0.6004), Slack-AL (0.6010), and Orig-AL (0.689). Although the numbers are more tightly grouped, only the first two (Slack-AL+optim and PESC), both leveraging L-BFGS-B subroutines, are statistically equivalent. In particular, both Slack-AL variants outperform the original AL with small p-values. This discrepancy is more easily visualized in the left of the figure with a so-called “utility-gap” log plot (Hernández-Lobato et al., 2015), which tracks the log difference between the theoretical best valid value and those found by search.
4.2 Mixed inequality and equality constrained problems
Next consider a problem in four input dimensions with a (known) linear objective and two constraints, one inequality and one equality. The inequality constraint is the so-called “Ackley” function in $d = 4$ input dimensions. The equality constraint follows the so-called “Hartman” 4-dimensional function. Appendix D provides a full mathematical specification.
Figure 2 shows two views into progress on this problem. Since it involves mixed constraints, comparators from the BO literature are scarce. Our EFI implementation deploys the heuristic mentioned in the introduction. As representatives from the nonlinear optimization literature we include nlopt (Johnson, 2014) and three adapted NOMAD (Le Digabel, 2011) comparators, which are detailed in Appendix B.3. In the left-hand plot we can see that our new ALBO comparators are the clear winners, with an L-BFGS-B-optimized EI search under the slack-variable AL implementation performing exceptionally well. The nlopt and NOMAD comparators are particularly poor. We allowed those to run up to 7000 and 1000 iterations, respectively, and in the plot we scaled the $x$-axis to put them on the same scale as the others. The right-hand plot provides a view into the distribution of two key aspects of performance over the MC repetitions. Observe that “Slack AL + optim” finds valid values quickly, and optimal values not much later. Our adapted EFI is particularly slow at converging to optimal (valid) solutions.
Our final problem involves two input dimensions, an unknown objective function (i.e., one that must be modeled with a GP), one inequality constraint, and two equality constraints. The objective is a centered and rescaled version of the “Goldstein–Price” function. The inequality constraint is the sinusoidal constraint from the LSQ problem [Section 4.1]. The first equality constraint is a centered “Branin” function; the second equality constraint is taken from Parr et al. (2012) (henceforth the GBSP problem). Appendix D contains a full mathematical specification.
Figure 3 shows our results on this problem. Observe (left panel) that the original ALBO comparator makes rapid progress at first, but dramatically slows in later iterations. The other ALBO comparators, including EFI, converge much more reliably, with the “Slack AL + optim” comparator leading in both stages (early progress and ultimate convergence). Again, nlopt and NOMAD are poor; note, however, that their relative comparison is reversed. Again, we scaled the $x$-axis to view these on a scale similar to the others. The right panel shows the proportion of valid and optimal solutions for “Slack AL + optim” and EFI. Notice that the AL method finds an optimal solution almost as quickly as it finds a valid one—both substantially faster than EFI.
5 Discussion
The augmented Lagrangian (AL) is an established apparatus from the mathematical optimization literature, enabling unconstrained or bound-constrained optimizers to be deployed in settings with constraints. Recent work involving Bayesian optimization within the AL framework (ALBO) has shown great promise, especially toward obtaining global solutions under constraints. However, those methods were deficient in at least two respects. One was that only inequality constraints could be supported. Another was that evaluating the acquisition function, combining predictive mean and variance information via EI, required Monte Carlo approximation. In this paper we showed that both drawbacks could be addressed via a slack-variable reformulation of the AL. Our method supports inequality, equality, and mixed constraints, and to our knowledge this updated ALBO procedure is unique in the BO literature in its applicability to the general mixed constraints problem (1). We showed that the slack ALBO method outperforms modern alternatives in several challenging constrained optimization problems.

We conclude by remarking on a potential drawback: we have found in some cases that our slack variable approach is a double-edged sword, especially when the unknown slacks are chosen in a default manner, i.e., as in Section 3.3. Those choices utilize the surrogate(s) of the constraints, particularly the posterior mean, which means that those surrogates are relied upon more heavily than in the original (non-slack) AL context. Consequently, if the problem at hand matches the surrogate modeling (i.e., Gaussian process) assumptions well, then by a leveraging argument we can expect the slack method to do better than the original. However, if the assumptions are a mismatch, then one can expect it to do worse. The motivating “Lockwood” problem from Gramacy et al. (2016a) had a constraint surface that was kinked at its boundary, which obviously violates the smoothness assumptions of typical GP surrogates. We were therefore not surprised to find poorer performance from our slack method compared to the original.
Acknowledgments
We are grateful to Mickael Binois for comments on early drafts. RBG is grateful for partial support from National Science Foundation grant DMS-1521702. The work of SMW is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Contract No. DE-AC02-06CH11357. The work of SLD is supported by Natural Sciences and Engineering Research Council of Canada grant 418250.
Appendix A Optimal and default slack variables
This continues the discussion from Section 3.3, which suggests choosing slack variables by minimizing the slack-AL. One alternative is to maximize the EI. Recall that EI mixes data quantities (at the $x_i$) and predictive quantities at candidate locations $(x, s)$ via the improvement $I(x, s) = \max\{0, y_{\min} - Y(x, s)\}$, the former coming from $y_{\min}$, the smallest AL value over the data locations. If one seeks improvement over the lowest possible $y_{\min}$, then the optimal slack values to pair with the $x_i$'s are given in Eq. (14): use $s^*(x_i)$. Choosing slacks for candidate locations $x$ is more challenging: maximizing EI over $s$ for a fixed $x$ admits no closed-form solution that we are aware of. Numerical methods are an option; however, we have not discovered any advantage over the simpler minimization-based settings described above. Empirically, we find that the two criteria either yield identical values, or ones that are nearly so.
As an illustration, we consider a 2d problem with a linear objective and two inequality constraints (see the LSQ problem in Section D, Eq. (15), below). We focus here on a “static” situation, where a 9-point grid defines the set of observations. The objective is treated as a known function and two GP models are fitted to the constraints. The AL parameters $\lambda$ and $\rho$ are held at fixed values.

The top-left panel of Figure 4 shows the EI surface (computed on a 2d grid), when $s^*(x)$ is used for each grid element $x$. The shape of the contours is typical of EI, with a plateau of zero values and two local maxima. The remaining panels show the value of the EI metric for each of the three $x$ values shown in the top-left panel. These are computed on a 2d grid in slack space. Despite its complex formulation (13), on this example EI is apparently unimodal with respect to the two slacks. Moreover, the optimal slack values (computed from the grid used to plot the image) coincide (up to numerical error) with the associated $s^*(x)$ values, plotted as diamonds matching the color in the top-left plot.
Appendix B Implementation notes
b.1 Initializing the AL parameters in the slack setup
To initialize $\lambda^0$ and $\rho^0$, we update the settings suggested in Gramacy et al. (2016b)—balancing the scales of objective and constraint in the AL on the initial design—to our new mixed constraint context. Let $v$ be a logical vector of length $n_0$ recording the validity of each initial design point in a zero-slack setting. That is, let $v_i = 1$ if $g_j(x_i) \le 0$, for $j = 1, \dots, m$, and $|h_k(x_i)| \le \epsilon$, for $k = 1, \dots, p$; otherwise let $v_i = 0$. Then, take $\rho^0$ to be a ratio whose numerator summarizes the constraint violation over the invalid points and whose denominator summarizes the objective over the valid points. The denominator is not defined if the initial design has no valid values (i.e., if there is no $i$ with $v_i = 1$). When that happens we use the median of the observed objective values in the denominator instead. On the other hand, if the initial design has no invalid values, and hence the numerator is not defined, we fall back on a fixed default for $\rho^0$.
b.2 Open source software
Code supporting all methods described here is provided in two open-source R packages: laGP (Gramacy, 2015) and DiceOptim (Ginsbourger et al., 2015), both on CRAN. Implementation details vary somewhat across those packages, due primarily to particulars of their surrogate modeling capability and how they search the EI surface. For example, laGP can accommodate a smaller initial sample size because it learns fewer parameters (i.e., has fewer degrees of freedom). DiceOptim uses a multistart search procedure for EI, whereas laGP deploys a random candidate grid, which may optionally be “finished” with an L-BFGS-B search. Nevertheless, their qualitative behavior exhibits strong similarity. Both utilize library subroutines for dealing with weighted noncentral chi-square deviates, as described in Appendix C below. Some notable similarities and differences are described below. The empirical work we present in Section 4 provides results from laGP only. Those obtained from DiceOptim are similar; we encourage readers to consult the documentation of both packages for further illustration. (At the time of writing, updated versions of laGP and DiceOptim are not “pushed” to CRAN; alpha testing versions are available from the authors upon request. Subsequent CRAN updates will include all slack-AL enhancements made for this paper.)
Both packages initialize and as described in Section B.1; both update those parameters according to the modifications of steps 2–4 in Alg. 1; and both define in those updates to be the value of corresponding to the best using , for data values indexed by . By default, both utilize to evaluate EI at candidate locations, via the surrogate mean values. The laGP implementation optionally allows randomization of , where is chosen via (14) with a draw in place of . Alternatively, DiceOptim can optimize jointly over via its multi-start search scheme. These two options are variations that allow one to test the “optimality” of under EI calculations. We report here that both have been found to be uniformly inferior to using , which is in any case a much simpler implementation choice.
Choosing , as opposed to a random draw as above, renders the entire EI calculation for a candidate deterministic. In turn, that means that local (numerical) derivative-based optimization techniques, such as L-BFGS-B (Liu and Nocedal, 1989), can be used to optimize EI over . The laGP package has an optional “optim” setting that initializes local L-BFGS-B searches from the best random candidate point. DiceOptim uses the genoud algorithm (GENetic Optimization Using Derivatives; Mebane Jr and Sekhon, 2011) to handle potentially multimodal EI surfaces (Figure 4). Neither of these options is available for the original AL (without slacks), since evaluating its EI required Monte Carlo integration, effectively limiting the resolution of search. In later iterations, the original AL would fail to “drill” down into local troughs of the EI surface. In our empirical work (Section 4), we show that being able to optimize over the EI surface is crucial to obtaining good performance at later iterations. The predictive entropy search adaptation for constrained optimization (PESC; Hernández-Lobato et al., 2015) is similarly deterministic, and also utilizes L-BFGS-B to optimize over selections. We conjecture that the correspondingly suboptimal selections made by the original AL, together with the poor initialization of the updates pointed out by Picheny et al. (2016), explain the AL-versus-PESC outcome reported by Hernández-Lobato et al. (2015). Our revised experiments show a reversed (if extremely close) comparison.
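The candidate-plus-local-search recipe can be mimicked in a few lines of base R; the surface `ei` below is an illustrative stand-in for the (deterministic, slack-based) EI criterion, not laGP's actual implementation.

```r
## stand-in for a smooth deterministic EI surface in 2d (to be maximized)
ei <- function(x) exp(-10 * sum((x - c(0.3, 0.7))^2))

## evaluate on a random candidate grid, then "finish" with a local
## L-BFGS-B search started from the best candidate, respecting bounds
set.seed(1)
cands <- matrix(runif(200), ncol=2)          ## 100 random candidates
scores <- apply(cands, 1, ei)
start <- cands[which.max(scores), ]
out <- optim(start, function(x) -ei(x), method="L-BFGS-B",
             lower=c(0, 0), upper=c(1, 1))   ## minimize negative EI
xbest <- out$par
```

The local search "drills" into the trough that a fixed candidate grid alone would only coarsely resolve, which is the behavior described above.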
Finally, we find that the following simple yet very efficient numerical trick can be helpful when the EI surface is mostly zero, as often happens at later stages of optimization; this behavior can be attributed to the quadratic nature of the AL, which guarantees zero improvement in certain situations (Gramacy et al., 2016a). In the original laGP implementation, this drawback was addressed by “switching” to mean-based search (rather than EI). In the slack-variable formulation, observe that zero EI may be realized when , in which case the value itself (as it is less than zero) may stand in as a sensible replacement, capturing both the mean and the (deterministic aspects of the) penalty information. For example, when optimizing EI in DiceOptim we find that this replacement helps the solver escape large plateaus of (otherwise) zero values.
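The replacement rule amounts to a small wrapper around the EI evaluation. In the sketch below, `ei_val` and `al_val` are hypothetical stand-ins for the computed EI and the (deterministic part of the) predicted AL value at a candidate:

```r
## hybrid acquisition: use EI when it is positive; otherwise, when the
## predicted AL value is negative, let it stand in for the (zero) EI so
## the solver can escape plateaus where EI vanishes identically
crit <- function(ei_val, al_val) {
  if (ei_val > 0) return(ei_val)
  if (al_val < 0) return(al_val)  ## negative AL value replaces zero EI
  0
}
```

A solver maximizing `crit` then retains a meaningful gradient signal on regions where EI alone is flat at zero.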
B.3 Adapted NOMAD comparators
The three methods NOMADP1, NOMADALP1, and NOMADALPBP1 are based on the NOMAD software (Le Digabel, 2011), which implements the Mesh Adaptive Direct Search derivative-free algorithm (Audet and Dennis, Jr., 2006) with the Progressive Barrier (PB) technique (Audet and Dennis, Jr., 2009) for inequality constraints. All three methods begin with a first phase (P1) that focuses on obtaining a feasible solution, executing NOMAD on the minimization of subject to . Then, NOMADP1 follows with an ordinary NOMAD run in which equalities are transformed into inequalities of the form . NOMADALP1 is a straightforward extension of the augmented Lagrangian method of Gramacy et al. (2016a), with equality constraints included in the Lagrangian function, while the unconstrained subproblem is handled by NOMAD. NOMADALPBP1 is a hybrid version in which equalities remain in the Lagrangian while inequalities are transferred into the subproblem, where NOMAD treats them with the PB. Variants without the first phase P1 were also tested, but they did not perform as well, so their results are omitted to save space.
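As an illustration of the equality-to-inequality transformation, the snippet below uses one standard relaxation, |h_k(x)| − ε ≤ 0; this is only a stand-in for the (elided) form above, and the constraint function `h` is hypothetical.

```r
## hypothetical vector of equality-constraint values at a point x
h <- function(x) c(x[1] + x[2] - 1)

## one standard relaxation of equalities into inequalities: a point
## satisfies the relaxed constraint when |h_k(x)| - eps <= 0
eq_as_ineq <- function(x, eps = 1e-3) abs(h(x)) - eps
```

A direct-search method for inequality-constrained problems can then be run on the relaxed constraints unchanged.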
Appendix C Code for calculations with WSNC random variables
Here we provide R code for calculating EI and related quantities using the sadists (Pav, 2015) and CompQuadForm (Lafaye de Micheaux, 2013) libraries, in particular their routines for WSNC (weighted sum of non-central chi-square) distribution functions (e.g., Davies, 1980; Duchesne and Lafaye de Micheaux, 2010).
Let mz and sz denote the vectorized means and variances of the normal quantity in (12). Below, rho contains the AL parameter , and we presume that the , , and quantities have been precalculated and stored in variables of the same name. Then, the relevant quantities for calculating EI in the case of a known objective can be computed as:
R> ncp <- (mz / sz)^2
R> wmin <- 2*rho*(ymin - fx - gs)
Using sadists, EI may be computed as follows using simple “quadrature”.
R> library(sadists)
R> m <- length(mz)
R> lt <- 1000
R> t <- seq(0, wmin, length=lt)
R> df <- pow <- rep(1, m)
R> ncp <- (mz / sz)^2
R> EI <- (sum(psumchisqpow(q=t, sz^2, df, ncp, pow))*wmin/(lt-1)) / (2*rho)
Alternately, via the integrate function built into R:
R> EIgrand <- function(t) { psumchisqpow(q=t, sz^2, df, ncp, pow) }
R> EI <- integrate(EIgrand, 0, wmin)$value / (2*rho)
The code using CompQuadForm is similar. Greater care is required to code the EI integrand, as the main function (davies) is not vectorized. Also, we have found that NaN is often erroneously returned when the integrand is actually zero.
R> library(CompQuadForm)
R> EIgrand <- function(x) {
+   p <- rep(NA, length(x))
+   for(i in 1:length(x))
+     p[i] <- 1 - davies(q=x[i], lambda=sz^2, delta=ncp)$Qq
+   p[!is.finite(p)] <- 0
+   return(p)
+ }
R> EI <- (sum(EIgrand(t))*wmin/(lt-1)) / (2*rho)
R> EI <- integrate(EIgrand, 0, wmin)$value / (2*rho)  ## alternately
Although this may at first seem more cumbersome, it is actually fairly easy to vectorize davies and, at the same time, replace NaN values with zero in the underlying C implementation. The result is a method that is much faster than the sadists alternative, with the added bonus of being callable from the C routines inside our laGP implementation. For more details on this re-implementation of davies in our setup, see the source code of the laGP and/or DiceOptim packages.
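The vectorize-and-sanitize pattern can be illustrated in R with a toy scalar routine standing in for davies (the real speedup comes from doing the same in C, as described above):

```r
## toy scalar routine standing in for CompQuadForm::davies; like davies,
## it can return NaN for some inputs
f_scalar <- function(q) if (q < 0) NaN else exp(-q)

## vectorized, sanitized wrapper in the spirit of the re-implementation:
## evaluate element-wise, then replace non-finite values with zero
f_vec <- function(q) {
  p <- vapply(q, f_scalar, numeric(1))
  p[!is.finite(p)] <- 0
  p
}
f_vec(c(-1, 0, 1))
```

Wrappers of this form can be passed directly to integrate, which requires a vectorized integrand.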
Some slight modification is required for the unknown objective case. Let mf and sf denote and in the code below. We have concluded that only the methods from CompQuadForm are applicable in this case.
R> alpha <- 2*rho*lambda + 2*s
R> wmin <- 2*rho*ymin - sum(s^2) - s %*% t(lambda) + sum(alpha^2/4)
R> t <- seq(-6*rho*sf, wmin, length=lt)
R> EIgrand <- function(x) {
+   p <- rep(NA, length(x))
+   madj <- 2*rho*mf
+   sadj <- 2*rho*sf
+   for(i in 1:length(x))
+     p[i] <- davies(q=x[i]-madj, lambda=sz^2, delta=ncp, sigma=sadj)$Qq
+   p <- 1 - p
+   p[!is.finite(p)] <- 0
+   return(p)
+ }
R> EI <- (sum(EIgrand(t))*(wmin+6*rho*sf)/(lt-1)) / (2*rho)
R> EI <- integrate(EIgrand, -6*rho*sf, wmin)$value / (2*rho)  ## alternately
As when is known, vectorizing the davies routine leads to dramatic speedups.
Appendix D Test problems
Below we detail the components of the three test problems explored in Section 4. They combine classic benchmarks from the Bayesian optimization and computer (surrogate) modeling literature. In total there are two objective functions and six constraints.
Objective functions: is a simple linear objective, treated as known, and is the Goldstein-Price function (rescaled and centered; Dixon and Szego, 1978; Molga and Smutnicki, 2005; Picheny et al., 2013). Further description, as well as R and MATLAB implementations, is provided by Surjanovic and Bingham (2014).
Constraint functions: and are the “toy” constraints from Gramacy et al. (2016a); is the “Branin” function (centered) (Dixon and Szego, 1978; Forrester et al., 2008; Molga and Smutnicki, 2005; Picheny et al., 2013); is taken from Parr et al. (2012); is the “Ackley” function (centered) (for details see Adorio and Diliman, 2013; Molga and Smutnicki, 2005; Bäck, 1996); and is the “Hartman” function (centered, rescaled) (see Dixon and Szego, 1978; Picheny et al., 2013, for details). Again, Surjanovic and Bingham (2014) provides a convenient one-stop reference containing R and MATLAB code and visualizations.
with:
Problems:
(LSQ)  (15)  
(GSBP)  (16)  
(LAH)  (17) 
Figure 5 shows the two-dimensional problems.
References
 Adorio and Diliman (2013) Adorio, E. and Diliman, U. (2013). “MVF: Multivariate Test Functions Library in C for Unconstrained Global Optimization.” Retrieved June 2013, from http://www.geocities.ws/eadorio/mvf.pdf.
 Audet and Dennis, Jr. (2006) Audet, C. and Dennis, Jr., J. (2006). “Mesh Adaptive Direct Search Algorithms for Constrained Optimization.” SIAM Journal on Optimization, 17, 1, 188–217.
 Audet and Dennis, Jr. (2009) — (2009). “A Progressive Barrier for DerivativeFree Nonlinear Programming.” SIAM Journal on Optimization, 20, 1, 445–472.
 Bäck (1996) Bäck, T. (1996). Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press.
 Bertsekas (1982) Bertsekas, D. P. (1982). Constrained Optimization and Lagrange Multiplier Methods. New York, NY: Academic Press.
 Bingham et al. (2014) Bingham, D., Ranjan, P., and Welch, W. (2014). “Sequential design of computer experiments for optimization, estimating contours, and related objectives.” In Statistics in Action: A Canadian Outlook, ed. J. Lawless, 109–124. Chapman & Hall.
 Booker et al. (1999) Booker, A. J., Dennis, Jr., J. E., Frank, P. D., Serafini, D. B., Torczon, V., and Trosset, M. W. (1999). “A Rigorous Framework for Optimization of Expensive Functions by Surrogates.” Structural Optimization, 17, 1–13.
 Box and Draper (1987) Box, G. E. P. and Draper, N. R. (1987). Empirical Model Building and Response Surfaces. Oxford: Wiley.
 Boyle (2007) Boyle, P. (2007). “Gaussian Processes for Regression and Optimization.” Ph.D. thesis, Victoria University of Wellington.
 Brochu et al. (2010) Brochu, E., Cora, V. M., and de Freitas, N. (2010). “A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning.” Tech. rep., University of British Columbia. ArXiv:1012.2599v1.
 Bull (2011) Bull, A. D. (2011). “Convergence Rates of Efficient Global Optimization Algorithms.” Journal of Machine Learning Research, 12, 2879–2904.
 Davies (1980) Davies, R. (1980). “Algorithm AS 155: The Distribution of a Linear Combination of chi-squared Random Variables.” Journal of the Royal Statistical Society. Series C (Applied Statistics), 29, 3, 323–333.
 Dixon and Szego (1978) Dixon, L. and Szego, G. (1978). “The global optimization problem: an introduction.” Towards global optimization, 2, 1–15.
 Duchesne and Lafaye de Micheaux (2010) Duchesne, P. and Lafaye de Micheaux, P. (2010). “Computing the distribution of quadratic forms: Further comparisons between the LiuTangZhang approximation and exact methods.” Computational Statistics and Data Analysis, 54, 858–862.
 Forrester et al. (2008) Forrester, A., Sobester, A., and Keane, A. (2008). Engineering design via surrogate modelling: a practical guide. Wiley.
 Gardner et al. (2014) Gardner, J. R., Kusner, M. J., Xu, Z., Weinberger, K. W., and Cunningham, J. P. (2014). “Bayesian Optimization with Inequality Constraints.” In Proceedings of the 31st International Conference on Machine Learning, vol. 32. JMLR, W&CP.

 Gelbart et al. (2014) Gelbart, M. A., Snoek, J., and Adams, R. P. (2014). “Bayesian Optimization with Unknown Constraints.” In Uncertainty in Artificial Intelligence (UAI).
 Ginsbourger et al. (2015) Ginsbourger, D., Picheny, V., Roustant, O., with contributions by Chevalier, C., Marmin, S., and Wagner, T. (2015). DiceOptim: Kriging-Based Optimization for Computer Experiments. R package version 1.5.
 Gramacy (2015) Gramacy, R. (2015). “laGP: LargeScale Spatial Modeling via Local Approximate Gaussian Processes in R.” Journal of Statistical Software, to appear. Available as a vignette in the laGP package.
 Gramacy et al. (2016a) Gramacy, R., Gray, G., Le Digabel, S., Lee, H., Ranjan, P., Wells, G., and Wild, S. (2016a). “Modeling an Augmented Lagrangian for Blackbox Constrained Optimization.” Technometrics, 58, 1–11.
 Gramacy et al. (2016b) — (2016b). “Rejoinder to Modeling an Augmented Lagrangian for Blackbox Constrained Optimization.” Technometrics, 58, 1, 26–29.
 Gramacy and Lee (2011) Gramacy, R. B. and Lee, H. K. H. (2011). “Optimization under Unknown Constraints.” In Bayesian Statistics 9, eds. J. Bernardo, S. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith, and M. West, 229–256. Oxford University Press.
 HernándezLobato et al. (2015) HernándezLobato, J. M., Gelbart, M. A., Hoffman, M. W., Adams, R. P., and Ghahramani, Z. (2015). “Predictive Entropy Search for Bayesian Optimization with Unknown Constraints.” In Proceedings of the 32nd International Conference on Machine Learning, vol. 37. JMLR, W&CP.
 Johnson (2014) Johnson, S. G. (2014). “The NLopt nonlinearoptimization package.” Via the R package nloptr.
 Jones et al. (1998) Jones, D. R., Schonlau, M., and Welch, W. J. (1998). “Efficient Global Optimization of Expensive Black Box Functions.” J. of Global Optimization, 13, 455–492.
 Lafaye de Micheaux (2013) Lafaye de Micheaux, P. (2013). CompQuadForm: Distribution function of quadratic forms in normal variables. R package version 1.41.
 Le Digabel (2011) Le Digabel, S. (2011). “Algorithm 909: NOMAD: Nonlinear Optimization with the MADS algorithm.” ACM Transactions on Mathematical Software, 37, 4, 44:1–44:15.
 Liu and Nocedal (1989) Liu, D. C. and Nocedal, J. (1989). “On the limited memory BFGS method for large scale optimization.” Mathematical Programming, 45, 1–3, 503–528.
 Mebane Jr and Sekhon (2011) Mebane Jr, W. R. and Sekhon, J. S. (2011). “Genetic optimization using derivatives: the rgenoud package for R.” Journal of Statistical Software, 42, 11, 1–26.
 Mockus (1989) Mockus, J. (1989). Bayesian Approach to Global Optimization: Theory and Applications. Springer.
 Mockus (1994) — (1994). “Application of Bayesian Approach to Numerical Methods of Global and Stochastic Optimization.” J. of Global Optimization, 4, 347–365.
 Mockus et al. (1978) Mockus, J., Tiesis, V., and Zilinskas, A. (1978). “The Application of Bayesian Methods for Seeking the Extremum.” Towards Global Optimization, 2, 117–129.
 Molga and Smutnicki (2005) Molga, M. and Smutnicki, C. (2005). “Test functions for optimization needs.” Retrieved June 2013, from http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf.
 Nocedal and Wright (2006) Nocedal, J. and Wright, S. J. (2006). Numerical Optimization. 2nd ed. Springer.
 Parr et al. (2012) Parr, J., Keane, A., Forrester, A., and Holden, C. (2012). “Infill sampling criteria for surrogatebased optimization with constraint handling.” Engineering Optimization, 44, 1147–1166.
 Pav (2015) Pav, S. E. (2015). sadists: Some Additional Distributions. R package version 0.2.1.
 Picheny (2014) Picheny, V. (2014). “A stepwise uncertainty reduction approach to constrained global optimization.” In Proceedings of the 7th International Conference on Artificial Intelligence and Statistics, vol. 33, 787–795. JMLR W&CP.
 Picheny et al. (2016) Picheny, V., Ginsbourger, D., and Krityakierne, T. (2016). “Comment: Some Enhancements Over the Augmented Lagrangian Approach.” Technometrics, 58, 1, 17–21.
 Picheny et al. (2013) Picheny, V., Wagner, T., and Ginsbourger, D. (2013). “A benchmark of krigingbased infill criteria for noisy optimization.” Structural and Multidisciplinary Optimization, 48, 3, 607–626.
 Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press.
 Santner et al. (2003) Santner, T. J., Williams, B. J., and Notz, W. I. (2003). The Design and Analysis of Computer Experiments. New York: SpringerVerlag.
 Sasena (2002) Sasena, M. J. (2002). “Flexibility and Efficiency Enhancement for Constrained Global Design Optimization with Kriging Approximations.” Ph.D. thesis, University of Michigan.
 Schonlau et al. (1998) Schonlau, M., Welch, W. J., and Jones, D. R. (1998). “Global versus local search in constrained optimization of computer models.” Lecture NotesMonograph Series, 11–25.
 R Development Core Team (2004) R Development Core Team (2004). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Aus. ISBN 3900051003.
 Snoek et al. (2012) Snoek, J., Larochelle, H., and Adams, R. P. (2012). “Practical Bayesian optimization of machine learning algorithms.” In Neural Information Processing Systems (NIPS).
 Surjanovic and Bingham (2014) Surjanovic, S. and Bingham, D. (2014). “Virtual Library of Simulation Experiments: Test Functions and Datasets.” Retrieved December 4, 2014, from http://www.sfu.ca/~ssurjano.
 Williams et al. (2010) Williams, B. J., Santner, T. J., Notz, W. I., and Lehman, J. S. (2010). “Sequential Design of Computer Experiments for Constrained Optimization.” In Statistical Modeling and Regression Structures, eds. T. Kneib and G. Tutz, 449–472. SpringerVerlag.