1 Introduction
The global optimization of high-dimensional black-box functions, where closed-form expressions and derivatives are unavailable, is a ubiquitous task. It arises in hyperparameter tuning (Snoek et al., 2012); in reinforcement learning, when searching for an optimal parametrized policy (Calandra et al., 2016); in simulation, when calibrating a simulator to real-world data; and in chemical engineering and materials discovery, when selecting candidates for high-throughput screening (Hernández-Lobato et al., 2017). While Bayesian optimization (BO) has emerged as a highly competitive tool for problems with a small number of tunable parameters (see, e.g., Frazier (2018); Shahriari et al. (2016)), it often scales poorly to high dimensions and large sample budgets. Several methods have been proposed for high-dimensional problems with small budgets of a few hundred samples (see the literature review below). However, these methods make strong assumptions about the objective function, such as low-dimensional subspace structure. The recent algorithms of Wang et al. (2018) and Hernández-Lobato et al. (2017) are explicitly designed for large sample budgets and do not make these assumptions. However, in practice they do not compare favorably with state-of-the-art methods from stochastic optimization such as CMA-ES (Hansen, 2006).

The optimization of high-dimensional problems is hard for several reasons. First, the search space grows exponentially with the dimension; local optima become more plentiful, while global optima become harder to find. Second, the function is often heterogeneous, which makes fitting a global surrogate model challenging. For example, in reinforcement learning problems with sparse rewards, we expect the objective function to be nearly constant in large parts of the search space. Note that the commonly used global Gaussian process (GP) models (Frazier, 2018; Williams and Rasmussen, 2006) implicitly suppose that the characteristic lengthscales and signal variances of the function are constant across the search space. Previous work on non-stationary kernels avoids this assumption, but those approaches are too computationally expensive to be applicable in our large-scale setting (Snoek et al., 2014; Taddy et al., 2009; Assael et al., 2014). Finally, since search spaces grow considerably faster than sampling budgets due to the curse of dimensionality, there will inherently be regions of large posterior uncertainty. For common myopic acquisition functions, this results in overemphasized exploration and a failure to exploit promising areas.
To overcome these challenges, we adopt a local strategy for BO. We introduce trust region BO (TuRBO), a technique for global optimization that maintains a collection of simultaneous local optimization runs, each using an independent probabilistic model. Each local surrogate enjoys the typical benefits of Bayesian modeling, namely robustness to noisy observations and rigorous uncertainty estimates. At the same time, these local surrogates allow for heterogeneous modeling of the objective function and do not suffer from over-exploration. To optimize globally, we leverage an implicit multi-armed bandit strategy at each iteration to allocate samples between these local areas and thus decide which local optimization runs to continue.
We provide a comprehensive experimental evaluation demonstrating that TuRBO outperforms the state-of-the-art from BO, evolutionary methods, simulation optimization, and stochastic optimization on a variety of benchmarks spanning reinforcement learning, robotics, and the natural sciences. An implementation of TuRBO will be made available upon publication.
1.1 Related work
BO has recently become the premier technique for the global optimization of expensive functions, with applications in hyperparameter tuning, aerospace design, chemical engineering, and materials discovery; see Frazier (2018); Shahriari et al. (2016) for an overview. However, most of BO's successes have been on low-dimensional problems with small sample budgets. This is not for a lack of trying; there have been many attempts to scale BO to more dimensions and observations. A common approach is to replace the GP model: Hutter et al. (2011) use random forests, whereas Snoek et al. (2015) apply Bayesian linear regression on features from neural networks. This neural network approach was refined by Springenberg et al. (2016), whose BOHAMIANN algorithm uses a modified Hamiltonian Monte Carlo method that is more robust and scalable than standard Bayesian neural networks. Hernández-Lobato et al. (2017) combine Bayesian neural networks with Thompson sampling (TS), which easily scales to large batch sizes. We will return to this acquisition function later.

There is a considerable body of work in high-dimensional BO (Chen et al., 2012; Kandasamy et al., 2015; Binois et al., 2015; Wang et al., 2016b; Gardner et al., 2017; Wang et al., 2018; Rolland et al., 2018; Mutny and Krause, 2018; Munteanu et al., 2019; Binois et al., 2019). Many methods exploit potential additive structure in the objective function (Kandasamy et al., 2015; Gardner et al., 2017; Wang et al., 2018). These methods typically rely on training a large number of GPs (corresponding to different additive structures) and therefore do not scale to large evaluation budgets. Other methods rely on a mapping between the high-dimensional space and an unknown low-dimensional subspace to scale to large numbers of observations (Wang et al., 2016b; Munteanu et al., 2019; Garnett et al., 2013). The BOCK algorithm of Oh et al. (2018) uses a cylindrical transformation of the search space to achieve scalability to high dimensions. Ensemble Bayesian optimization (EBO) (Wang et al., 2018) uses an ensemble of additive GPs together with a batch acquisition function to scale BO to tens of thousands of observations and high-dimensional spaces. Recently, Munteanu et al. (2019) have proposed the general HeSBO framework that extends GP-based BO algorithms to high-dimensional problems using a novel subspace embedding that overcomes the limitations of the Gaussian projections used by Wang et al. (2016b) and Binois et al. (2015, 2019). From this area of research, we compare to BOCK, BOHAMIANN, EBO, and HeSBO.
To acquire large numbers of observations, large-scale BO usually selects points in batches to be evaluated in parallel. While several batch acquisition functions have recently been proposed (Chevalier and Ginsbourger, 2013; Shah and Ghahramani, 2015; Wang et al., 2016a; Wu and Frazier, 2016; Wu et al., 2017; Marmin et al., 2015; González et al., 2016), these approaches do not scale to large batch sizes in practice. TS (Thompson, 1933) is particularly lightweight and easy to implement as a batch acquisition function, since its computational cost scales linearly with the batch size. Although originally developed for bandit problems (Russo et al., 2018), it has recently shown its value in BO (Hernández-Lobato et al., 2017; Baptista and Poloczek, 2018; Kandasamy et al., 2018). In practice, TS is usually implemented by drawing a realization of the unknown objective function from the surrogate model's posterior on a discretized search space. TS then evaluates the objective function at the optimum of that realization. This technique is easily extended to batches by drawing multiple realizations (see the supplementary material for details).
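To make the mechanics concrete, the following sketch draws a batch of Thompson samples over a discretized candidate set. The posterior mean and covariance here are random stand-ins for a fitted surrogate's posterior, not part of any specific library.

```python
import numpy as np

# Batch Thompson sampling on a discretized candidate set. The posterior
# mean `mu` and covariance `cov` below are hypothetical stand-ins; a real
# implementation would take them from a fitted GP.
rng = np.random.default_rng(0)

n_candidates, q = 100, 5                  # discretization size, batch size
mu = rng.normal(size=n_candidates)        # stand-in posterior mean
A = rng.normal(size=(n_candidates, n_candidates))
cov = A @ A.T / n_candidates + 1e-6 * np.eye(n_candidates)  # stand-in posterior covariance

# Draw q independent realizations of the posterior and minimize each one;
# the cost grows linearly with the batch size q.
samples = rng.multivariate_normal(mu, cov, size=q)   # shape (q, n_candidates)
batch_idx = samples.argmin(axis=1)                   # one minimizer per realization
```

The `q` minimizers indexed by `batch_idx` would then be evaluated in parallel on the true objective.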
Evolutionary algorithms are a popular approach for optimizing black-box functions when thousands of evaluations are available; see Jin et al. (2005) for an overview in stochastic settings. We compare to the successful covariance matrix adaptation evolution strategy (CMA-ES) of Hansen (2006). CMA-ES performs a stochastic search and maintains a multivariate normal sampling distribution over the search space. The evolutionary techniques of recombination and mutation correspond to adaptations of the mean and covariance matrix of that distribution.
High-dimensional problems with large sample budgets have also been studied extensively in operations research and simulation optimization; see Dong et al. (2017b) for a survey. Here the successful trust region (TR) methods are based on a local surrogate model in a region (often a sphere) around the best solution. The trust region is expanded or shrunk depending on the improvement in obtained solutions; see Yuan (2000) for an overview. We compare to BOBYQA (Powell, 2007), a state-of-the-art TR method that uses a quadratic approximation of the objective function. We also include the Nelder-Mead (NM) algorithm (Nelder and Mead, 1965). For a d-dimensional space, NM maintains a simplex of d+1 vertices that adaptively moves across the surface by reflecting the vertex with the worst function value through the centroid of the simplex spanned by the remaining vertices. Finally, we also consider the popular quasi-Newton method BFGS (Zhu et al., 1997), where gradients are obtained using finite differences. For other work that uses local surrogate models, see, e.g., Krityakierne and Ginsbourger (2015); Wabersich and Toussaint (2016); Acerbi and Ji (2017); Akrour et al. (2017); McLeod et al. (2018).
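The NM reflection step described above can be sketched in a few lines; `reflect_worst` and its reflection coefficient `alpha` are illustrative names, not taken from a particular implementation.

```python
import numpy as np

# One Nelder-Mead reflection step (minimization): reflect the worst vertex
# through the centroid of the remaining vertices. This is a sketch of the
# move described in the text, not a complete NM implementation.
def reflect_worst(simplex, f_vals, alpha=1.0):
    worst = int(np.argmax(f_vals))                  # vertex with the worst (largest) value
    others = np.delete(simplex, worst, axis=0)      # remaining d vertices
    centroid = others.mean(axis=0)
    return centroid + alpha * (centroid - simplex[worst])
```

A full NM iteration would additionally consider expansion, contraction, and shrink moves depending on how the reflected point compares to the current vertices.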
2 The trust region Bayesian optimization algorithm
In this section, we propose an algorithm for optimizing high-dimensional black-box functions. In particular, suppose that we wish to solve

  min_{x ∈ Ω} f(x),

where f : Ω → ℝ and Ω = [0, 1]^d. We observe potentially noisy values y(x) = f(x) + ε, where ε ~ N(0, σ²). BO relies on the ability to construct a global model that is eventually accurate enough to uncover a global optimizer. As discussed previously, this is challenging due to the curse of dimensionality and the heterogeneity of the function. To address these challenges, we propose to abandon global surrogate modeling and instead maintain several independent local models, each involved in a separate local optimization run. To achieve global optimization in this framework, we allocate samples across the local runs via an implicit multi-armed bandit approach. This yields an efficient acquisition strategy that directs samples towards promising local optimization runs. We begin by detailing a single local optimization run, and then discuss how multiple runs are managed.
Local modeling.
To achieve principled local optimization in the gradient-free setting, we draw inspiration from a class of TR methods from stochastic optimization (Yuan, 2000). These methods make suggestions using a (simple) surrogate model inside a TR. The region, often a sphere or a polytope centered at the best solution, is where the surrogate model is believed to accurately model the function. For example, the popular COBYLA (Powell, 1994) method approximates the objective function using a local linear model. Intuitively, while linear and quadratic surrogates are likely to be inadequate models globally, they can be accurate in a sufficiently small TR. However, there are two challenges with traditional TR methods. First, deterministic examples such as COBYLA are notorious for handling noisy observations poorly. Second, simple surrogate models might require overly small trust regions to provide accurate modeling behavior. Therefore, we use GP surrogate models within a TR. This allows us to inherit the robustness to noise and the rigorous reasoning about uncertainty that global BO enjoys.
Trust regions.
We choose our TR to be a hypercube of side length L, always centered at the best solution found so far, denoted by x*. In the noise-free case, we set x* to the location of the best observation so far. In the presence of noise, we use the observation with the smallest posterior mean under the surrogate model. At the beginning of a given local optimization run, we initialize the side length to a base value L_init. To perform a single local optimization run, we use an acquisition function at each iteration to select a batch of q candidates x_1, ..., x_q, restricted to lie within the TR.
If the side length were large enough to contain the whole space, this would be equivalent to running standard global BO. Therefore, the evolution of the side length L is critical. On the one hand, a TR should be sufficiently large to contain good solutions. On the other hand, it should be small enough to ensure that the local model is accurate within the TR. The typical behavior is to expand a TR when the optimizer "makes progress", i.e., finds better solutions in that region, and to shrink it when the optimizer appears stuck. Therefore, following, e.g., Nelder and Mead (1965), we shrink a TR after too many consecutive "failures", and expand it after enough consecutive "successes". We define a "success" as a candidate that improves upon x*, and a "failure" as a candidate that does not. After τ_succ consecutive successes, we double the size of the TR, i.e., L ← min{L_max, 2L}. After τ_fail consecutive failures, we halve it: L ← L/2. We reset the success and failure counters to zero after changing the size of the TR. Whenever L falls below a given minimum threshold L_min, we discard the respective TR and initialize a new one with side length L_init. The quantities τ_succ, τ_fail, L_min, L_max, and L_init are hyperparameters of TuRBO; see the supplementary material for the values used in the experimental evaluation.
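A minimal sketch of this bookkeeping, using the hyperparameter names from the text (τ_succ, τ_fail, L_init, L_min, L_max); the default values below are placeholders, not the paper's settings.

```python
# Sketch of the trust-region side-length update described above.
# The default hyperparameter values are placeholders for illustration.
def update_tr_length(L, success, counters, tau_succ=3, tau_fail=5,
                     L_min=2**-7, L_max=1.6, L_init=0.8):
    n_succ, n_fail = counters
    if success:                      # a candidate improved upon x*
        n_succ, n_fail = n_succ + 1, 0
    else:                            # no candidate improved upon x*
        n_succ, n_fail = 0, n_fail + 1
    if n_succ == tau_succ:           # expand after tau_succ consecutive successes
        L, n_succ = min(2.0 * L, L_max), 0
    elif n_fail == tau_fail:         # shrink after tau_fail consecutive failures
        L, n_fail = L / 2.0, 0
    if L < L_min:                    # discard the TR and restart a fresh one
        L, n_succ, n_fail = L_init, 0, 0
    return L, (n_succ, n_fail)
```

Starting from L = L_init, three consecutive successes double the side length (capped at L_max), five consecutive failures halve it, and a side length below L_min triggers a restart at L_init.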
Trust region Bayesian optimization.
So far, we have detailed a single local BO strategy using a TR method. Intuitively, we could make this algorithm (more) global by random restarts. However, from a probabilistic perspective, this is likely to utilize our evaluation budget inefficiently. Just as we reason about which candidates are most promising within a local optimization run, we can reason about which local optimization run is “most promising.”
Therefore, TuRBO maintains m trust regions simultaneously. Each trust region TR_ℓ, with ℓ ∈ {1, ..., m}, is a hypercube of side length L_ℓ and uses an independent local GP model. This gives rise to a classical exploration-exploitation trade-off, which we model by a multi-armed bandit that treats each TR as a lever. Note that this provides an advantage over traditional TR algorithms in that TuRBO need not expend samples in unpromising trust regions.
In each iteration, we need to select a batch of candidates drawn from the union of all trust regions, and update all local optimization problems for which candidates were drawn. We find that TS provides a principled solution both to selecting candidates within a single TR and to selecting candidates across the set of trust regions simultaneously. To select the i-th candidate, we draw a realization f_ℓ^(i) of the posterior function from the local GP within each TR_ℓ, where the posterior is conditioned on the observations collected so far in that region. We then select the i-th candidate as the point that minimizes the sampled function value across all trust regions:

  x^(i) ∈ argmin_ℓ argmin_{x ∈ TR_ℓ} f_ℓ^(i)(x).

That is, we select the point with the smallest function value after concatenating a Thompson sample from each TR, for i = 1, ..., q. We refer to the supplementary material for additional details.
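The selection rule above can be sketched as follows, with random numbers standing in for the posterior draws from the m local GPs.

```python
import numpy as np

# Selecting one candidate across m trust regions via Thompson sampling:
# draw a posterior realization per region, then take the global minimizer.
# The draws here are hypothetical stand-ins for real GP posterior samples.
rng = np.random.default_rng(1)
m, n_per_region = 5, 50

# f_draws[l][j]: value of region l's posterior realization at its j-th
# discretized candidate point.
f_draws = [rng.normal(loc=l * 0.1, size=n_per_region) for l in range(m)]

# Concatenating a Thompson sample from each TR and minimizing is the same
# as minimizing within each region and then across regions.
best_region = int(np.argmin([draw.min() for draw in f_draws]))
best_point = int(f_draws[best_region].argmin())
```

Repeating this q times with fresh posterior draws yields the full batch, so busy and promising regions naturally receive more candidates.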
3 Numerical experiments
In this section, we evaluate TuRBO on a wide range of problems: a 14D robot pushing problem, a 60D rover trajectory planning problem, a 12D cosmological constant estimation problem, a 12D lunar landing reinforcement learning problem, and a 200D synthetic problem. All problems are multimodal and challenging for many global optimization algorithms. We consider a variety of batch sizes and evaluation budgets to fully examine the performance and robustness of TuRBO. The values of the hyperparameters τ_succ, τ_fail, L_min, L_max, and L_init are described in the supplementary material.
We compare TuRBO to a comprehensive selection of state-of-the-art baselines: BFGS, BOCK, BOHAMIANN, CMA-ES, BOBYQA, EBO, GP-TS, HeSBO-TS, Nelder-Mead (NM), and random search (RS). Here, GP-TS refers to TS with a global GP model using the Matérn kernel. HeSBO-TS combines GP-TS with a subspace embedding and thus effectively optimizes in a low-dimensional space. Therefore, a small sample budget may suffice, which allows us to run several invocations in parallel, following Wang et al. (2016b). This may improve performance, since each embedding may "fail" with some probability (Munteanu et al., 2019), i.e., it may not contain an active subspace even if one exists. Note that in each step HeSBO-TS recommends, and evaluates, the point of optimal posterior mean among its GP models. The standard expected improvement (EI) acquisition criterion used in BOCK and BOHAMIANN is replaced by (batch) TS, so that all methods use the same criterion, which allows for a direct comparison. Methods that attempt to learn an additive decomposition lack scalability and are thus omitted. BFGS approximates the gradient via finite differences and thus requires d+1 evaluations for each step. Furthermore, NM, BFGS, and BOBYQA are inherently sequential and therefore have an edge by leveraging all gathered observations; however, they are considerably more time consuming in terms of wall-clock time, since we are working with large batches.

We supplement the optimization test problems with three additional experiments: one showing that TuRBO achieves a linear speedup from large batch sizes, an example where local GPs are more accurate than global GPs, and an analytical experiment demonstrating the locality of TuRBO. Performance plots show the mean performance with one standard error. Overall, we observe that TuRBO consistently finds excellent solutions, outperforming the other methods on most problems. Experimental results for a small-budget experiment on four synthetic functions are shown in the supplementary material, where we also provide more details on the experimental setup and runtimes for all algorithms.

3.1 Robot pushing
The robot pushing problem is a noisy 14D control problem considered in Wang et al. (2018). We run each method for a total of 10K evaluations with batch size q. TuRBO-1 and all other methods are initialized with 150 points, except for TuRBO-30, where we use 30 initial points for each trust region; this avoids having TuRBO-30 consume its full evaluation budget on initialization. We use HeSBO-TS with a target dimensionality of 8. TuRBO-m denotes the variant of TuRBO that maintains m local models in parallel. Fig. 2 shows the results: TuRBO-1 and TuRBO-30 outperform the alternatives. TuRBO-30 starts slower since it is initialized with 1K points, but makes progress much faster than TuRBO-1 by strategically sampling the most promising local regions. CMA-ES and BOBYQA outperform the other BO methods by a large margin. Note that Wang et al. (2018) reported the median value achieved by EBO after 30K evaluations; TuRBO-1 surpasses this in mean and median reward after only 2K samples.
3.2 Rover trajectory planning
Here the goal is to optimize the locations of 30 points in the 2D plane that determine the trajectory of a rover (Wang et al., 2018). Every algorithm is run for 200 steps with batch size q, thus collecting a total of 20K evaluations. We use 200 initial points for all methods, except for TuRBO-30, where we use 100 initial points for each region. Fig. 2 summarizes the performance.
We observe that TuRBO-1 and TuRBO-30 outperform all other algorithms after a few thousand evaluations. TuRBO-30 starts slowly because of the initial 3K random evaluations, but eventually outperforms TuRBO-1. Wang et al. (2018) reported the mean value achieved by EBO after 35K evaluations; TuRBO-1 surpasses this in mean and median reward after only 1K evaluations. HeSBO-TS again uses a reduced target dimensionality.
3.3 Cosmological constant learning
In the "cosmological constants" problem, the task is to calibrate a physics simulator (https://lambda.gsfc.nasa.gov/toolbox/lrgdr/) to observed data. The parameters to tune include various physical constants, such as the density of certain types of matter and Hubble's constant. In this paper, we use a more challenging version of the problem in Kandasamy et al. (2015) by tuning 12 parameters rather than 9, and by using substantially larger parameter bounds. We used 2K evaluations, batch size q, and 50 initial points. TuRBO-6 uses 25 initial points for each local model. HeSBO-TS uses a target dimensionality of 8. Fig. 3 (left) shows the results, with TuRBO-6 performing best, followed by NM and BOCK.
3.4 Lunar landing reinforcement learning
Here the goal is to learn a controller for a lunar lander implemented in the OpenAI gym (https://gym.openai.com/envs/LunarLander-v2). The state space of the lunar lander consists of its position, angle, time derivatives, and whether or not either leg is in contact with the ground. There are four possible actions for each frame, corresponding to firing a booster engine left, right, or up, or doing nothing. The objective is to maximize the average final reward over a fixed set of 50 randomly generated terrains, initial positions, and velocities. We observed that the simulation can be sensitive to even tiny perturbations. Fig. 3 shows the results for a total of 1500 function evaluations, batch size q, and 50 initial points for all algorithms. For this problem, we use HeSBO-TS in an 8-dimensional subspace. TuRBO-6 learns the best controllers and, in particular, achieves better rewards than the handcrafted controller provided by OpenAI, whose performance is depicted by the blue horizontal line. TuRBO-1 generally finds better controllers than EBO, but sometimes gets stuck in bad local optima, which deteriorates its mean performance and demonstrates the importance of allocating samples across multiple trust regions.
3.5 The 200dimensional Ackley function
We examine performance on the 200-dimensional Ackley function on a hypercube domain. This function is challenging as it has no redundant dimensions. Previous work on high-dimensional BO has mostly considered low-dimensional functions embedded in a high-dimensional space. We only consider TuRBO-1 because of the large number of dimensions, where there may be little benefit from using multiple TRs. EBO is excluded from the plot since its computation time exceeded 30 days per replication. HeSBO-TS uses a target dimension of 20. Fig. 4 shows the results for a total of 10K function evaluations, batch size q, and 200 initial points for all algorithms.

HeSBO-TS and BOBYQA perform well initially, but are eventually outperformed by TuRBO-1. BO methods that use a global GP model overemphasize exploration and make little progress.
3.6 The advantage of local models over global models
We investigate the performance of local and global GP models on the 14D robot pushing problem from §3.1. We replicate the conditions of the optimization experiments as closely as possible for a regression experiment, including, e.g., the parameter bounds. We choose 20 uniformly distributed hypercubes of side length 0.4, each containing 200 uniformly distributed training points. We train a global GP on all 4000 examples, as well as a separate local GP for each hypercube. The local GPs have the advantage of being able to learn different hyperparameters in each region, while the global GP has the advantage of having access to all of the data. Fig. 5 shows the predictive performance (in log loss) on held-out data. We also show the distribution of fitted hyperparameters for both the local and global GPs.

We see that the hyperparameters (especially the signal variance) vary substantially across regions. Furthermore, the local GPs perform better than the global GP in every one of the 50 repeated trials, and the improvement in average log loss is statistically significant. This experiment confirms that the local approach improves the predictive power of the models while also reducing the computational overhead of the GP. The learned local noise variance in Fig. 5 is clearly bimodal, confirming the heteroscedasticity of the objective across regions; the global GP is forced to learn a large noise variance to avoid being penalized for outliers.
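For reference, the held-out log loss of a Gaussian predictive distribution can be computed as below; this is the standard negative predictive log-likelihood formula, not code from the paper, and `gaussian_log_loss` is an illustrative name.

```python
import numpy as np

# Mean negative log likelihood of held-out targets under a Gaussian
# predictive distribution with per-point mean and variance.
def gaussian_log_loss(y_true, pred_mean, pred_var):
    return float(np.mean(0.5 * np.log(2 * np.pi * pred_var)
                         + 0.5 * (y_true - pred_mean) ** 2 / pred_var))
```

Lower values are better; an overly large learned noise variance inflates the first term on every test point, which is the penalty the global GP pays in this experiment.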
3.7 Why highdimensional spaces are challenging
In this section, we illustrate why the restart and bandit strategy of TuRBO is so effective on our test problems. Restarted TRs find solutions that are far apart and of varying quality, which highlights the multimodal nature of the problems and gives TuRBO a distinct advantage.
We ran TuRBO-1 (with a single trust region) for 50 restarts on the 60D rover trajectory planning problem from §3.2 and logged the volume of the TR and its center after each iteration. Fig. 6 shows the volume of the TR, the arc length of the TR center's trajectory, the final objective value, and the distance from each final solution to its nearest neighbor. The left two plots confirm that, within a trust region, the optimization is indeed highly local: the volume of any given trust region decreases rapidly and covers only a small fraction of the total search space. The right two plots show that the solutions found by TuRBO are far apart and of varying quality, demonstrating the value of performing multiple local search runs in parallel.
3.8 The efficiency of large batches
Recall that combining multiple samples into single batches provides substantial speedups in wall-clock time, but poses the risk of inefficiency since sequential sampling can leverage more information. In this section, we investigate whether large batches are efficient for TuRBO. Note that Hernández-Lobato et al. (2017) and Kandasamy et al. (2018) have shown that the TS acquisition function is efficient for batch acquisition with a single global surrogate model. We study TuRBO-1 on the robot pushing problem from §3.1 for a range of batch sizes. The algorithm takes the same total number of samples for each batch size, and we average the results over 30 replications. Fig. 7 (left) shows the reward for each batch size with respect to the number of batches: larger batch sizes obtain better results for the same number of iterations. Fig. 7 (right) shows the performance as a function of evaluations. We see that the speedup is essentially linear.
4 Conclusions
The global optimization of computationally expensive black-box functions in high-dimensional spaces is an important and timely topic (Frazier, 2018; Munteanu et al., 2019). We proposed the TuRBO algorithm, which takes a novel local approach to global optimization. Instead of fitting a global surrogate model and trading off exploration and exploitation over the whole search space, TuRBO maintains a collection of local probabilistic models. These models provide local search trajectories that are able to quickly discover excellent objective values. This local approach is complemented by a global bandit strategy that allocates samples across the trust regions, implicitly trading off exploration and exploitation. A comprehensive experimental evaluation demonstrates that TuRBO outperforms state-of-the-art Bayesian optimization and operations research methods on a variety of complex real-world tasks.
In the future, we plan to extend TuRBO to learn local low-dimensional structure in order to improve the accuracy of the local Gaussian process models. This extension is particularly interesting for high-dimensional optimization when derivative information is available (Constantine, 2015; Eriksson et al., 2018). This situation often arises in engineering, where objectives are often modeled by PDEs solved by adjoint methods, and in machine learning, where gradients are available via automatic differentiation. Ultimately, it is our hope that this work spurs interest in the merits of Bayesian local optimization, particularly in the high-dimensional setting.
References
 Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. In Advances in neural information processing systems, pp. 1836–1846. Cited by: §1.1.
 Local Bayesian optimization of motor skills. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pp. 41–50. Cited by: §1.1.
 Heteroscedastic treed Bayesian optimisation. arXiv preprint arXiv:1410.7172. Cited by: §1.
 Bayesian optimization of combinatorial structures. In International Conference on Machine Learning, pp. 462–471. Cited by: §1.1.
 A warped kernel improving robustness in Bayesian optimization via random embeddings. In International Conference on Learning and Intelligent Optimization, pp. 281–286. Cited by: §1.1.
 On the choice of the lowdimensional domain for global optimization via random embeddings. Journal of Global Optimization. Cited by: §1.1.

Bayesian optimization for learning gaits under uncertainty.
Annals of Mathematics and Artificial Intelligence
76 (12), pp. 5–23. Cited by: §1.  Joint optimization and variable selection of highdimensional Gaussian processes. In Proceedings of the International Conference on Machine Learning, pp. 1379–1386. Cited by: §1.1.
 Fast computation of the multipoints expected improvement with applications in batch selection. In International Conference on Learning and Intelligent Optimization, pp. 59–69. Cited by: §1.1.
 Active subspaces: emerging ideas for dimension reduction in parameter studies. Vol. 2, SIAM. Cited by: §4.
 Scalable log determinants for Gaussian process kernel learning. In Advances in Neural Information Processing Systems, pp. 6327–6337. Cited by: Appendix C.
 Empirically comparing the finitetime performance of simulationoptimization algorithms. In Winter Simulation Conference, pp. 2206–2217. Cited by: §1.1.
 Scaling Gaussian process regression with derivatives. In Advances in Neural Information Processing Systems, pp. 6867–6877. Cited by: §4.
 A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811. Cited by: §1.1, §1, §1, §4.
 Discovering and exploiting additive structure for Bayesian optimization. In International Conference on Artificial Intelligence and Statistics, pp. 1311–1319. Cited by: §1.1.
 GPyTorch: Blackbox matrixmatrix Gaussian process inference with GPU acceleration. In Advances in Neural Information Processing Systems, pp. 7576–7586. Cited by: Appendix C.
 Active learning of linear embeddings for Gaussian processes. arXiv preprint arXiv:1310.6740. Cited by: §1.1.
 Batch Bayesian optimization via local penalization. In Artificial intelligence and statistics, pp. 648–657. Cited by: §1.1.

The CMA evolution strategy: A comparing review.
In
Towards a New Evolutionary Computation
, pp. 75–102. Cited by: Appendix B, §1.1, §1.  Parallel and distributed Thompson sampling for largescale accelerated exploration of chemical space. In Proceedings of the International Conference on Machine Learning, pp. 1470–1479. Cited by: Appendix E, §1.1, §1.1, §1, §3.8.
 Sequential modelbased optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pp. 507–523. Cited by: §1.1.
 Evolutionary optimization in uncertain environmentsa survey. IEEE Transactions on Evolutionary Computation 9 (3), pp. 303–317. Cited by: §1.1.
 The nlopt nonlinearoptimization package, 2014. URL: http://abinitio.mit.edu/nlopt. Cited by: Appendix B.
 SciPy: Open source scientific tools for Python. Cited by: Appendix B.
 Parallelised Bayesian optimisation via Thompson sampling. In International Conference on Artificial Intelligence and Statistics, pp. 133–142. Cited by: Appendix E, §1.1, §3.8.
 High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning, pp. 295–304. Cited by: §1.1, §3.3.
 Global optimization with sparse and local Gaussian process models. In International Workshop on Machine Learning, Optimization and Big Data, pp. 185–196. Cited by: §1.1.
 Differentiating the multipoint expected improvement for optimal batch design. In International Workshop on Machine Learning, Optimization and Big Data, pp. 37–48. Cited by: §1.1.
 Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 (2), pp. 239–245. Cited by: Appendix A.
 Optimization, fast and slow: optimally switching between local and Bayesian optimization. arXiv preprint arXiv:1805.08610. Cited by: §1.1.
 A framework for Bayesian optimization in embedded subspaces. In Proceedings of the International Conference on Machine Learning, Note: Accepted for publication. The code is available at https://github.com/aminnayebi/HesBO. Cited by: §1.1, §3, §4.
 Efficient high dimensional Bayesian optimization with additivity and quadrature Fourier features. In Advances in Neural Information Processing Systems, pp. 9005–9016. Cited by: §1.1.
 A simplex method for function minimization. The Computer Journal 7 (4), pp. 308–313. Cited by: §1.1, §2.
 BOCK : Bayesian optimization with cylindrical kernels. In Proceedings of the 35th International Conference on Machine Learning, Vol. 80, pp. 3868–3877. Cited by: §1.1.

 A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis, pp. 51–67. Cited by: §2.
 A view of algorithms for optimization without derivatives. Mathematics Today - Bulletin of the Institute of Mathematics and its Applications 43 (5), pp. 170–174. Cited by: §1.1.

 A stochastic radial basis function method for the global optimization of expensive functions. INFORMS Journal on Computing 19 (4), pp. 497–509. Cited by: Appendix D.
 High-dimensional Bayesian optimization via additive models with overlapping groups. In International Conference on Artificial Intelligence and Statistics, pp. 298–307. Cited by: §1.1.
 A tutorial on Thompson sampling. Foundations and Trends in Machine Learning 11 (1), pp. 1–96. Cited by: §1.1.
 Parallel predictive entropy search for batch global optimization of expensive objective functions. In Advances in Neural Information Processing Systems, pp. 3330–3338. Cited by: §1.1.
 Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104 (1), pp. 148–175. Cited by: §1.1, §1.
 Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959. Cited by: §1.
 Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pp. 2171–2180. Cited by: §1.1.
 Input warping for Bayesian optimization of nonstationary functions. In International Conference on Machine Learning, pp. 1674–1682. Cited by: §1.
 Bayesian optimization with robust Bayesian neural networks. In Advances in Neural Information Processing Systems, pp. 4134–4142. Cited by: §1.1.
 Bayesian guided pattern search for robust local optimization. Technometrics 51 (4), pp. 389–401. Cited by: §1.
 Cosmological constraints from the SDSS luminous red galaxies. Physical Review D 74 (12), pp. 123507. Cited by: §F.3.
 On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25 (3/4), pp. 285–294. Cited by: Appendix E, §1.1.
 Advancing Bayesian optimization: the mixed-global-local (MGL) kernel and length-scale cool down. arXiv preprint arXiv:1612.03117. Cited by: §1.1.
 Parallel Bayesian global optimization of expensive functions. arXiv preprint arXiv:1602.05149. Cited by: §1.1.
 Batched large-scale Bayesian optimization in high-dimensional spaces. In International Conference on Artificial Intelligence and Statistics, pp. 745–754. Cited by: §F.1, §F.2, §1.1, §1, §3.1, §3.2, §3.2.
 Bayesian optimization in a billion dimensions via random embeddings. Journal of Artificial Intelligence Research 55, pp. 361–387. Cited by: §1.1, §3.
 Gaussian processes for machine learning. Vol. 2, MIT Press. Cited by: §1.
 The parallel knowledge gradient method for batch Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 3126–3134. Cited by: §1.1.
 Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, pp. 5267–5278. Cited by: §1.1.
 A review of trust region algorithms for optimization. In International Council for Industrial and Applied Mathematics, Vol. 99, pp. 271–282. Cited by: §1.1, §2.
 Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS) 23 (4), pp. 550–560. Cited by: §1.1.
Supplementary material
In §A we provide additional benchmark results on synthetic problems. The algorithms considered in this paper are explained in more detail in §B. We describe how we leverage scalable GP regression in §C. We describe the hyperparameters of TuRBO in §D and give additional details on how we shrink and expand the trust regions. Thompson sampling is summarized in §E. Finally, we describe the test problems in §F and provide runtimes for all benchmark problems in §G.
Appendix A Synthetic experiments
We present results on four popular synthetic problems: Ackley, Levy, Rastrigin, and the 6D Hartmann function, each over its standard box-constrained domain. The optimizers are given a budget of 50 batches, so the total number of function evaluations is 50 times the batch size. All methods use 20 initial points from a symmetric Latin hypercube design (SLHD) [McKay et al., 1979] except for TuRBO-5, where we use 10 initial points in each local region. To compute confidence intervals on the results, we use 30 independent repeated trials.
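The space-filling initialization described above can be sketched as follows. This uses SciPy's plain Latin hypercube sampler as a stand-in for the symmetric Latin hypercube design (SLHD) of McKay et al. [1979]; the point counts are the ones quoted above.

```python
import numpy as np
from scipy.stats import qmc

def initial_design(n_points: int, dim: int, seed: int = 0) -> np.ndarray:
    """Space-filling initial design on [0, 1]^dim.

    Note: a plain Latin hypercube is used here as an approximation of the
    symmetric Latin hypercube design (SLHD) mentioned in the text.
    """
    sampler = qmc.LatinHypercube(d=dim, seed=seed)
    return sampler.random(n=n_points)

# 20 initial points in 10 dimensions, as in the synthetic experiments.
X0 = initial_design(20, 10)
```

Each of the 20 strata per dimension receives exactly one point, which is the defining property of a Latin hypercube design.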
Fig. 8 shows good performance for TuRBO-1 and TuRBO-5 on all test problems. TuRBO-1 and TuRBO-5 outperform the other methods on Ackley and consistently find solutions close to the global optimum. The results for Levy also show that TuRBO-5 clearly performs best. However, TuRBO-1 found solutions close to the global optimum in some trials but struggled in others, which shows that a good starting position is important. On Rastrigin, BOBYQA and BFGS perform comparably to TuRBO-1. Note that Rastrigin is challenging and most random initial positions give large function values. Because TuRBO-5 samples evenly from its local regions, it makes relatively slow progress. In contrast, the 6D Hartmann function is much easier and most methods converge quickly. Interestingly, BOHAMIANN struggles compared to the other BO methods, suggesting that its model fit is inaccurate compared to the GP-based methods.
Appendix B Algorithms background
In this section, we provide additional background on the three categories of competing optimization methods: traditional local optimizers, evolutionary algorithms, and other recent work in large-scale BO. Namely, we compare TuRBO to Nelder-Mead (NM), BOBYQA, BFGS, EBO, Bayesian optimization with cylindrical kernels (BOCK), HeSBO-TS, BOHAMIANN, Thompson sampling with a global GP (GP-TS), CMA-ES, and random search (RS). This is an extensive set of state-of-the-art optimization algorithms from both local and global optimization.
For local optimization, we use the popular NM, BOBYQA, and BFGS methods with multiple restarts. They are all initialized from the best of a few initial points. We use the SciPy [Jones et al., 2014] implementations of NM and BFGS and the NLopt [Johnson, 2014] implementation of BOBYQA.
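The restart protocol above (start each local run from the best of a few random points) can be sketched as follows; the restart and initial-point counts here are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, bounds, n_restarts=3, n_init=10, seed=0):
    """Local optimization with restarts, each started from the best of a
    few random points. The counts are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lb, ub = bounds[:, 0], bounds[:, 1]
    best_x, best_f = None, np.inf
    for _ in range(n_restarts):
        # Pick the best of a few random points as the starting position.
        cand = rng.uniform(lb, ub, size=(n_init, len(lb)))
        x0 = cand[np.argmin([f(x) for x in cand])]
        res = minimize(f, x0, method="Nelder-Mead")
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Usage: minimize a simple quadratic on [-5, 5]^2.
x_star, f_star = multistart_minimize(lambda x: float(np.sum(x ** 2)),
                                     [(-5, 5), (-5, 5)])
```

The same wrapper applies unchanged to BFGS (`method="BFGS"`); BOBYQA lives in NLopt rather than SciPy.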
Evolutionary algorithms often perform well for black-box optimization with a large number of function evaluations. These methods are appropriate for large batch sizes since they evaluate a population in parallel. We compare to CMA-ES [Hansen, 2006], as it outperforms differential evolution, genetic algorithms, and particle swarms in most of our experiments. We use the pycma implementation (https://github.com/CMA-ES/pycma) with the default settings and a population size equal to the batch size. The population is initialized from the best of a few initial points.

To the best of our knowledge, EBO is the only BO algorithm that has been applied to problems with large batch sizes and tens of thousands of evaluations. We also compare to GP-TS, BOCK, HeSBO-TS, and BOHAMIANN, all using Thompson sampling as the acquisition function. The original implementations of BOCK and BOHAMIANN often take hours to suggest a single point and do not support batch suggestions, which necessitated changes to use them in our high-dimensional setting with large batch sizes. To generate a discretized candidate set for each batch, we draw points from a scrambled Sobol sequence.
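A candidate-set generator of the kind described above might look like this; the candidate-set size is an assumption, since the value used in the paper is not shown here.

```python
import numpy as np
from scipy.stats import qmc

def sobol_candidates(n_cand: int, dim: int, seed: int = 0) -> np.ndarray:
    """Discretized candidate set from a scrambled Sobol sequence, as used to
    adapt BOCK/BOHAMIANN to batch Thompson sampling. The requested size is
    rounded up to a power of two to keep the sequence's balance properties."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    m = int(np.ceil(np.log2(n_cand)))
    return sampler.random_base2(m=m)  # returns 2**m points in [0, 1)^dim

# Example: roughly 2000 candidates in 5 dimensions (rounded up to 2048).
cands = sobol_candidates(2000, 5)
```

A fresh seed (or a re-scrambled sampler) per batch gives an independent candidate set each time, matching the per-batch regeneration described above.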
Appendix C Gaussian process regression
We further provide details on both the computational scaling and the modeling setup for the GP. To address computational issues, we use GPyTorch [Gardner et al., 2018] for scalable GP regression. GPyTorch follows Dong et al. [2017a] in solving linear systems using the conjugate gradient (CG) method and approximating the log-determinant via the Lanczos process. Without GPyTorch, running BO with a GP model for more than a few thousand evaluations would be infeasible, as classical approaches to GP regression scale cubically in the number of data points.
On the modeling side, the GP uses a Matérn-5/2 kernel and a constant mean function in all experiments. The GP hyperparameters are refit before proposing a new batch by optimizing the log-marginal likelihood. For each refit, we initialize the optimizer from the best of a few hyperparameter settings randomly sampled in log-space. The domain is rescaled to the unit hypercube and the function values are standardized before fitting the GP. We use an isotropic Matérn kernel, as ARD seems to deteriorate performance, especially for the rover trajectory planning problem.
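To make the modeling choices concrete, here is a small dense-GP sketch with an isotropic Matérn-5/2 kernel, a constant (empirical) mean, and standardized targets. The lengthscale and noise values are illustrative; the paper instead fits hyperparameters by maximizing the log-marginal likelihood and relies on GPyTorch's CG-based solvers to scale beyond this dense formulation.

```python
import numpy as np

def matern52(X1, X2, lengthscale=0.3, outputscale=1.0):
    """Isotropic Matern-5/2 kernel (the covariance used for the surrogate)."""
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    s = np.sqrt(5.0) * d / lengthscale
    return outputscale * (1.0 + s + s ** 2 / 3.0) * np.exp(-s)

def gp_posterior_mean(X, y, Xq, lengthscale=0.3, noise=1e-6):
    """Exact GP posterior mean with an empirical constant mean and
    standardized targets -- a dense O(n^3) sketch, not GPyTorch's solver."""
    mu, sigma = y.mean(), y.std() + 1e-12
    z = (y - mu) / sigma                       # standardize function values
    K = matern52(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = matern52(Xq, X, lengthscale)
    return mu + sigma * (Ks @ np.linalg.solve(K, z))

X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(6.0 * X[:, 0])
pred = gp_posterior_mean(X, y, X)   # near-interpolation at the training points
```

With negligible noise the posterior mean nearly interpolates the training data, which is the regime the dense solve illustrates.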
Appendix D TuRBO details
In all experiments, TuRBO-1 uses a fixed set of hyperparameters: the success tolerance, the failure tolerance, and the initial, minimum, and maximum side lengths of the trust region, with the failure tolerance chosen as a function of the number of dimensions d and the batch size q. This is similar to the settings Regis and Shoemaker [2007] use for their deterministic local optimization method. Note that this assumes the domain has been scaled to the unit hypercube. Following Regis and Shoemaker [2007], we consider it a success when at least one evaluation in the batch improves on the current best solution; in this case, we increment the success counter and reset the failure counter to zero. If no point in the batch improves the current best solution, we set the success counter to zero and increment the failure counter.
When using TuRBO with more than one TR, we use the same tolerances as in the sequential case, since the number of evaluations allocated to each TR may differ in each batch. We maintain separate success and failure counters for each TR. We consider a batch a success for a given TR if at least one of the points selected from that TR is better than the best solution in that TR; its counters are then updated just as for TuRBO-1. If all evaluations from a TR are worse than its current best solution, we consider this a failure: we set the success counter to zero and increment the failure counter. The failure counter is capped at the failure tolerance if we increment past it, which triggers a halving of that TR's side length.
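The counter logic above can be sketched as follows; the tolerance and side-length values here are illustrative placeholders rather than the paper's exact settings.

```python
from dataclasses import dataclass

@dataclass
class TrustRegion:
    """Success/failure bookkeeping for one TR. All numeric defaults are
    illustrative placeholders, not the paper's settings."""
    length: float = 0.8          # current side length
    succ_tol: int = 3            # successes before expanding
    fail_tol: int = 5            # failures before shrinking
    length_min: float = 2.0 ** -7
    length_max: float = 1.6
    n_succ: int = 0
    n_fail: int = 0

    def update(self, batch_improved: bool) -> None:
        if batch_improved:
            # At least one point in the batch improved the TR's incumbent.
            self.n_succ, self.n_fail = self.n_succ + 1, 0
        else:
            self.n_succ, self.n_fail = 0, self.n_fail + 1
        if self.n_succ >= self.succ_tol:      # expand on repeated successes
            self.length = min(2.0 * self.length, self.length_max)
            self.n_succ = 0
        elif self.n_fail >= self.fail_tol:    # shrink on repeated failures
            self.length /= 2.0
            self.n_fail = 0

    @property
    def terminated(self) -> bool:
        # A TR is discarded once its side length falls below the minimum.
        return self.length < self.length_min
```

A success resets the failure counter and vice versa, so only an unbroken run of successes (or failures) triggers a resize.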
For each TR, we initialize the side length to its initial value and terminate the TR once the side length falls below its minimum. Each TR in TuRBO uses a candidate set on which we generate each Thompson sample. A new candidate set is generated for each batch by adding perturbations drawn from a truncated normal distribution to the center of the TR. In particular, the perturbation added to each dimension of each candidate is drawn from a zero-mean normal distribution truncated so that all candidate points lie within the TR. Candidate points that end up outside the domain are projected onto the boundary. We experimented with using a spherical TR and generating candidates uniformly within the sphere, but the truncated normal distribution worked slightly better in our experiments.
Appendix E Thompson sampling
In this section, we provide details and pseudocode that make the background on Thompson sampling (TS) with GPs precise. Conceptually, TS [Thompson, 1933] for BO works by drawing a function from the surrogate model (GP) posterior. It then makes a suggestion by reporting the optimum of the sampled function. This process is repeated independently for each of multiple suggestions. The exploration-exploitation trade-off is naturally handled by the stochasticity in sampling.
Furthermore, parallel batching is naturally handled by the marginalization coherence of GPs. Many acquisition functions handle batching by imputing function evaluations for the other suggested (but unobserved) points via sampling from the posterior. Independent TS for parallel batches is exactly equivalent to conditioning on imputed values for unobserved suggestions. This means TS also trivially handles asynchronous batch sampling [Hernández-Lobato et al., 2017, Kandasamy et al., 2018].

Note that we cannot sample an entire function from the GP posterior in practice. We therefore work in a discretized setting by first drawing a finite candidate set; this puts us in the same setting as the traditional multi-armed bandit literature. To do so, we sample the GP marginal on the candidate set and then apply regular Thompson sampling.
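Discretized TS over a finite candidate set, as described above, can be sketched as follows. The procedure is framed as minimization, and the nearly deterministic posterior in the usage example is only for illustration.

```python
import numpy as np

def thompson_sample_batch(mean, cov, q, seed=0):
    """Discretized Thompson sampling: draw q independent samples of the GP
    marginal on a finite candidate set and report each sample's argmin.
    Returns one candidate index per batch slot."""
    rng = np.random.default_rng(seed)
    # Each row is one function draw from the GP marginal on the candidates.
    draws = rng.multivariate_normal(mean, cov, size=q)
    return np.argmin(draws, axis=1)

# Usage: three candidates with a nearly deterministic posterior, so every
# batch slot should select the candidate with the lowest posterior mean.
mean = np.array([0.0, -2.0, 1.0])
cov = 1e-6 * np.eye(3)
idx = thompson_sample_batch(mean, cov, q=4)
```

Drawing the q samples independently is exactly the batch scheme described above: each sample implicitly conditions on imputed values for the other suggestions.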
Appendix F Test problems
In this section we provide some brief additional details for the test problems. We refer the reader to the original papers for more details.
F.1 Robot pushing
The robot pushing problem was first considered in Wang et al. [2018]. The goal is to tune a controller for two robot hands to push two objects to given target locations. The robot controller parameters specify the location and rotation of the hands, the pushing speed, the moving direction, and the pushing time. The reward measures how much closer the objects end up to their goal locations, computed from the initial positions, the final positions, and the goal locations of the objects.
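Based on the verbal description, the reward can be sketched as the total reduction in distance to the goals; this is a hedged reconstruction, and Wang et al. [2018] give the exact definition.

```python
import numpy as np

def push_reward(x0, xf, xg):
    """Sketch of the pushing reward: for each object, the reduction in
    distance to its goal, summed over objects. x0/xf/xg are the initial,
    final, and goal positions (one row per object). This is a reconstruction
    from the verbal description, not the paper's exact formula."""
    x0, xf, xg = (np.asarray(a, dtype=float) for a in (x0, xf, xg))
    return float(np.sum(np.linalg.norm(xg - x0, axis=1)
                        - np.linalg.norm(xg - xf, axis=1)))

# Pushing both objects all the way to their goals yields the maximum reward.
r = push_reward(x0=[[0, 0], [1, 0]], xf=[[3, 0], [4, 0]], xg=[[3, 0], [4, 0]])
```

Under this form, leaving an object where it started contributes zero, and pushing it past the goal is penalized, which matches the stated intent of the task.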
F.2 Rover trajectory planning
This problem was also considered in Wang et al. [2018]. The goal is to optimize the trajectory of a rover over rough terrain, where the trajectory is determined by fitting a B-spline to 30 points in a 2D plane. The reward function imposes a penalty for any collision with an object along the trajectory and additionally penalizes trajectories whose start and end positions are far from the desired start and end locations.
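The trajectory parameterization can be sketched with SciPy's B-spline routines; the waypoint values and interpolation settings below are illustrative.

```python
import numpy as np
from scipy import interpolate

def fit_trajectory(waypoints, n_eval=100):
    """Fit a B-spline through 2D waypoints and evaluate it densely, mirroring
    the rover task's parameterization of a trajectory by 30 control points.
    s=0 makes the spline interpolate the waypoints exactly (an illustrative
    choice; the task's exact fitting settings are not shown here)."""
    pts = np.asarray(waypoints, dtype=float)
    tck, _ = interpolate.splprep([pts[:, 0], pts[:, 1]], s=0)
    u = np.linspace(0.0, 1.0, n_eval)
    x, y = interpolate.splev(u, tck)
    return np.column_stack([x, y])

# 30 waypoints along a gentle curve, densely evaluated for collision checks.
t = np.linspace(0.0, 1.0, 30)
waypoints = np.column_stack([t, np.sin(2.0 * np.pi * t)])
traj = fit_trajectory(waypoints)
```

The dense evaluation is what a cost function would scan for collisions, while the first and last evaluated points are the ones compared against the desired start and end locations.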
F.3 Cosmological constant learning
The cosmological constant experiment uses luminous red galaxy data from the Sloan Digital Sky Survey Tegmark et al. [2006]. The objective function is a likelihood estimate from a simulation-based astrophysics model of the observed data. The parameters include various physical constants, such as Hubble's constant and the densities of baryonic and other forms of matter. We use the nine parameters tuned in previous papers, plus three additional parameters chosen from the many available to the simulator.
F.4 Lunar lander reinforcement learning
The lunar lander problem is taken from the OpenAI Gym (gym.openai.com/envs/LunarLander-v2/). The objective is to learn a controller for a lunar lander that minimizes fuel consumption and distance to a landing target, while also preventing crashes. At any time, the state of the lunar lander is given by its angle and position together with their respective time derivatives. This 8-dimensional state vector is passed to a handcrafted parameterized controller that determines which of four actions to take, each corresponding to firing one of the lander's booster engines or doing nothing. The handcrafted control policy has parameters that define linear score functions of the state vector as well as the thresholds that determine which action to prioritize. The objective is the average final reward over a fixed set of 50 randomly generated terrains, initial positions, and initial velocities. Simulation runs were capped at a fixed number of time steps, after which failure to land was scored as a crash.
Appendix G Runtimes
In Table 1, we provide the algorithmic runtime for the numerical experiments. This is the total runtime for one optimization run, excluding the time spent evaluating the objective function. We see that the local optimizers and the evolutionary methods run with little to no overhead on all problems. The BO methods with a global GP model become very expensive as the number of evaluations increases, so we leverage scalable GPs on an NVIDIA RTX 2080 Ti. TuRBO not only outperforms the other BO methods, but it also runs in minutes on all test problems and is substantially faster than the slowest BO method.
Table 1: Total algorithmic runtime for one optimization run, excluding objective evaluations, for TuRBO, EBO, GP-TS, BOCK, BOHAMIANN, NM, CMA-ES, BOBYQA, BFGS, and RS, together with the number of evaluations and dimensions of each problem. Columns cover the synthetic problems, lunar landing, cosmological constant, robot pushing, rover trajectory, and Ackley-200 benchmarks.