pySOT and POAP: An event-driven asynchronous framework for surrogate optimization

by David Eriksson, et al.

This paper describes Plumbing for Optimization with Asynchronous Parallelism (POAP) and the Python Surrogate Optimization Toolbox (pySOT). POAP is an event-driven framework for building and combining asynchronous optimization strategies, designed for global optimization of expensive functions where concurrent function evaluations are useful. POAP consists of three components: a worker pool capable of function evaluations, strategies to propose evaluations or other actions, and a controller that mediates the interaction between the workers and strategies. pySOT is a collection of synchronous and asynchronous surrogate optimization strategies, implemented in the POAP framework. We support the stochastic RBF method by Regis and Shoemaker along with various extensions of this method, and a general surrogate optimization strategy that covers most Bayesian optimization methods. We have implemented many different surrogate models, experimental designs, acquisition functions, and a large set of test problems. We make an extensive comparison between synchronous and asynchronous parallelism and find that the advantage of asynchronous computation increases as the variance of the evaluation time or number of processors increases. We observe a close to linear speed-up with 4, 8, and 16 processors in both the synchronous and asynchronous setting.







1 Introduction

1.1 Problem Statement

We consider the global optimization problem

$$\min_{x \in \Omega} f(x), \tag{1}$$

where $f : \Omega \to \mathbb{R}$ is a computationally expensive black-box function. We assume in addition that $\Omega \subset \mathbb{R}^d$ is a compact hypercube and that $f$ is continuous over the continuous variables. In our setting, $f$ is non-linear, has multiple local minima, and the gradient of $f$ is not available. Computationally expensive refers to any problem where a single function evaluation takes anywhere from a few minutes to many hours. Common examples include running an expensive simulation model of a complex physical process and tuning machine learning models Snoek et al. (2012). It is common to have limited time and evaluation budgets due to the significant amount of time necessary for each function evaluation, making it challenging to find a good solution to (1) when $f$ is multimodal.

1.2 Survey of Methods

Many popular algorithms for black-box optimization are not suitable when the function evaluations are computationally expensive. Derivative-based methods such as Newton's method or BFGS Avriel (2003) are appealing when gradient information can be obtained cheaply, in which case it is possible to run a local optimizer with a multi-start strategy. Finite differences can be used when gradient information is unavailable, but this is very computationally expensive since $f$ is expensive, and imprecision in the simulation model often leads to inaccurate gradient estimates. Several popular derivative-free optimization (DFO) methods exist for local optimization, such as pattern search Hooke and Jeeves (1961), Nelder-Mead Nelder and Mead (1965), and ORBIT Wild and Shoemaker (2011), but these methods are not good choices for multimodal problems. Global heuristic optimization methods such as genetic algorithms Goldberg (2006), particle swarm optimization Kennedy (2010), and differential evolution Storn and Price (1997) generally require a large number of function evaluations and are not practical for computationally expensive objective functions.

A successful family of optimization algorithms for computationally expensive optimization are methods based on surrogate models. The surrogate model approximates the objective function and helps accelerate convergence to a good solution. Popular choices are methods based on radial basis functions (RBFs), such as Regis and Shoemaker (2007, 2013); Gutmann (2001), and Kriging and Gaussian process (GP) based methods, such as Jones (2001); Jones et al. (1998); Frazier et al. (2008). Other possible surrogate models are polynomial regression models and multivariate adaptive regression splines Friedman (1991); Müller and Shoemaker (2014). Most surrogate optimization algorithms start by evaluating an experimental design that is used to fit the initial surrogate model. What follows is an adaptive phase where an auxiliary problem is solved to pick the next sample point(s), and this phase continues until either a restart or a stopping criterion has been met. We can avoid getting trapped in a local minimum by using an auxiliary problem that provides a good balance of exploration and exploitation.

Several parallel algorithms have been developed for computationally expensive black-box optimization. Regis and Shoemaker Regis and Shoemaker (2009) developed a synchronous parallel surrogate optimization algorithm based on RBFs, and this idea was later extended to the SOP algorithm for large numbers of processors Krityakierne et al. (2016). Both algorithms assume that (i) the resources are homogeneous and (ii) the evaluation time is constant. The first assumption does not hold for heterogeneous parallel computing platforms, and the second is unlikely to hold when the cost of evaluating the objective depends spatially on the input. The first assumption can almost always be assessed before the start of the optimization run, while the second may not be easy to assess in practice.

Another limitation of the work in Regis and Shoemaker (2009) is that the algorithm does not handle the possibility of worker failures and crashed evaluations. Being able to handle failures is critical in order to run the algorithm on large-scale systems. The natural way of dealing with cases where (i) or (ii) are violated is to launch function evaluations asynchronously to eliminate idle time, as illustrated in Figure 1.

Figure 1: Synchronous vs asynchronous parallel.

1.3 Survey of Software

A library with similar functionality to POAP is SCOOP Hold-Geoffroy et al. (2014), a Python-based library for distributing concurrent tasks while internally handling the communication. POAP provides similar functionality for global optimization problems and also handles all of the communication internally, which makes it easy to implement asynchronous optimization algorithms.

HOPSPACK (Hybrid Optimization Parallel Search PACKage) Plantenga (2009) is a C++ framework for derivative-free optimization problems. HOPSPACK supports parallelism through MPI or multi-threading and supports running multiple optimization solvers simultaneously, a functionality similar to combining strategies in POAP. The framework implements an asynchronous pattern search solver and supports non-linear constraints and mixed-integer variables, but there is no support for surrogate optimization.

MATSuMoTo (Matlab Surrogate Model Toolbox) Mueller (2014) is an example of a surrogate global optimization toolbox. MATSuMoTo is written in Matlab and has support for computationally expensive, black-box global optimization problems that may have continuous, mixed-integer, or pure integer variables. MATSuMoTo offers a variety of choices for surrogate models and surrogate model mixtures, experimental designs, and auxiliary functions. The framework is not designed to support a large class of surrogate optimization algorithms, and the lack of object orientation makes it hard to extend. Parallelism is only supported through Matlab's Parallel Computing Toolbox, and there is no support for asynchrony, combining strategies, or dynamically changing the number of workers. Furthermore, many large-scale systems do not support Matlab. Note that as of version R2018b, the Matlab Global Optimization Toolbox offers surrogateopt, an asynchronous surrogate optimization method based on Regis and Shoemaker (2007).

Nonlinear Optimization by Mesh Adaptive Direct Search (NOMAD) Le Digabel (2011) is a library intended for time-consuming black-box simulation with a small number of variables. The library implements mesh adaptive direct search (MADS) and there is support for asynchronous function evaluations using MPI. The framework is fault resilient in the sense that it supports objective function failing to return a valid output. Similar fault resilience is provided by POAP, which allows the user to decide what action to take in case of a failure.

Dakota Eldred et al. (2007) is an extensive toolkit with algorithms for optimization with and without gradient information, uncertainty quantification, nonlinear least squares methods, and sensitivity/variance analysis. These components can be used on their own or with strategies such as surrogate-based optimization, mixed-integer nonlinear programming, or optimization under uncertainty. The Dakota toolkit is object-oriented and written in C++ with the intention of being a flexible and extensible interface between simulation codes, and there is support for parallel function evaluations. Dakota includes C++ code for global optimization with a GP-based surrogate (e.g., an implementation of the GP-EI/EGO method Jones et al. (1998) and the EGRA method Bichon et al. (2008)). Dakota does not have global optimization codes designed to be used with RBF surrogates, although it is possible to construct an RBF surrogate in Dakota.

BayesOpt Martinez-Cantin (2014) is a library with Bayesian optimization methods to solve nonlinear optimization problems. Bayesian optimization methods build a posterior distribution to capture the evidence and prior knowledge of the target function. Built in C++, the library is efficient, portable, and flexible. There is support for commonly used methods such as sequential Kriging optimization (SKO), sequential model-based optimization (SMBO), and efficient global optimization (EGO). The software is sequential and there is no support for parallel function evaluations.

RBFOpt Costa and Nannicini (2014) is a radial basis function based library that implements and extends the global optimization RBF algorithm proposed by Gutmann Gutmann (2001). RBFOpt is written in Python and supports asynchronous parallelism through Python’s multiprocessing library, but there is no support for MPI. The software is not designed to cover a large class of surrogate optimization methods and there is no support for dynamically changing the number of workers and combining different optimization strategies.

Cornell-MOE is a Python library that implements Bayesian optimization with the expected improvement and knowledge gradient acquisition functions. The software is built on work that extends these acquisition functions to batch synchronous parallel, both with and without gradient information Wu and Frazier (2016); Wu et al. (2017). There is no support for asynchronous parallelism and it is not possible to dynamically change the number of workers.

1.4 Contribution

The POAP and pySOT software have become very popular: pySOT has been downloaded more than 88,000 times and POAP more than 126,000 times. The main contribution of POAP is an event-driven framework for building and combining asynchronous optimization strategies. The user can implement their own strategies that specify what actions to take when different events occur, while all communication and dispatching of work is handled by the framework. POAP is designed to be both flexible and easily extensible, and the framework makes it easy to dynamically change the number of workers and combine different optimization strategies. POAP is fault resilient and handles function evaluation and worker crashes.

pySOT serves as a test-suite for head-to-head comparisons of different experimental designs, surrogate models, and acquisition functions. Being built on top of POAP, pySOT inherits the benefits of the POAP framework, leading to a robust and flexible implementation without having to worry about communication and dispatching of work. The object-oriented design makes pySOT easy to extend, and users can experiment with different surrogate models, experimental designs, and auxiliary problems, making comparisons in either a synchronous or an asynchronous setting. In addition, pySOT supports checkpointing, which allows users to resume a crashed optimization run.

We provide an extensive comparison of synchrony and asynchrony in cases where the objective function evaluation time varies and conclude that, for multimodal problems, reducing idle time matters more than the additional information available from waiting for a full batch. We conclude that asynchrony should be preferred over synchrony in this case. The performance difference between asynchrony and synchrony increases with the variance of the evaluation time and with the number of processors, since both increase the idle time under synchrony. Our numerical experiments also indicate that parallelism improves exploration and that the parallel algorithms often outperform the serial version with respect to the number of function evaluations.

1.5 Overview

We review the general surrogate optimization algorithm and the most common surrogate models, experimental designs, and auxiliary problems in §2. We describe in detail our asynchronous surrogate optimization algorithm in §3. The implementation details of POAP and pySOT are described in §4 and §5 respectively. We illustrate a code example in §6 that shows how to use pySOT and POAP. We provide an extensive comparison between asynchrony and synchrony in §7 and conclude in §8.

2 Surrogate optimization

Most surrogate optimization methods follow the same main steps, whether running in serial or in synchronous parallel. The first step consists of generating an experimental design that is evaluated to fit an initial surrogate model. Once an initial surrogate model has been built, we proceed to optimize an acquisition function to find new point(s) to evaluate. We often refer to optimizing this acquisition function as solving an auxiliary problem. We evaluate the new point(s), update the surrogate model, and repeat this procedure until a stopping criterion has been met. This is illustrated in Algorithm 1.

1:Generate an experimental design
2:Evaluate at the points in the experimental design
3:Build a surrogate model of from the data
4:while Stopping criterion not met do
5:     Solve an auxiliary problem to select the next point(s) to evaluate
6:     Evaluate the new point(s)
7:     Update the surrogate model
8:end while
Algorithm 1 Synchronous surrogate optimization algorithm

The overhead of fitting the surrogate model and optimizing the acquisition function should be compared to the evaluation time of the objective function, as we want to spend most of the computational time on function evaluations. We proceed with a brief background to the most popular experimental designs, surrogate models, and auxiliary problems.
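The loop of Algorithm 1 can be sketched in a few lines of Python. The surrogate and the auxiliary problem below are deliberately crude stand-ins (a 1-nearest-neighbor predictor and random candidate search), and all names are ours rather than pySOT's; the point is only the shape of the loop.

```python
import random

def sync_surrogate_optimize(f, bounds, n_init=8, max_evals=40, n_cand=200, seed=0):
    """Minimal sketch of Algorithm 1: design, surrogate, auxiliary problem, repeat.

    The 'surrogate' is a 1-nearest-neighbor predictor and the auxiliary
    problem is random search over candidate points -- toy stand-ins for the
    RBF/GP models and merit functions described in the text.
    """
    rng = random.Random(seed)

    def sample():
        return tuple(rng.uniform(lo, hi) for lo, hi in bounds)

    # Steps 1-3: evaluate an experimental design and "build" the surrogate.
    X = [sample() for _ in range(n_init)]
    y = [f(x) for x in X]

    def surrogate(x):  # 1-NN prediction from the data gathered so far
        return min(zip(X, y),
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

    # Steps 4-8: solve an auxiliary problem, evaluate, update, repeat.
    while len(X) < max_evals:
        cand = [sample() for _ in range(n_cand)]
        x_next = min(cand, key=surrogate)  # auxiliary problem on the surrogate
        X.append(x_next)
        y.append(f(x_next))
    best = min(range(len(y)), key=y.__getitem__)
    return X[best], y[best]
```

A real implementation replaces the two stand-ins with the surrogate models of §2.2 and the auxiliary problems of §2.3.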

2.1 Experimental design

The simplest experimental design is choosing the $2^d$ corners of the hypercube $\Omega$, often referred to as the 2-factorial design, but this is infeasible when $d$ is large and the function is expensive. Two common alternatives are the Latin hypercube design (LHD) and the symmetric Latin hypercube design (SLHD), which allow an arbitrary number of design points. We deal with integer variables by rounding the generated design and generate a new experimental design if the resulting design is rank-deficient or if any two points coincide. This works well in practice, which is also reported in Costa and Nannicini (2014) and Müller et al. (2013).
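The round-and-retry treatment of integer variables can be sketched as follows, using stratum midpoints for the Latin hypercube; the function name, the midpoint choice, and the retry limit are our own illustrative assumptions.

```python
import random

def latin_hypercube(n_pts, bounds, int_dims=(), seed=0, max_tries=100):
    """Latin hypercube design with midpoint strata. Integer variables are
    rounded, and the design is re-drawn if rounding makes two points collide."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        cols = []
        for d, (lo, hi) in enumerate(bounds):
            perm = list(range(n_pts))
            rng.shuffle(perm)
            # center of stratum i, mapped to [lo, hi]
            col = [lo + (hi - lo) * (i + 0.5) / n_pts for i in perm]
            if d in int_dims:
                col = [round(v) for v in col]
            cols.append(col)
        pts = list(zip(*cols))
        if len(set(pts)) == n_pts:  # reject designs with coinciding points
            return [list(p) for p in pts]
    raise RuntimeError("could not generate a non-degenerate design")
```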

2.2 Surrogate models

The surrogate model is used to approximate the objective function. The surrogate model of choice in pySOT is radial basis functions (RBFs), but we also support Gaussian processes (GPs), support vector regression (SVR), multivariate adaptive regression splines (MARS), and polynomial regression.

2.2.1 Radial basis functions

RBF interpolation is one of the most popular approaches for approximating scattered data in a general number of dimensions Buhmann (2003); Fasshauer (2007); Schaback and Wendland (2006); Wendland (2004). Given a set of pairwise distinct interpolation points $\{x_1, \ldots, x_n\} \subset \mathbb{R}^d$, the RBF model takes the form

$$s(x) = \sum_{i=1}^{n} \lambda_i \varphi(\|x - x_i\|) + p(x),$$

where the kernel $\varphi : \mathbb{R}_{\geq 0} \to \mathbb{R}$ is a one-dimensional function and $p \in \Pi_{m-1}^d$, the space of polynomials with $d$ variables of degree no more than $m-1$. The name RBF comes from the fact that the function $x \mapsto \varphi(\|x - x_i\|)$ is constant on spheres in $\mathbb{R}^d$. Common choices of kernels in surrogate optimization are the linear kernel $\varphi(r) = r$, the cubic kernel $\varphi(r) = r^3$, and the thin-plate spline $\varphi(r) = r^2 \log r$. The coefficients are determined by imposing the interpolation conditions $s(x_i) = f(x_i)$ for $i = 1, \ldots, n$ and the discrete orthogonality condition

$$\sum_{i=1}^{n} \lambda_i q(x_i) = 0 \quad \text{for all } q \in \Pi_{m-1}^d.$$

If we let $\{\pi_1, \ldots, \pi_{\hat{m}}\}$ be a basis for the $\hat{m}$-dimensional linear space $\Pi_{m-1}^d$, so we can write $p(x) = \sum_{j=1}^{\hat{m}} c_j \pi_j(x)$, the interpolation conditions lead to the following linear system of equations

$$\begin{bmatrix} \Phi & P \\ P^T & 0 \end{bmatrix} \begin{bmatrix} \lambda \\ c \end{bmatrix} = \begin{bmatrix} f_X \\ 0 \end{bmatrix},$$

where $\Phi_{ij} = \varphi(\|x_i - x_j\|)$, $P_{ij} = \pi_j(x_i)$, and $f_X = [f(x_1), \ldots, f(x_n)]^T$. The solution to the linear system of equations is unique as long as $\operatorname{rank}(P) = \hat{m}$ and $m$ is at least the order of the kernel $\varphi$. The cubic and thin-plate spline kernels are both of order $m = 2$, so a polynomial tail of at least degree 1 is necessary, which is what we use.

A direct solver of the RBF system requires computing a dense LU factorization at a cost of $\mathcal{O}(n^3)$ flops. We can utilize the fact that we are adding a few points at a time, which allows incremental updates of an initial factorization in quadratic time. We first evaluate $n_0$ points such that $\operatorname{rank}(P) = \hat{m}$, which makes it possible to compute an initial LU factorization with pivoting

$$A := \begin{bmatrix} \Phi & P \\ P^T & 0 \end{bmatrix} = LU,$$

where we have reordered the blocks to make it more natural to add new points to the system. After adding the new points we want to find an LU factorization of the extended system

$$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix},$$

where $B$ holds the kernel and polynomial terms that couple the new points to the old system and $C_{ij} = \varphi(\|x_i - x_j\|)$ for the new points. The fact that the trailing Schur complement $S = C - B^T A^{-1} B$ is positive semi-definite allows us to look for a factorization of the form

$$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} = \begin{bmatrix} L & 0 \\ Z^T & L_S \end{bmatrix} \begin{bmatrix} U & Y \\ 0 & L_S^T \end{bmatrix}.$$

We need to solve the two triangular systems $LY = B$ and $U^T Z = B$, followed by computing a Cholesky factorization $S = C - Z^T Y = L_S L_S^T$. This allows us to update the factorization in $\mathcal{O}(n^2)$ flops, which is better than computing a new LU factorization in $\mathcal{O}(n^3)$ flops.

In practice, we add regularization to the system by adding a small multiple $\eta I$ of the identity to $\Phi$ (equivalently, using the kernel $\varphi(r) + \eta\,\delta_{r=0}$), for some regularization parameter $\eta > 0$, which ensures that the trailing Schur complement is positive definite and that the system is well-conditioned.
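To make the construction concrete, the following self-contained sketch assembles and solves the saddle-point system for a cubic kernel with a linear tail. It uses dense Gaussian elimination rather than the incremental factorization updates described above, so it is didactic rather than efficient; all names are ours.

```python
import math

def rbf_fit(X, f_vals):
    """Fit s(x) = sum_i lam_i * ||x - x_i||^3 + c_0 + c . x (cubic RBF, linear tail)
    by solving [[Phi, P], [P^T, 0]] [lam; c] = [f; 0] with Gaussian elimination."""
    n, d = len(X), len(X[0])
    m = d + 1                        # dimension of the linear tail
    size = n + m
    A = [[0.0] * size for _ in range(size)]
    b = [0.0] * size
    for i in range(n):
        for j in range(n):
            A[i][j] = math.dist(X[i], X[j]) ** 3      # Phi
        A[i][n] = A[n][i] = 1.0                       # constant term of P / P^T
        for k in range(d):
            A[i][n + 1 + k] = A[n + 1 + k][i] = X[i][k]
        b[i] = f_vals[i]
    # Gaussian elimination with partial pivoting.
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, size):
            fac = A[r][col] / A[col][col]
            for c_ in range(col, size):
                A[r][c_] -= fac * A[col][c_]
            b[r] -= fac * b[col]
    sol = [0.0] * size
    for r in range(size - 1, -1, -1):
        sol[r] = (b[r] - sum(A[r][c_] * sol[c_] for c_ in range(r + 1, size))) / A[r][r]
    lam, c = sol[:n], sol[n:]

    def s(x):
        tail = c[0] + sum(c[1 + k] * x[k] for k in range(d))
        return tail + sum(lam[i] * math.dist(X[i], x) ** 3 for i in range(n))
    return s
```

The interpolation conditions $s(x_i) = f(x_i)$ hold by construction, which is an easy sanity check for any implementation.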

2.2.2 Gaussian processes

A Gaussian process (GP) is a stochastic process where any finite number of random variables has a joint Gaussian distribution; see, e.g., Rasmussen and Williams (2006). This defines a distribution over functions $f \sim \mathcal{GP}(\mu(x), k(x, x'))$, where $\mu$ is the mean function and $k$ is the covariance kernel. The GP model allows predicting the value and variance at any point, so it gives us an idea about the uncertainty of the prediction. The most popular kernel is the squared exponential kernel $k(x, x') = \sigma_f^2 \exp(-\|x - x'\|^2 / (2\ell^2))$, and other possibilities include the Matérn kernels. For any points $x_1, \ldots, x_n \in \Omega$, let $f_X = [f(x_1), \ldots, f(x_n)]^T$ denote the vector of values of $f$ evaluated at each of the $x_i$, and let $(K_X)_{ij} = k(x_i, x_j)$. We assume we observe function values $y_i = f(x_i) + \epsilon_i$, where each entry is contaminated by independent Gaussian noise with constant variance $\sigma^2$. Under a Gaussian process prior depending on the hyper-parameters $\theta$, the log marginal likelihood is given by

$$\log p(y \mid X, \theta) = -\frac{1}{2} y^T (K_X + \sigma^2 I)^{-1} y - \frac{1}{2} \log \det(K_X + \sigma^2 I) - \frac{n}{2} \log 2\pi, \tag{5}$$

where $\sigma^2 = 0$ for a deterministic $f$. Optimization of (5) is expensive, since direct computation involves a Cholesky factorization of $K_X + \sigma^2 I$ for each new value of $\theta$. The $\mathcal{O}(n^3)$ iteration cost quickly becomes significantly more expensive than using an RBF interpolant, even though both methods are based on kernel interpolation, and the dependency on the hyper-parameters stops us from updating a factorization when adding new points. Promising work on scalable approximate Gaussian process regression can decrease the kernel learning cost to close to $\mathcal{O}(n)$ Wilson and Nickisch (2015); Dong et al. (2017), but these ideas only work in low-dimensional spaces. The computational cost of fitting the surrogate model should be compared to the cost of function evaluations, as we want to spend most of the computational effort on function evaluations.
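The log marginal likelihood computation can be sketched as follows, assuming a zero mean function and a squared exponential kernel; the hyper-parameter names (`ell`, `sf2`, `noise`) are ours. The Cholesky factorization is the $\mathcal{O}(n^3)$ step discussed above.

```python
import math

def gp_log_marginal_likelihood(X, y, ell=1.0, sf2=1.0, noise=1e-2):
    """Zero-mean GP log marginal likelihood with a squared exponential kernel,
    computed via a Cholesky factorization of K + noise*I."""
    n = len(X)
    K = [[sf2 * math.exp(-sum((a - b) ** 2 for a, b in zip(X[i], X[j])) / (2 * ell ** 2))
          + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Cholesky factorization K = L L^T.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = K[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    # Solve L a = y, then L^T alpha = a, so alpha = (K + noise*I)^{-1} y.
    a = [0.0] * n
    for i in range(n):
        a[i] = (y[i] - sum(L[i][k] * a[k] for k in range(i))) / L[i][i]
    alpha = [0.0] * n
    for i in range(n - 1, -1, -1):
        alpha[i] = (a[i] - sum(L[k][i] * alpha[k] for k in range(i + 1, n))) / L[i][i]
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    quad = sum(y[i] * alpha[i] for i in range(n))
    return -0.5 * quad - 0.5 * logdet - 0.5 * n * math.log(2 * math.pi)
```

Maximizing this quantity over the hyper-parameters requires repeating the factorization for every candidate $\theta$, which is the cost difference relative to RBFs noted above.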

2.2.3 Other choices

RBFs and GPs are by far the two most popular surrogate models in computationally expensive optimization. We briefly mention some other surrogate models available in pySOT, even though they are not as frequently used. Multivariate adaptive regression splines (MARS) Friedman (1991) are also weighted sums of basis functions, where each basis function is either constant and equal to 1, a hinge function of the form $\max(0, x_i - c)$ or $\max(0, c - x_i)$ for some constant $c$, or a product of hinge functions. It is also possible to use polynomial regression or support vector regression (SVR). Multiple surrogate models can be combined into an ensemble surrogate, and Dempster-Shafer theory can be used to decide how to weigh the different models Müller and Piché (2011). This is useful in situations where it is hard to know which surrogate model to choose for a specific problem. Müller and Shoemaker (2014) indicated that polynomial regression surrogates did not perform well by themselves on test problems, but they were sometimes helpful in combination with RBF surrogates.

2.3 Auxiliary problem

Evaluation of $f$ is expensive, so we optimize an acquisition function involving the surrogate model and previously evaluated points to find the next point(s) to evaluate. We refer to the optimization of the acquisition function as an auxiliary problem. This auxiliary problem must balance exploration and exploitation: exploration emphasizes evaluating points far from previous evaluations to improve the surrogate model and escape local minima, while exploitation aims to improve promising solutions to make sure progress is made. The subsections below describe the methods in pySOT for solving the auxiliary problem.

2.3.1 Candidate points

An acquisition function based on the weighted-distance merit function was introduced in Regis and Shoemaker (2007) to balance exploration and exploitation. The main idea is to generate a set of candidate points and use the merit function to pick the most promising ones. Exploration is achieved by giving preference to candidate points far from previous evaluations. More specifically, for each candidate point $x$ we let $\Delta(x)$ be the distance from $x$ to the closest point that is currently being or has been evaluated. By defining $\Delta^{\min} = \min_x \Delta(x)$ and $\Delta^{\max} = \max_x \Delta(x)$ over the candidate points, a good measure of exploration is a small value of $V^D(x) = (\Delta^{\max} - \Delta(x)) / (\Delta^{\max} - \Delta^{\min})$, where $0 \leq V^D(x) \leq 1$ for all candidate points $x$. Exploitation is achieved through the surrogate model $s(x)$, where a small value of the quantity $V^S(x) = (s(x) - s^{\min}) / (s^{\max} - s^{\min})$ provides a measure of exploitation, where $s^{\min} = \min_x s(x)$ and $s^{\max} = \max_x s(x)$.

The best candidate point is the minimizer of the acquisition function $M(x) = w\,V^S(x) + (1 - w)\,V^D(x)$, for a given weight $w \in [0, 1]$. This shows that $w$ serves as a balance between exploitation and exploration: a weight close to 0 emphasizes exploration while a weight close to 1 emphasizes exploitation. Algorithm 2 shows how to select the most promising candidate point.

1:Compute $\Delta^{\min}$ and $\Delta^{\max}$
2:for each candidate point $x$ do
3:     $V^D(x) \leftarrow (\Delta^{\max} - \Delta(x)) / (\Delta^{\max} - \Delta^{\min})$
4:end for
5:for each candidate point $x$ do
6:     Evaluate the surrogate model $s(x)$
7:end for
8:Compute $s^{\min}$ and $s^{\max}$
9:for each candidate point $x$ do
10:     $M(x) \leftarrow w\,V^S(x) + (1 - w)\,V^D(x)$
11:end for
12:return $\arg\min_x M(x)$
Algorithm 2 Candidate point selection
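The candidate-selection step translates almost directly into code. This sketch uses our own function names and Euclidean distance; degenerate cases where all candidates share the same distance or surrogate value are mapped to a score of 0.

```python
def select_candidate(cand, evaluated, surrogate, w):
    """Pick the candidate minimizing w*V_S + (1-w)*V_D (the merit function)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    delta = [min(dist(x, e) for e in evaluated) for x in cand]
    s_val = [surrogate(x) for x in cand]
    d_lo, d_hi = min(delta), max(delta)
    s_lo, s_hi = min(s_val), max(s_val)

    def unit(v, lo, hi):  # scale to [0, 1], guarding against lo == hi
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    merit = [w * unit(s_val[i], s_lo, s_hi)
             + (1 - w) * (1.0 - unit(delta[i], d_lo, d_hi))
             for i in range(len(cand))]
    return cand[min(range(len(cand)), key=merit.__getitem__)]
```

With `w = 1` the rule is pure exploitation (lowest surrogate value); with `w = 0` it is pure exploration (largest distance to evaluated points).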

The LMS-RBF method Regis and Shoemaker (2007) is useful for low-dimensional optimization problems. Given a sampling radius $\sigma$, the candidate points are generated as $\mathcal{N}(0, \sigma^2)$ perturbations along each coordinate direction from the best solution Regis and Shoemaker (2007). Large values of the sampling radius generate candidate points far away from the best solution, while smaller values generate candidate points close to the best solution. We defer a description of how the sampling radius is updated to the next section. If $\sigma$ is smaller than 1 for an integer variable, a perturbation of magnitude 1 is used to ensure that this variable is also perturbed Müller et al. (2013).

The DYCORS method Regis and Shoemaker (2013) was developed for high-dimensional problems; the idea is to perturb most coordinates at the start of the optimization run and fewer dimensions towards the end Regis and Shoemaker (2013). This is achieved by assigning a probability to perturb each dimension. If $n_0$ points are used in the experimental design and the evaluation budget is given by $N$, each coordinate is perturbed with probability $\varphi(n)$ for $n_0 \leq n < N$. The probability function used in pySOT is the one introduced in Regis and Shoemaker (2013), which is

$$\varphi(n) = \varphi_0 \left(1 - \frac{\log(n - n_0 + 1)}{\log(N - n_0)}\right).$$
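The perturbation probability is a one-line function. We assume here the common choice $\varphi_0 = \min(20/d, 1)$ for the initial probability; the function name is ours.

```python
import math

def dycors_prob(n, n0, N, d):
    """DYCORS-style perturbation probability for evaluation n, with n0 design
    points, budget N, and dimension d; assumes phi_0 = min(20/d, 1)."""
    phi0 = min(20.0 / d, 1.0)
    return phi0 * (1.0 - math.log(n - n0 + 1) / math.log(N - n0))
```

The probability equals $\varphi_0$ right after the design ($n = n_0$) and decays to 0 at the end of the budget, so late iterations perturb only a few coordinates.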

In pySOT it is also possible to choose candidate points uniformly from $\Omega$ and use the merit function to pick the most promising points, which contrasts with the previous two methods by not making local perturbations around the current best solution. This helps diversify the set of evaluated points, but the results in Regis and Shoemaker (2007) for this approach (GMSRBF) were not promising.

2.3.2 Acquisition functions in Bayesian optimization

Gaussian processes allow us to use acquisition functions that take the prediction variance into account. A popular choice is the probability of improvement, which takes the form

$$\mathrm{PI}(x) = \Phi\left(\frac{f^{\mathrm{best}} - \xi - \mu(x)}{\sigma(x)}\right),$$

where $\Phi$ is the standard normal CDF, $\mu(x)$ and $\sigma(x)$ are the posterior mean and standard deviation, and $\xi \geq 0$ is a trade-off parameter that balances exploration and exploitation. With $\xi = 0$, probability of improvement does pure exploitation. A common choice is to start with a large $\xi$ and lower it towards the end of the optimization run Brochu et al. (2010).

Expected improvement is likely the most widely used acquisition function in Bayesian optimization, where the main idea is choosing the point that gives us the largest expected improvement. Moc̆kus defined improvement as the function

$$I(x) = \max(0, f^{\mathrm{best}} - f(x)),$$

which can be evaluated analytically under a Gaussian process posterior, and Jones et al. (1998) shows that

$$\mathrm{EI}(x) = (f^{\mathrm{best}} - \mu(x))\,\Phi(z) + \sigma(x)\,\phi(z),$$

where $z = (f^{\mathrm{best}} - \mu(x))/\sigma(x)$ and $\phi$ is the standard normal PDF. We can in a similar fashion add a trade-off parameter $\xi$, in which case the expected improvement takes the form

$$\mathrm{EI}(x) = (f^{\mathrm{best}} - \mu(x) - \xi)\,\Phi(z) + \sigma(x)\,\phi(z),$$

where $z = (f^{\mathrm{best}} - \mu(x) - \xi)/\sigma(x)$. Another option that has been proposed is the lower confidence bound (LCB)

$$\mathrm{LCB}(x) = \mu(x) - \kappa\,\sigma(x),$$

where $\kappa > 0$ is left to the user.
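The closed-form expected improvement is straightforward to implement from the standard normal CDF and PDF; the $\sigma = 0$ limit (no posterior uncertainty) is handled separately. Function names are ours.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI for minimization: (f_best - mu - xi)*Phi(z) + sigma*phi(z)."""
    if sigma <= 0.0:
        return max(0.0, f_best - mu - xi)  # degenerate posterior
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm_cdf(z) + sigma * norm_pdf(z)
```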

2.3.3 Other choices

Selecting the point that minimizes the bumpiness of a radial basis function, a concept introduced by Gutmann (2001), is supported in other software packages such as RBFOpt Costa and Nannicini (2014). The knowledge gradient acquisition function introduced by Frazier et al. (2008) is implemented in Cornell-MOE.

3 The asynchronous algorithm

A surrogate optimization method in the flavor of Algorithm 1 is easy to implement, but may be inefficient if the evaluation time is not constant. This can happen because evaluating the simulation model requires larger computational efforts for some input values (e.g., evaluating a PDE-based objective function may require more time steps for some values of the decision variables). Computation time can also vary because of variation in the computational resources. Dealing with potential function evaluation crashes is less obvious in a synchronous framework, where we may either try to re-evaluate or exclude the points from the batch. Finally, dynamically changing the number of resources is much more straightforward in an asynchronous framework, which we describe next.

Just as in the synchronous parallel case, we start by evaluating an experimental design. These points can be evaluated asynchronously, but the fact that we want to evaluate all design points before proceeding to the adaptive phase introduces an undesirable barrier. This becomes an issue if there are straggling workers or if some points take a long time to evaluate. The most natural solution is to generate an experimental design that makes it possible to let workers proceed to the adaptive phase once all points are either completed or pending. To be more precise, assume that we have $w$ workers and that $n_0$ completed points are needed to construct a surrogate model. We can then generate an experimental design with $n_0 + w - 1$ points, which allows workers to proceed to the adaptive phase once there are no outstanding evaluations in the experimental design. The adaptive phase differs from Algorithm 1 in the sense that we propose a new evaluation as soon as a worker becomes available.

We use an event-driven framework where the master drives the event loop, updates the surrogate, solves the auxiliary problem, etc., and we have workers available to do function evaluations. The workload of the master is significantly less than that of the workers, so we can use the same number of workers as we have available resources. The asynchronous algorithm is illustrated in Algorithm 3.

1:Inputs: Initial design points, $w$ workers
2:Queue the initial design points
3:for each worker do ▷ Launch initial evaluations
4:     Pop point from queue and dispatch to worker
5:end for
6:while stopping criterion not met (e.g., evaluation budget reached) do ▷ Event loop
7:     if evaluation completed then
8:         Update $x^{\mathrm{best}}$ and $f^{\mathrm{best}}$
9:     else if evaluation failed and retry desired then
10:         Add point back to evaluation queue
11:     end if
12:     if worker ready then
13:         if evaluation queue empty then
14:              Build surrogate model
15:              Solve auxiliary problem and add point to the queue
16:         end if
17:         Pop point from queue and dispatch to worker
18:     end if
19:end while
20:return $x^{\mathrm{best}}$, $f^{\mathrm{best}}$
Algorithm 3 Asynchronous surrogate optimization algorithm
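A tiny discrete-event simulation, in the spirit of a simulated-time controller, illustrates why Algorithm 3 eliminates idle time: whenever a worker finishes, the master immediately dispatches a new point. The proposal rule and the evaluation-time distribution below are toy stand-ins of our own.

```python
import heapq
import random

def simulate_async(n_workers=4, budget=20, seed=0):
    """Simulated asynchronous event loop: a min-heap of completion times plays
    the role of the worker pool; no worker ever waits for a batch barrier."""
    rng = random.Random(seed)
    propose = lambda: rng.uniform(-5.0, 5.0)   # stand-in for the auxiliary problem
    f = lambda x: x * x                        # stand-in objective

    events = []                                # (completion_time, worker, point)
    launched = 0
    for w in range(n_workers):                 # launch initial evaluations
        heapq.heappush(events, (rng.uniform(1, 10), w, propose()))
        launched += 1

    results, now = [], 0.0
    while events:
        now, w, x = heapq.heappop(events)      # earliest completion event
        results.append((x, f(x)))
        if launched < budget:                  # dispatch immediately: no barrier
            heapq.heappush(events, (now + rng.uniform(1, 10), w, propose()))
            launched += 1
    return results, now                        # evaluations and makespan
```

A synchronous variant would instead wait for all `n_workers` outstanding points before proposing the next batch, so its makespan is driven by the slowest evaluation in every batch.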

3.1 Updating the sampling radius in Stochastic SRBF

We now elaborate on how to pick the value of the sampling radius $\sigma$ that is used to generate the candidate points in the LMS-RBF and DYCORS methods. We follow the idea in Regis and Shoemaker (2007), where counters $C_{\mathrm{success}}$ and $C_{\mathrm{fail}}$ are used to track the number of consecutive evaluations with and without significant improvement. If there are too many consecutive failures in serial, the algorithm restarts. This idea was extended to synchronous parallel in Regis and Shoemaker (2009) by processing a batch at a time. If $C_{\mathrm{success}}$ reaches a tolerance $\mathcal{T}_{\mathrm{success}}$, the sampling radius is doubled and $C_{\mathrm{success}}$ is set to 0. Similarly, if $C_{\mathrm{fail}}$ reaches $\mathcal{T}_{\mathrm{fail}}$, the sampling radius is halved and $C_{\mathrm{fail}}$ is set to 0.

In the asynchronous setting, we update the counters after each completed function evaluation. We do not update the counters for evaluations that were launched before the last time the sampling radius was changed. The reason for this is that these evaluations are based on outdated information. The logic for updating the sampling radius and the best solution can be seen in Algorithm 4.

1:Inputs: $x_{\mathrm{new}}$, $f(x_{\mathrm{new}})$, $x^{\mathrm{best}}$, $f^{\mathrm{best}}$, $\sigma$, $C_{\mathrm{success}}$, $C_{\mathrm{fail}}$, $\mathcal{T}_{\mathrm{success}}$, $\mathcal{T}_{\mathrm{fail}}$
2:if $f(x_{\mathrm{new}}) < f^{\mathrm{best}}$ then ▷ Improvement
3:     $x^{\mathrm{best}} \leftarrow x_{\mathrm{new}}$, $f^{\mathrm{best}} \leftarrow f(x_{\mathrm{new}})$
4:     if the improvement is significant then
5:         $C_{\mathrm{success}} \leftarrow C_{\mathrm{success}} + 1$, $C_{\mathrm{fail}} \leftarrow 0$
6:     end if
7:else
8:     $C_{\mathrm{fail}} \leftarrow C_{\mathrm{fail}} + 1$, $C_{\mathrm{success}} \leftarrow 0$
9:end if
10:if $C_{\mathrm{success}} \geq \mathcal{T}_{\mathrm{success}}$ or $C_{\mathrm{fail}} \geq \mathcal{T}_{\mathrm{fail}}$ then
11:     if $C_{\mathrm{success}} \geq \mathcal{T}_{\mathrm{success}}$ then
12:         $\sigma \leftarrow 2\sigma$
13:     else
14:         $\sigma \leftarrow \sigma / 2$
15:     end if
16:     $C_{\mathrm{success}} \leftarrow 0$, $C_{\mathrm{fail}} \leftarrow 0$
17:end if
18:return $x^{\mathrm{best}}$, $f^{\mathrm{best}}$, $\sigma$, $C_{\mathrm{success}}$, $C_{\mathrm{fail}}$
Algorithm 4 Sampling radius adjustment routine

We also follow the recommendations in Regis and Shoemaker (2007) and Regis and Shoemaker (2009) to restart the algorithm when we reach a maximum failure tolerance or when the sampling radius drops below a given threshold. Restarting has been shown to be successful for LMS-RBF and DYCORS, as it can be hard to make progress when the surrogate is strongly biased towards the current best solution and we may be stuck in a local minimum that is hard to escape. We do not terminate pending evaluations after a restart occurs, but they are not incorporated in the surrogate model or used to adjust the sampling radius when they finish.
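The counter logic above can be condensed into a small pure function. The tolerance defaults and the clamping of $\sigma$ to a fixed range are illustrative assumptions of ours, not pySOT's settings.

```python
def update_radius(improved, sigma, c_success, c_fail, t_success=3, t_fail=5,
                  sigma_min=0.005, sigma_max=0.2):
    """Double sigma after t_success consecutive (significant) improvements,
    halve it after t_fail consecutive failures, resetting the counters on
    each change. sigma is clamped to [sigma_min, sigma_max] for illustration."""
    if improved:
        c_success, c_fail = c_success + 1, 0
    else:
        c_success, c_fail = 0, c_fail + 1
    if c_success >= t_success:
        sigma, c_success = min(2 * sigma, sigma_max), 0
    elif c_fail >= t_fail:
        sigma, c_fail = max(sigma / 2, sigma_min), 0
    return sigma, c_success, c_fail
```

In the asynchronous setting, this routine would simply be skipped for results whose evaluations were launched before the last radius change, as described above.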

4 POAP implementation

This section describes the Plumbing for Optimization with Asynchronous Parallelism (POAP) framework, which can be downloaded from its GitHub repository. POAP has three main components: a controller that asks workers to run function evaluations, a strategy that proposes new actions, and a set of workers that carry out function evaluations.

4.1 Controllers

The controller is responsible for accepting or rejecting proposals by the strategy object, controlling and monitoring the workers, and informing the strategy object of relevant events. Examples of relevant events are the processing of a proposal, or status updates on a function evaluation. Interactions between controller and the strategies are organized around proposals and evaluation records. At the beginning of the optimization and on any later change to the system state, the controller requests a proposal from the strategy. The proposal consists of an action (evaluate a function, kill a function, or terminate the optimization), a list of parameters, and a list of callback functions to be executed once the proposal is processed. The controller then either accepts the proposal (and sends a command to the worker), or rejects the proposal.

When the controller accepts a proposal to start a function evaluation, it creates an evaluation record to share information about the status of the evaluation with the strategy. The evaluation record includes the evaluation point, the status of the evaluation, the value (if completed), and a list of callback functions to be executed on any update. Once a proposal has been accepted or rejected, the controller processes any pending system events (e.g. completed or canceled function evaluations), notifies the strategy about updates, and requests the next proposed action.
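To make the proposal / evaluation-record protocol concrete, here is a heavily simplified serial sketch of the interaction. The class and method names are illustrative, not the actual POAP API; a real controller also handles kill requests, worker failures, and asynchronous events.

```python
class EvalRecord:
    """Shares the status of one evaluation between controller and strategy."""
    def __init__(self, params):
        self.params, self.status, self.value = params, "pending", None
        self.callbacks = []
    def complete(self, value):
        self.status, self.value = "completed", value
        for cb in self.callbacks:   # notify subscribers of the update
            cb(self)

class Proposal:
    """An action ("eval" or "terminate"), parameters, and callbacks."""
    def __init__(self, action, params=None):
        self.action, self.params = action, params
        self.callbacks = []

class SerialController:
    """Requests proposals from the strategy and runs evaluations in serial."""
    def __init__(self, objective, strategy):
        self.objective, self.strategy = objective, strategy
    def run(self):
        best = None
        while True:
            prop = self.strategy.propose_action()
            if prop.action == "terminate":
                return best
            rec = EvalRecord(prop.params)          # accept the proposal
            for cb in prop.callbacks:
                cb(prop)
            rec.complete(self.objective(*prop.params))
            self.strategy.on_complete(rec)         # inform the strategy
            if best is None or rec.value < best.value:
                best = rec

class FixedSampleStrategy:
    """Proposes each point from a fixed list, then terminates."""
    def __init__(self, points):
        self.points = list(points)
    def propose_action(self):
        if self.points:
            return Proposal("eval", (self.points.pop(0),))
        return Proposal("terminate")
    def on_complete(self, record):
        pass
```

For example, `SerialController(lambda x: (x - 2) ** 2, FixedSampleStrategy([0, 1, 2, 3])).run()` returns the record for the point 2, which has the best value 0.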

POAP comes with a serial controller for when objective function evaluations are carried out in serial. There is also a threaded controller that dispatches work to a set of workers where each worker is able to handle evaluation and kill requests. The requests are asynchronous in the sense that the workers are not required to complete the evaluation or termination requests. The worker is forced to respond to evaluation requests, but may ignore kill requests. When receiving an evaluation request, the worker should either attempt the evaluation or mark the record as killed. The worker sends status updates back to the controller by updating the relevant record. There is also an extension of the threaded controller that works with MPI and a controller that uses simulated time. The latter is useful for testing asynchronous optimization strategies for different evaluation time distributions.

4.2 Strategies

The strategy is responsible for choosing new evaluations, killing evaluations, and terminating the optimization run when a stopping criterion is reached. POAP provides some basic default strategies based on non-adaptive sampling and serial optimization routines, as well as strategies that adapt or combine other strategies.

Different strategies can be composed by combining their control actions, which can be used to let a strategy cycle through a list of optimization strategies and select the most promising of their proposals. Strategies can also subscribe to be informed of all new function evaluations so they incorporate any new function information, even though the evaluation was proposed by another strategy. This makes it possible to start several independent strategies while still allowing each strategy to look at the function information that comes from function evaluations proposed by other strategies. As an example, we can have a local optimizer strategy running a gradient based method where the starting point can be selected based on the best point found by any other strategy. The flexibility of the POAP framework makes combined strategies like these straightforward.
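A minimal sketch of this composition idea (class names and interface are hypothetical, not POAP's): a combiner cycles through child strategies for proposals and broadcasts every completed evaluation to all children, so each child can use information from points it did not propose.

```python
class ListStrategy:
    """Toy child strategy: proposes points from a fixed list."""
    def __init__(self, points):
        self.points, self.seen = list(points), []
    def propose_action(self):
        return self.points.pop(0) if self.points else None
    def on_complete(self, value):
        self.seen.append(value)

class RoundRobinStrategy:
    """Cycle through child strategies and share all results with each."""
    def __init__(self, strategies):
        self.strategies, self.i = strategies, 0
    def propose_action(self):
        proposal = self.strategies[self.i].propose_action()
        self.i = (self.i + 1) % len(self.strategies)
        return proposal
    def on_complete(self, value):
        for s in self.strategies:   # every child sees every evaluation
            s.on_complete(value)
```

A local-optimizer child could then pick its starting point from the best value observed across all children.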

4.3 Workers

The multi-threaded controller employs a set of workers that are capable of managing concurrent function evaluations. A worker does not provide parallelism on its own, but it may exploit parallelism through separate external processes.

The basic worker class can call Python objective functions, which only results in parallelism if the objective function itself allows parallelism. There is also a worker class that uses subprocesses to evaluate objective functions that are not necessarily in Python. The user is responsible for specifying how to evaluate the objective function and parse partial information.
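As an illustration of the subprocess-based pattern (the command and parsing step below are hypothetical, not pySOT's worker class), the external program prints its result to stdout and the worker parses it; here the Python interpreter itself stands in for the external executable.

```python
import subprocess
import sys

def external_eval(x):
    """Evaluate f(x) = x**2 via an external process that prints its result."""
    cmd = [sys.executable, "-c", f"print({x} ** 2)"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return float(out.stdout.strip())   # user-defined parsing of the output
```

In practice the user supplies both the command to run and the logic that parses partial or final output.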

The number of workers can be adjusted dynamically during the optimization process, which is particularly useful in a cloud setting. POAP supports running both on the Google Cloud platform (GCP) and the Amazon Web Services (AWS). We support workers connecting to a specified TCP/IP port to communicate with the controller, making it easy to add external resources.

5 pySOT implementation

The Python Surrogate Optimization Toolbox (pySOT), which can be downloaded from its GitHub repository, is a collection of surrogate optimization strategies that can be used with the POAP framework. pySOT follows the general surrogate optimization framework in Algorithm 1 and supports asynchrony as described in Algorithm 3. We illustrate the communication between POAP and pySOT in Figure 2. pySOT communicates with POAP through the optimization strategy, which is responsible for proposing an action when events occur. All of the worker communication is handled by the POAP controller.

Figure 2: Communication between POAP and pySOT

The pySOT objects follow abstract class definitions to make sure that custom implementations fit the framework design. This is achieved by requiring objects to inherit from an abstract base class (ABC), which makes it easy for users to add their own implementations. We proceed to describe each object and its role in the pySOT framework.

5.1 Strategies and auxiliary problems

The strategy object follows the POAP framework. pySOT implements an asynchronous base class for surrogate optimization which serves as a template for all surrogate optimization strategies in pySOT. This base class abstracts away the differences between serial, synchronous parallel, and asynchronous parallel execution. pySOT supports the candidate point methods SRBF and DYCORS. We also support strategies for the most common acquisition functions from BO: expected improvement (EI) and the lower confidence bound (LCB).

5.2 Experimental design

pySOT implements the symmetric Latin hypercube (SLHD), Latin hypercube (LHD), and 2-factorial designs that were described in §2.1. The experimental design is always evaluated first, and the asynchronous optimization strategy in pySOT proceeds to the adaptive phase as soon as no initial design points are outstanding. Another possibility is to cancel the pending evaluations from the initial phase and proceed to the adaptive phase as soon as possible, but we choose to finish the entire initial design because exploration is important for multi-modal optimization problems. As discussed in the previous section, we must choose enough initial design points to be able to build the surrogate model once all points in the initial design are either completed or pending.
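For intuition, here is a sketch of a plain Latin hypercube design on [0, 1]^d; the symmetric variant (SLHD) used by pySOT additionally enforces a mirror symmetry, which we omit. The function name and signature are our own, not pySOT's.

```python
import random

def latin_hypercube(num_pts, dim, rng=random):
    """Each of the num_pts bins along every dimension gets exactly one point."""
    cols = [list(range(num_pts)) for _ in range(dim)]
    for c in cols:                 # random bin assignment per dimension
        rng.shuffle(c)
    design = []
    for i in range(num_pts):
        # place the point uniformly inside its assigned bin in each dimension
        design.append([(cols[j][i] + rng.random()) / num_pts
                       for j in range(dim)])
    return design
```

The stratification guarantees one-dimensional coverage: projecting the design onto any axis hits every bin exactly once.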

5.3 Surrogate models

pySOT supports many popular surrogate models, including RBFs, GPs, MARS, polynomial regression, and support vector regression. We provide our own RBF implementation that uses the incremental factorization update idea described in §2.2.1. Support for MARS is provided via py-earth, and support for GPs and polynomial regression is provided through scikit-learn Pedregosa et al. (2011). The surrogate model does not need access to any of the other objects, as it just constructs a model based on the evaluated points and their values. The surrogate fitting problem may be ill-conditioned if the domain is scaled poorly, and we provide wrappers for rescaling the domain to the unit hypercube, which is particularly useful on problems where the bounds are very skewed. We add regularization to the linear system when radial basis functions are used to keep the system well-conditioned. Previous work has shown that hard-capping of function values can be useful to avoid oscillation, where a common choice is to replace all function values above the median by the median function value, and we provide wrappers for this as well.
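The cubic-RBF-plus-linear-tail construction with a small diagonal regularization term can be sketched with NumPy as follows. This is an illustration of the standard construction, not pySOT's RBFInterpolant, and the regularization parameter `lam` is a stand-in.

```python
import numpy as np

def fit_rbf(X, f, lam=1e-8):
    """X: (n, d) points, f: (n,) values. Returns a callable surrogate."""
    n, d = X.shape
    # Pairwise distances and cubic kernel phi(r) = r^3
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = r ** 3 + lam * np.eye(n)           # regularized kernel block
    P = np.hstack([np.ones((n, 1)), X])      # linear tail [1, x]
    # Solve the saddle-point system [[Phi, P], [P^T, 0]] [c; b] = [f; 0]
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    sol = np.linalg.solve(A, rhs)
    c, b = sol[:n], sol[n:]
    def predict(x):
        rx = np.linalg.norm(X - x, axis=1)
        return rx ** 3 @ c + b[0] + x @ b[1:]
    return predict
```

Because of the linear tail, the model reproduces affine functions exactly, and the `lam` term keeps the system well-conditioned when points cluster.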

5.4 Optimization problems

The optimization problem object specifies the number of dimensions and the number of analytical constraints, and provides methods for evaluating the objective function and the constraints. We provide implementations of many standard test problems which can be used to compare algorithms within the pySOT framework. The optimization problem does not depend on any other objects.

5.5 Checkpointing

Checkpointing is important when optimizing an expensive function since the optimization may run for several days or weeks, and it would be devastating if all information were lost due to, e.g., a system or power failure. pySOT provides a controller wrapper that saves the state of the system each time it changes, making it possible to resume from the latest such snapshot.
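A minimal sketch of the snapshot idea follows; the file layout and state contents are illustrative, and pySOT's controller wrapper handles this automatically.

```python
import os
import pickle

def save_snapshot(path, state):
    """Persist the optimizer state; called after every system change."""
    # Write to a temp file first so a crash mid-write cannot corrupt
    # the previous snapshot, then atomically replace it.
    tmp = path + ".tmp"
    with open(tmp, "wb") as fh:
        pickle.dump(state, fh)
    os.replace(tmp, path)

def load_snapshot(path):
    """Resume from the latest snapshot on disk."""
    with open(path, "rb") as fh:
        return pickle.load(fh)
```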

6 Code examples

This section illustrates how POAP and pySOT can be used to minimize the Ackley test function. We will use the threaded controller and asynchronous function evaluations. The code is based on pySOT version 0.2.3 and POAP version 0.1.26.

Our goal in this example is to minimize the 10-dimensional Ackley function, a common test function in global optimization. We use a symmetric Latin hypercube, an RBF surrogate with a cubic kernel and linear tail, and the DYCORS strategy for generating candidate points. Importing the necessary modules can be done as follows:

from pySOT.optimization_problems import Ackley
from pySOT.experimental_design import SymmetricLatinHypercube
from pySOT.surrogate import RBFInterpolant, CubicKernel, LinearTail
from pySOT.strategy import DYCORSStrategy
from poap.controller import ThreadController, BasicWorkerThread

We next create objects for the optimization problem, experimental design, and surrogate model.

num_threads = 4
max_evals = 500
dim = 10
ackley = Ackley(dim=dim)
rbf = RBFInterpolant(dim=dim, kernel=CubicKernel(), tail=LinearTail(dim))
slhd = SymmetricLatinHypercube(dim=dim, num_pts=2*(dim+1))

We are now ready to launch a threaded controller that will run asynchronous evaluations. We create an instance of the DYCORS strategy and append it to the controller.

controller = ThreadController()
controller.strategy = DYCORSStrategy(
    opt_prob=ackley, exp_design=slhd, surrogate=rbf,
    max_evals=max_evals, asynchronous=True)

We need to launch the workers that carry out function evaluations. In this example we use standard threads and give each worker a handle to the objective function.

for _ in range(num_threads):
    worker = BasicWorkerThread(controller, ackley.eval)
    controller.launch_worker(worker)

The workers have been launched and the optimization strategy has been created, so we are ready to start the optimization run. The following code runs the optimizer and prints the best solution.

result = controller.run()
print("Best value found: {0}".format(result.value))
print("Best solution found: {0}".format(result.params[0]))

7 Numerical experiments

In this section we study the performance of serial, synchronous parallel, and asynchronous parallel when varying the evaluation time distribution and the number of processors. We focus on the DYCORS method using a cubic RBF surrogate with a linear tail as both have very low overhead, allowing us to run many trials, each with a large number of function evaluations. Previous work has shown that DYCORS outperforms the competing methods for computationally expensive multi-modal functions in a large number of dimensions Regis and Shoemaker (2013).

The evaluation times are drawn from a Pareto distribution with probability density function (PDF)

f(t) = α t_m^α / t^(α+1) for t ≥ t_m, and f(t) = 0 otherwise.

The Pareto distribution is heavy-tailed for small values of the shape parameter α, which makes it suitable for studying large variance in the evaluation time. We use t_m = 1, so the support is [1, ∞), and we use different values of α to achieve different tail behaviors and hence different standard deviations of the evaluation time. This setup models homogeneous resources and evaluation times that do not depend on the evaluation point.
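Evaluation times with this distribution can be drawn by inverse-CDF sampling; the helper below is an illustrative sketch with t_m = 1. For α > 1 the mean is α/(α − 1), so α = 3 gives a mean evaluation time of 1.5, while smaller α gives a heavier tail and larger variance.

```python
import random

def pareto_time(alpha, rng=random):
    """Draw t >= 1 from a Pareto(alpha) distribution with minimum t_m = 1."""
    u = 1.0 - rng.random()       # u is uniform on (0, 1]
    return u ** (-1.0 / alpha)   # inverse CDF of the Pareto distribution
```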

We run the serial and synchronous parallel versions with their default hyper-parameters since both methods showed good results in Regis and Shoemaker (2007) and Regis and Shoemaker (2009), respectively. The hyper-parameter values for the asynchronous algorithm are the same as for the synchronous parallel version, except for the failure tolerance, which is scaled up because we count individual evaluations rather than batches. The hyper-parameter values are shown in Table 1. We restart the algorithm with a new experimental design if the sampling radius has reached its minimum value and the algorithm has failed to make a significant improvement over the most recent evaluations. Our experiments use 4, 8, 16, and 32 workers for the parallel algorithms and we give each algorithm a budget of 1600 evaluations. We exclude the overhead from fitting the surrogate and generating candidate points since this is negligible when the function evaluations are truly expensive.

Hyper-parameter Value
(number of candidate points per proposal)
(weight pattern)
(number of weights in the weight pattern)
(initial step size)
(minimum step size)
(radius tolerance)
(threshold parameter for increasing the step size)
(tolerance parameter for decreasing step size)
(maximum failure tolerance parameter)
Table 1: Hyper-parameter values used for the asynchronous algorithm

We consider the multimodal test problems F15–F24 from the BBOB test suite Hansen et al. (2009). These problems are challenging and non-separable, and we use the -dimensional versions for our experiments. The domain for each problem is [-5, 5]^d, and the location of the global minimum and the value at the global optimum are generated randomly depending on which instance is used; we use a fixed instance for each problem. These problems are not expensive to evaluate, but we treat them as computationally expensive black-box functions and draw the evaluation times from a Pareto distribution.

We compare progress vs time and progress vs number of evaluations. Comparing progress vs time shows which method does well in practice, since we are often constrained by a time budget rather than an evaluation budget. It also shows the effect of adding more processors, which we expect to be fruitful since exploration is important for multi-modal problems. We compare progress vs number of evaluations to study the importance of information. The serial and synchronous methods are independent of the evaluation time in this case since there is a barrier after each batch. The asynchronous algorithm is affected by the variance, which determines how much information is available at a given iteration. The serial version always has more points incorporated in the surrogate at a given iteration, but explores less than the parallel versions. Figure 3 shows the experimental results for F15 and F17.

Figure 3: Progress vs time and progress vs number of evaluations for F15 and F17. The error bars show the standard error based on 100 trials using 1600 evaluations. The plots with respect to number of evaluations are zoomed in to make the lines easier to distinguish.

These problems are chosen because they illustrate the key points we are trying to convey. The first row in each plot shows absolute error vs time and the second row shows absolute error vs number of evaluations. The error is the difference between the best objective function value found so far and the value at the global optimum. The absolute error is plotted on a log-scale in all plots to make the results easier to interpret. This should be taken into account when looking at the error bars, which show the standard error of the estimated mean based on 100 trials. Note that each row has the same range in absolute error to make comparison easier.

F15 illustrates a case where synchronous and asynchronous parallel perform similarly with respect to time when the variance is small. The difference grows as the variance increases, and asynchrony is always superior in the large-variance case. Synchrony does slightly better than asynchrony in the small-variance case for F17. The results versus number of evaluations are interesting: asynchrony with 4 processors is consistently the best choice for both F15 and F17. This is unexpected, since the asynchronous versions never have more information than the corresponding synchronous versions, indicating that maximizing information is not as important on multi-modal problems. We also see that the serial version is outperformed by the parallel versions when looking at number of evaluations, another indicator that exploration matters more than information.

We can also compare the speedup from using more workers for synchronous and asynchronous parallel. Speedup is classically measured by the quantity S_p = T_s* / T_p, where T_s* is the runtime of the fastest serial algorithm and T_p is the runtime using p workers. Identifying the fastest serial algorithm is hard given randomized initial conditions, so we instead consider relative speedup, in which T_s* is replaced by T_s, the runtime of our serial algorithm. We measure speedup by running synchronous and asynchronous parallel and comparing the results to the serial case. Since all of our algorithms are stochastic, we estimate the expected value of the speedup.
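The relative speedup computation itself is simple; the sketch below (function name and inputs are hypothetical) averages per-trial times to reach a fixed target value and takes the ratio.

```python
def relative_speedup(serial_times, parallel_times):
    """Ratio of mean serial time to mean p-worker time to reach a target.

    Each argument is a list of per-trial times to reach the same target value.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(serial_times) / mean(parallel_times)
```

With trial averages of 10 time units in serial and 2.5 time units with p workers, the relative speedup is 4.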

A main problem with speedup tests is choosing a good target value where the speedup is measured. For unimodal problems, such a target can be based on a small neighborhood of the global minimum, but this is unreasonable for multimodal test problems. Our approach is to run each algorithm for 100 trials, compute the intersection of the ranges of function values, and compute the speedup for a set of targets within the intersection. This allows us to see how the speedup depends on different target values. Figure 4 shows the speedup for F15 and F17.

Figure 4: Relative speedup for different target values for F15 and F17. The error bars show the standard error based on 100 trials using 1600 evaluations. Results show the speedup of reaching the target values as in equation (12).

We achieve close to linear speedup with asynchrony for small target values when using 4, 8, and 16 processors on F15 in the case of small variance. The speedup is larger for synchrony when we consider 32 processors, but is clearly sub-linear. The speedup is larger for small target values, which indicates that the serial algorithm is more likely to get stuck in a local minimum which triggers a restart. The effect of increasing the variance clearly degrades the performance of synchrony while the results for asynchrony do not change much. The speedup on F17 is generally better for the synchronous algorithm in the case of small variance.

8 Conclusions

We have introduced the event-driven optimization framework POAP, which provides an easy way to build and combine new optimization algorithms for computationally expensive functions. POAP has three main components, a controller, a strategy, and a collection of workers. The controller accepts or rejects proposals from the strategies, monitors the workers, and informs the strategy when new events occur. The strategy proposes actions when an event occurs, such as starting a new function evaluation, re-evaluating an input, or terminating the optimization run. The workers do function evaluations when instructed by the controller and they support partial updates, making it possible to terminate unpromising evaluations. The flexibility of the POAP framework makes it easy to combine optimization strategies.

We have also introduced the interoperable library pySOT, which supports the most popular surrogate optimization methods. pySOT is a collection of synchronous/asynchronous strategies, experimental designs, surrogate models, and auxiliary functions that are commonly used in surrogate optimization. pySOT comes with a large set of standard test problems and serves as a test suite in which new optimization algorithms can be compared to existing methods using synchronous or asynchronous parallelism. The object-oriented design makes it easy to add new functionality, and there is also support for resuming crashed and terminated runs.

We have introduced a general asynchronous surrogate optimization method for computationally expensive black-box problems that extends the work in Regis and Shoemaker (2009) to the case when function evaluation times are not necessarily constant and computational resources are not necessarily homogeneous. Our version also handles worker failures and evaluation crashes, which was not considered in Regis and Shoemaker (2009).

The numerical experiments show that asynchrony performs similarly to synchrony even when the variance in evaluation time is small. Comparing progress vs number of evaluations showed that the serial method, which maximizes the information at each step, does not outperform the parallel methods. This is likely because exploring is more important than maximizing the information for each sample. The implication is that idle time is more important than information and the asynchronous method clearly outperforms the synchronous method when we increase the variance in evaluation time or number of processors. We also studied a unimodal function, in which case the serial method performs best when we compare progress vs number of evaluations. This is expected since exploration is fruitless for unimodal functions.

A relative speedup analysis shows good results for the asynchronous method and we achieve close to linear speedup for 4, 8, and 16 processors. The speedup for the synchronous method clearly decreases when the variance of the evaluation time increases, which is expected since this increases idle time. We conclude that for multi-modal problems asynchrony should be preferred over synchrony and that adding more processors leads to faster convergence.


The authors appreciated support from NSF CISE 1116298 to Prof. Shoemaker and Bindel, and Prof. Shoemaker’s start up grant from National University of Singapore. We also thank Dr. Taimoor Akhtar for his incorporation of multi-objective code into pySOT.


  • Regis and Shoemaker (2007) Rommel G Regis and Christine A Shoemaker. A stochastic radial basis function method for the global optimization of expensive functions. INFORMS Journal on Computing, 19(4):497–509, 2007.
  • Snoek et al. (2012) Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
  • Avriel (2003) Mordecai Avriel. Nonlinear programming: analysis and methods. Courier Corporation, 2003.
  • Hooke and Jeeves (1961) Robert Hooke and Terry A Jeeves. “Direct search” solution of numerical and statistical problems. Journal of the ACM (JACM), 8(2):212–229, 1961.
  • Nelder and Mead (1965) John A Nelder and Roger Mead. A simplex method for function minimization. The computer journal, 7(4):308–313, 1965.
  • Wild and Shoemaker (2011) Stefan M Wild and Christine Shoemaker. Global convergence of radial basis function trust region derivative-free algorithms. SIAM Journal on Optimization, 21(3):761–781, 2011.
  • Goldberg (2006) David E Goldberg. Genetic algorithms. Pearson Education India, 2006.
  • Kennedy (2010) James Kennedy. Particle swarm optimization. In Encyclopedia of Machine Learning, pages 760–766. Springer, 2010.
  • Storn and Price (1997) Rainer Storn and Kenneth Price. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization, 11(4):341–359, 1997.
  • Regis and Shoemaker (2013) Rommel G Regis and Christine A Shoemaker. Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Engineering Optimization, 45(5):529–555, 2013.
  • Gutmann (2001) H-M Gutmann. A radial basis function method for global optimization. Journal of Global Optimization, 19(3):201–227, 2001.
  • Jones (2001) Donald R Jones. A taxonomy of global optimization methods based on response surfaces. Journal of global optimization, 21(4):345–383, 2001.
  • Jones et al. (1998) Donald R Jones, Matthias Schonlau, and William J Welch. Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13(4):455–492, 1998.
  • Frazier et al. (2008) Peter I Frazier, Warren B Powell, and Savas Dayanik. A knowledge-gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 47(5):2410–2439, 2008.
  • Friedman (1991) Jerome H Friedman. Multivariate adaptive regression splines. The annals of statistics, pages 1–67, 1991.
  • Müller and Shoemaker (2014) Juliane Müller and Christine A Shoemaker. Influence of ensemble surrogate models and sampling strategy on the solution quality of algorithms for computationally expensive black-box global optimization problems. Journal of Global Optimization, 60(2):123–144, 2014.
  • Regis and Shoemaker (2009) Rommel G Regis and Christine A Shoemaker. Parallel stochastic global optimization using radial basis functions. INFORMS Journal on Computing, 21(3):411–426, 2009.
  • Krityakierne et al. (2016) Tipaluck Krityakierne, Taimoor Akhtar, and Christine A Shoemaker. Sop: parallel surrogate global optimization with pareto center selection for computationally expensive single objective problems. Journal of Global Optimization, 66(3):417–437, 2016.
  • Hold-Geoffroy et al. (2014) Yannick Hold-Geoffroy, Olivier Gagnon, and Marc Parizeau. Once you scoop, no need to fork. In Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment, page 60. ACM, 2014.
  • Plantenga (2009) Todd D Plantenga. Hopspack 2.0 user manual. Sandia National Laboratories Technical Report Sandia National Laboratories Technical Report SAND2009-6265, 2009.
  • Mueller (2014) Juliane Mueller. MATSuMoTo: The MATLAB surrogate model toolbox for computationally expensive black-box global optimization problems. arXiv preprint arXiv:1404.4261, 2014.
  • Le Digabel (2011) Sébastien Le Digabel. Algorithm 909: Nomad: Nonlinear optimization with the mads algorithm. ACM Transactions on Mathematical Software (TOMS), 37(4):44, 2011.
  • Eldred et al. (2007) Michael S Eldred, Anthony A Giunta, Bart G van Bloemen Waanders, Steven F Wojtkiewicz, William E Hart, and Mario P Alleva. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: Version 4.1 reference manual. Sandia National Laboratories Albuquerque, NM, 2007.
  • Bichon et al. (2008) Barron J Bichon, Michael S Eldred, Laura Painton Swiler, Sandaran Mahadevan, and John M McFarland. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA journal, 46(10):2459–2468, 2008.
  • Martinez-Cantin (2014) Ruben Martinez-Cantin. Bayesopt: A bayesian optimization library for nonlinear optimization, experimental design and bandits. The Journal of Machine Learning Research, 15(1):3735–3739, 2014.
  • Costa and Nannicini (2014) Alberto Costa and Giacomo Nannicini. Rbfopt: an open-source library for black-box optimization with costly function evaluations. Optimization Online, 4538, 2014.
  • Wu and Frazier (2016) Jian Wu and Peter Frazier. The parallel knowledge gradient method for batch bayesian optimization. In Advances in Neural Information Processing Systems, pages 3126–3134, 2016.
  • Wu et al. (2017) Jian Wu, Matthias Poloczek, Andrew G Wilson, and Peter Frazier. Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, pages 5273–5284, 2017.
  • Müller et al. (2013) Juliane Müller, Christine A Shoemaker, and Robert Piché. So-mi: A surrogate model algorithm for computationally expensive nonlinear mixed-integer black-box global optimization problems. Computers & Operations Research, 40(5):1383–1400, 2013.
  • Buhmann (2003) Martin D Buhmann. Radial basis functions: theory and implementations, volume 12. Cambridge university press, 2003.
  • Fasshauer (2007) Gregory E Fasshauer. Meshfree Approximation Methods with Matlab:(With CD-ROM), volume 6. World Scientific Publishing Company, 2007.
  • Schaback and Wendland (2006) Robert Schaback and Holger Wendland. Kernel techniques: from machine learning to meshless methods. Acta numerica, 15:543–639, 2006.
  • Wendland (2004) Holger Wendland. Scattered data approximation, volume 17. Cambridge university press, 2004.
  • Rasmussen and Williams (2006) Carl Edward Rasmussen and Christopher KI Williams. Gaussian process for machine learning. MIT press, 2006.
  • Wilson and Nickisch (2015) Andrew Wilson and Hannes Nickisch. Kernel interpolation for scalable structured gaussian processes (kiss-gp). In International Conference on Machine Learning, pages 1775–1784, 2015.
  • Dong et al. (2017) Kun Dong, David Eriksson, Hannes Nickisch, David Bindel, and Andrew G Wilson. Scalable log determinants for gaussian process kernel learning. In Advances in Neural Information Processing Systems, pages 6330–6340, 2017.
  • Müller and Piché (2011) Juliane Müller and Robert Piché. Mixture surrogate models based on dempster-shafer theory for global optimization problems. Journal of Global Optimization, 51(1):79–104, 2011.
  • Brochu et al. (2010) Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
  • Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825–2830, 2011.
  • Hansen et al. (2009) Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. PhD thesis, INRIA, 2009.