# Combining Bayesian Optimization and Lipschitz Optimization

Bayesian optimization and Lipschitz optimization have developed alternative techniques for optimizing black-box functions. They each exploit a different form of prior about the function. In this work, we explore strategies to combine these techniques for better global optimization. In particular, we propose ways to use the Lipschitz continuity assumption within traditional BO algorithms, which we call Lipschitz Bayesian optimization (LBO). This approach does not increase the asymptotic runtime and in some cases drastically improves the performance (while in the worst case the performance is similar). Indeed, in a particular setting, we prove that using the Lipschitz information yields the same or a better bound on the regret compared to using Bayesian optimization on its own. Moreover, we propose a simple heuristic to estimate the Lipschitz constant, and prove that a growing estimate of the Lipschitz constant is in some sense "harmless". Our experiments on 15 datasets with 4 acquisition functions show that in the worst case LBO performs similarly to the underlying BO method, while in some cases it performs substantially better. Thompson sampling in particular typically saw drastic improvements (as the Lipschitz information corrected for its well-known "over-exploration" phenomenon), and its LBO variant often outperformed other acquisition functions.

## 1 Introduction

Bayesian optimization (BO) has a long history and has been used in a variety of fields (see Shahriari et al, 2016), with recent interest from the machine learning community in the context of automatic hyper-parameter tuning (Snoek et al, 2012; Golovin et al, 2017). BO is an example of a global black-box optimization algorithm (Hendrix et al, 2010; Jones et al, 1998; Pintér, 1991; Rios and Sahinidis, 2013) which optimizes an unknown function that may not have nice properties such as convexity. In the typical setting, we assume that we only have access to a black box that evaluates the function and that it is expensive to do these evaluations. The objective is to find a global optimum of the unknown function with the minimum number of function evaluations.

The global optimization of a real-valued function is impossible unless we make assumptions about the structure of the unknown function. Lipschitz continuity (that the function can’t change arbitrarily fast as we change the inputs) is one of the weakest assumptions under which optimizing an unknown function is still possible. Lipschitz optimization (Piyavskii, 1972; Shubert, 1972) (LO) exploits knowledge of the Lipschitz constant of the function (a bound on the amount that it can change) to prune the search space in order to locate the optimum. In contrast, Bayesian optimization makes a stronger assumption that the unknown function belongs to a known model class (typically a class of smooth functions), the most common being a Gaussian process (GP) generated using a Gaussian or Matérn kernel (Stein, 2012). We review LO and BO in Section 2.

Under their own specific sets of additional assumptions, both BO (Bull, 2011, Theorem 5) and LO (Malherbe and Vayatis, 2017) can be shown to be exponentially faster than random search strategies. If the underlying function is close to satisfying the stronger BO assumptions, then typically BO is able to optimize functions faster than LO. However, when these assumptions are not reasonable, BO may converge more slowly than simply trying random values (Li et al, 2016; Ahmed et al, 2016). On the other hand, LO makes minimal assumptions (not even requiring differentiability: the absolute value function is an example of a simple non-differentiable but Lipschitz-continuous function) and simply prunes away values of the parameters that are not compatible with the Lipschitz condition and thus cannot be solutions. This is useful in speeding up simple algorithms like random search. Given a new function to optimize, it is typically not clear which of these strategies will perform better.

In this paper, we propose to combine BO and LO to exploit the advantages of both methods. We call this Lipschitz Bayesian Optimization (LBO). Specifically, in Section 3, we design mixed acquisition functions that use Lipschitz continuity in conjunction with existing BO algorithms. We also address the issue of providing a “harmless” estimate of the Lipschitz constant (see Section 2.3), which is an important practical issue for any LO method. Our experiments (Section 4) indicate that in some settings the addition of estimated Lipschitz information leads to a huge improvement over standard BO methods. This is particularly true for Thompson sampling, which often outperforms other standard acquisition functions when augmented with Lipschitz information. This seems to be because the estimated Lipschitz continuity seems to correct for the well-known problem of over-exploration (Shahriari et al, 2014). Further, our experiments indicate that it does not hurt to use the Lipschitz information since even in the worst case it does not change the runtime or the performance of the method.

## 2 Background

We consider the problem of maximizing a real-valued function f with parameters x over a compact set X. We assume that on iteration t, an algorithm chooses a point x_t ∈ X and then receives the corresponding function value f(x_t). Typically, our goal is to find the largest possible f(x_t) across iterations. We describe two approaches for solving this problem, namely BO and LO, in detail below.

### 2.1 Bayesian Optimization

BO methods are typically based on Gaussian processes (GPs), since they have appealing universal consistency properties over compact sets and admit a closed-form posterior distribution (Rasmussen and Williams, 2006). BO methods typically assume a smooth GP prior on the unknown function, and use the observed function evaluations to compute a posterior distribution over the possible function values f(x) at any point x. At iteration t, given the previously selected points x_1, …, x_{t−1} and their corresponding observations f(x_1), …, f(x_{t−1}), the algorithm uses an acquisition function (based on the GP posterior) to select the next point x_t to evaluate. The value of the acquisition function at a point x characterizes the importance of evaluating that point in order to maximize f. To determine x_t, we usually maximize this acquisition function over all x ∈ X using an auxiliary optimization procedure (typically we can only approximately solve this maximization).

We now formalize the above high-level procedure. We assume that f follows a Gaussian process prior, f ~ GP(0, k), where k(x, y) is a kernel function which quantifies the similarity between points x and y. Throughout this paper, we use the Matérn kernel, for which k(x, y) = (1 + √5 r + (5/3) r²) exp(−√5 r), where r² = Σ_j (x_j − y_j)² / ℓ_j². Here the hyper-parameter ℓ_j is referred to as the length-scale for dimension j and dictates the extent of smoothness we assume about the function in direction j. The hyper-parameter σ² quantifies the amount of noise we expect in the function values.

We denote the maximum function value observed up to iteration t as y* = max_{i ∈ [t−1]} f(x_i) and the set {1, …, t−1} as [t−1]. Let k_t(x) = [k(x, x_1), …, k(x, x_{t−1})]^T and let us denote the kernel matrix as K (so K_{ij} = k(x_i, x_j) for all i, j ∈ [t−1]). Given the function evaluations y_t = [f(x_1), …, f(x_{t−1})]^T (observations), the posterior distribution at point x after t iterations is N(μ_t(x), σ_t²(x)). Here, the mean and variance of the function at x are given as:

 μ_t(x) = k_t(x)^T (K + σ² I_t)^{−1} y_t,  σ_t²(x) = k(x, x) − k_t(x)^T (K + σ² I_t)^{−1} k_t(x). (1)
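As a concrete illustration, the posterior in Equation (1) can be computed directly with NumPy. This is a minimal sketch under our own naming conventions (not the paper's code); the kernel is the Matérn 5/2 form with k(x, x) = 1:

```python
import numpy as np

def matern52(A, B, ls):
    """Matérn 5/2 kernel matrix between the rows of A and B."""
    diff = (A[:, None, :] - B[None, :, :]) / ls
    r = np.sqrt((diff ** 2).sum(axis=-1))
    return (1 + np.sqrt(5) * r + 5 * r ** 2 / 3) * np.exp(-np.sqrt(5) * r)

def gp_posterior(x, X_t, y_t, ls, noise):
    """Posterior mean and standard deviation at a single point x (Equation 1)."""
    K = matern52(X_t, X_t, ls)                    # kernel matrix K
    k_x = matern52(X_t, x[None, :], ls)[:, 0]     # vector k_t(x)
    w = np.linalg.solve(K + noise ** 2 * np.eye(len(X_t)), k_x)
    mu = w @ y_t                                  # k_t(x)^T (K + σ² I)^{-1} y_t
    var = 1.0 - w @ k_x                           # k(x, x) = 1 for this kernel
    return mu, np.sqrt(max(var, 1e-12))
```

With small observation noise, the posterior mean at an observed point nearly interpolates the observed value, and the posterior standard deviation there is close to zero.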

As alluded to earlier, an acquisition function uses the above posterior distribution in order to select the next point at which to evaluate the function. A number of acquisition functions have been proposed in the literature, the most popular being the upper confidence bound (UCB) (Srinivas et al, 2010), Thompson sampling (TS) (Thompson, 1933), expected improvement (EI) (Močkus, 1975), probability of improvement (PI) (Kushner, 1964), and entropy search (Villemonteix et al, 2009; Hennig and Schuler, 2012; Hernández-Lobato et al, 2014). In this work, we focus on four simple widely-used acquisition functions: UCB, TS, EI, and PI. However, we expect that our conclusions would apply to other acquisition functions. For brevity, when defining the acquisition functions, we drop the iteration subscripts from μ_t and σ_t.
UCB: The acquisition function is defined as:

 UCB(x) = μ(x) + β_t^{1/2} σ(x) (2)

Here, β_t is a positive parameter that trades off exploration and exploitation.
TS: For TS, in each iteration we first sample a function f̃ from the GP posterior. TS then selects the point x_t which maximizes this deterministic function f̃.
PI: We define the possible improvement (over the current maximum) at x as I(x) = max{0, f(x) − y*} and the indicator of improvement u(x) as

 u(x) = { 1, if f(x) > y*; 0, otherwise. }

PI selects the point which maximizes the probability of improving over y*. If ϕ and Φ are the probability density function and the cumulative distribution function for the standard normal distribution, then the PI acquisition function is given as (Kushner, 1964):

 PI(x) = ∫_{−∞}^{∞} u(x) ϕ(f(x)) df = Φ(z(x, y*)) (3)

where we have defined z(x, y) = (μ(x) − y) / σ(x).
EI: EI selects an x that maximizes the expected improvement E[I(x)], where the expectation is over the posterior distribution N(μ(x), σ²(x)) of f(x). If ϕ is the pdf of the standard normal distribution, the expected improvement acquisition function can be written as (Močkus, 1975):

 EI(x) = ∫_{−∞}^{∞} I(x) ϕ(f(x)) df = ∫_{y*}^{∞} (f(x) − y*) ϕ(f(x)) df = σ(x)·[z(x, y*)·Φ(z(x, y*)) + ϕ(z(x, y*))] (4)
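The acquisition functions above reduce to simple formulas in μ(x), σ(x), and y*. A self-contained sketch using only the standard library (the function names are ours):

```python
import math

def Phi(t):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):  # standard normal pdf
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def z(mu, sigma, y):
    return (mu - y) / sigma

def ucb(mu, sigma, beta):        # Equation (2)
    return mu + math.sqrt(beta) * sigma

def pi_acq(mu, sigma, y_star):   # Equation (3)
    return Phi(z(mu, sigma, y_star))

def ei_acq(mu, sigma, y_star):   # Equation (4)
    zz = z(mu, sigma, y_star)
    return sigma * (zz * Phi(zz) + phi(zz))
```

For example, at a point whose posterior mean equals the incumbent y*, PI is exactly 0.5 and EI is σ·ϕ(0) ≈ 0.3989·σ.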

### 2.2 Lipschitz Optimization

As opposed to assuming that the function comes from a specific family of functions, in LO we simply assume that the function cannot change too quickly as we change x. In particular, we say that a function f is Lipschitz-continuous if for all x ∈ X and y ∈ X we have

 |f(x) − f(y)| ≤ L ||x − y||₂, (5)

for a constant L which is referred to as the Lipschitz constant. Note that unlike typical priors used in BO (like the Gaussian or Matérn kernel), a function can be non-smooth and still be Lipschitz continuous.

Lipschitz optimization uses this Lipschitz inequality in order to test possible locations for the maximum of the function. In particular, at iteration t the Lipschitz inequality implies that the function's value at any x ∈ X can be upper and lower bounded for any i ∈ [t−1] by

 f(x_i) − L ||x − x_i||₂ ≤ f(x) ≤ f(x_i) + L ||x − x_i||₂.

Since the above inequality holds simultaneously for all i ∈ [t−1], for any x ∈ X the function value can be bounded as:

 f^l_{t−1}(x) ≤ f(x) ≤ f^u_{t−1}(x), where
 f^l_{t−1}(x) = max_{i ∈ [t−1]} { f(x_i) − L ||x − x_i||₂ },  f^u_{t−1}(x) = min_{i ∈ [t−1]} { f(x_i) + L ||x − x_i||₂ }. (6)

Notice that if f^u_{t−1}(x) < y*, then x cannot achieve a higher function value than our current maximum y*.

To exploit these bounds, at each iteration a typical Lipschitz optimization (LO) method, such as that of Malherbe and Vayatis (2017), samples points uniformly at random from X until it finds an x that satisfies f^u_{t−1}(x) ≥ y*. If we know the Lipschitz constant L (or use a valid upper bound on it), this strategy may prune away large areas of the space while guaranteeing that we do not prune away any optimal solutions. This can substantially decrease the number of function evaluations needed to come close to the global optimum compared to using random points without pruning.
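A minimal sketch of the bounds in Equation (6) and of the resulting pruned random search. The names and the rejection loop are our own illustration of the idea, not the exact algorithm of Malherbe and Vayatis (2017):

```python
import numpy as np

def lipschitz_bounds(x, X_t, y_t, L):
    """f^l(x) and f^u(x) from Equation (6)."""
    d = np.linalg.norm(X_t - x, axis=1)
    return np.max(y_t - L * d), np.min(y_t + L * d)

def pruned_random_candidate(X_t, y_t, L, lower, upper, rng):
    """Sample uniformly until the Lipschitz upper bound no longer rules out improvement."""
    y_star = y_t.max()
    while True:
        x = rng.uniform(lower, upper)
        _, f_up = lipschitz_bounds(x, X_t, y_t, L)
        if f_up >= y_star:   # x could still beat the incumbent
            return x
```

Every point returned by `pruned_random_candidate` is, by construction, still compatible with improving over the current maximum.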

A major drawback of Lipschitz optimization is that in most applications we do not know a valid L. We discuss this scenario in the next section, but first we note that there exist applications where we do have access to a valid L. For example, Bunin and François (2016) discuss cases where L can be dictated by the physical laws of the underlying process (e.g., in heat transfer, solid oxide fuel-cell systems, and polymerization). Alternately, if we have a lower and an upper bound on the possible values that the function can take, then we can combine this with the size of X to obtain an over-estimate of the minimum valid L.

### 2.3 Harmless Lipschitz Optimization

When our black-box function arises from a real-world process, a suitable value of L is typically dictated by physical limitations of the process. However, in practice we often do not know L and thus need to estimate it. A simple way to obtain an under-estimate of L at iteration t is to use the maximum value that satisfies the Lipschitz inequality across all pairs of points,

 L^lb_t = max_{i,j ∈ [t]; x_i ≠ x_j} { |f(x_i) − f(x_j)| / ||x_i − x_j||₂ }. (7)

Note that this estimate monotonically increases as we see more examples, but that it may be far smaller than the true value. A common variation is to sample several points on a grid (or randomly) to use in the estimate above. Unfortunately, without knowing the Lipschitz constant we do not know how fine this grid should be so in general this may still significantly under-estimate the true quantity.
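Equation (7), together with the growing estimate L_t = c · t · L^lb_t used later in this section, can be sketched as follows (variable names are ours):

```python
import numpy as np

def lipschitz_underestimate(X_t, y_t):
    """Largest observed pairwise slope, Equation (7): a (possibly loose) under-estimate of L."""
    best = 0.0
    for i in range(len(X_t)):
        for j in range(i):
            dist = np.linalg.norm(X_t[i] - X_t[j])
            if dist > 0:
                best = max(best, abs(y_t[i] - y_t[j]) / dist)
    return best

def growing_estimate(X_t, y_t, t, c):
    """Harmless growing estimate L_t = c * t * L^lb_t (Section 2.3)."""
    return c * t * lipschitz_underestimate(X_t, y_t)
```

Because the pairwise maximum can only increase as points are added, multiplying it by the iteration count guarantees the estimate keeps growing even when no new steep pair is observed.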

A reasonable property of any estimate of L that we use is that it is “harmless” in the sense of Ahmed et al (2016). Specifically, the choice of the estimate should not make the algorithm converge to the global optimum at a slower speed than random guessing (in the worst case). If we have an over-estimate of the minimum valid L, then the LO algorithm is harmless as it can only prune values that cannot improve the objective function (although if we over-estimate it by too much then it may not prune much of the space). However, the common under-estimates of L discussed in the previous paragraph are not harmless since they may prune the global optima.

We propose a simple solution to the problem that LO is not harmless if we don't have prior knowledge about L: we use a growing estimate of L. The danger in using a growing strategy is that if we grow the estimate too slowly then the algorithm may not be harmless. However, in the appendix we show that LO is “harmless” for most reasonable strategies for growing the estimate. This result is not prescriptive in the sense that it does not suggest a practical strategy for growing the estimate (since it depends on the true L), but it shows that even for enormous values of L an estimate would have to grow exceedingly slowly in order for it to not be harmless (exponentially slowly in the minimum value of L, the dimensionality, and the desired accuracy). In our experiments we simply use L_t = c · t · L^lb_t, the under-estimator multiplied by the (growing) iteration number t and a constant c (a tunable hyper-parameter). In Section 4, we observe that this choice of L_t consistently works well across 14 datasets with 4 different acquisition functions.

## 3 Lipschitz Bayesian optimization

In this section, we show how simple changes to the standard acquisition functions used in BO allow us to incorporate the Lipschitz inequality bounds. We call this Lipschitz Bayesian Optimization (LBO). LBO prevents BO from considering values of x that cannot be global maxima (assuming we have over-estimated L) and also restricts the range of values considered in the acquisition function to those that are consistent with the Lipschitz inequalities. Figure 1 illustrates the key features of BO, LO, and LBO. It is important to note that the Lipschitz constant L has a different interpretation than the length-scales ℓ of the GP. The constant L specifies an absolute maximum rate of change for the function, while ℓ specifies how quickly a parameterized distance between pairs of points changes the GP. We also note that the computational complexity of using the Lipschitz inequalities is O(t) per candidate point, which is cheaper than the O(t³) cost of (exact) inference in the GP.

We can use the Lipschitz bounds to restrict the limits of the unknown function value for computing the improvement. The upper bound will always be U = f^u(x), while the lower bound L will depend on the relative value of y*. In particular, we have the following two cases:

 L = { y*, if y* ∈ (f^l(x), f^u(x));  f^u(x), if y* ∈ (f^u(x), ∞). }

The second case represents points that cannot improve over the current best value (points that are “rejected” by the Lipschitz inequalities).
Truncated-PI: We can define a similar variant for the PI acquisition function as:

 TPI(x)=Φ(z(x,L))−Φ(z(x,U)) (8)

Truncated-EI: Using the above bounds, the truncated expected improvement for point x is given by:

 TEI(x) = σ(x)·z(x, y*)·[Φ(z(x, L)) − Φ(z(x, U))] + σ(x)·[ϕ(z(x, L)) − ϕ(z(x, U))] (9)

Note that removing the Lipschitz bounds corresponds to using L = y* and U = ∞, and in this case we recover the usual PI and EI methods in Equations (3) and (4), respectively.
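A sketch of the truncated acquisition values in Equations (8) and (9). The case analysis for the lower limit follows the two cases above, and with L = y* and U = ∞ the code reproduces plain PI and EI (names are ours):

```python
import math

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def truncation_limits(y_star, f_low, f_up):
    """Limits (L, U) of the improvement range implied by the Lipschitz interval."""
    U = f_up
    L = y_star if f_low < y_star < f_up else f_up   # second case: point is "rejected"
    return L, U

def truncated_pi(mu, sigma, y_star, f_low, f_up):   # Equation (8)
    L, U = truncation_limits(y_star, f_low, f_up)
    z = lambda y: (mu - y) / sigma
    return Phi(z(L)) - Phi(z(U))

def truncated_ei(mu, sigma, y_star, f_low, f_up):   # Equation (9)
    L, U = truncation_limits(y_star, f_low, f_up)
    z = lambda y: (mu - y) / sigma
    return sigma * (z(y_star) * (Phi(z(L)) - Phi(z(U))) + phi(z(L)) - phi(z(U)))
```

Points whose Lipschitz upper bound f^u(x) falls below y* collapse to L = U, so both truncated values become exactly zero there.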

Truncated-UCB: The same strategy can be applied to UCB as follows:

 TUCB(x) = min{ μ(x) + β_t^{1/2} σ(x), f^u(x) } (10)

Accept-Reject: An alternative strategy to incorporate the Lipschitz bounds is to use an accept-reject based mixed acquisition function. This approach uses the Lipschitz bounds as a sanity-check to accept or reject the value provided by the original acquisition function, similar to LO methods. Formally, if g(x) is the value of the original acquisition function (e.g., the posterior sample f̃(x) for TS), then the mixed acquisition function ḡ(x) is given as follows:

 ḡ(x) = { g(x), if g(x) ∈ [f^l(x), f^u(x)] (Accept);  −∞, otherwise (Reject). }

We refer to the accept-reject based mixed acquisition functions as AR-UCB and AR-TS, respectively. Note that the accept-reject method is quite generic and can be used with any acquisition function whose values are on the same scale as the function. When using an estimate of L it is possible that a good point could be rejected because the estimate of L is too small, but using a growing estimate ensures that such points can again be selected in later iterations.
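The accept-reject rule lends itself to a small wrapper around any acquisition whose values live on the function's scale; a sketch (names are ours):

```python
import math

def accept_reject(g_value, f_low, f_up):
    """Keep g(x) only if it is consistent with the Lipschitz interval [f^l(x), f^u(x)]."""
    return g_value if f_low <= g_value <= f_up else -math.inf

def ar_argmax(candidates, g, bounds):
    """Pick the candidate maximizing the accept-reject mixed acquisition."""
    best_x, best_v = None, -math.inf
    for x in candidates:
        lo, hi = bounds(x)
        v = accept_reject(g(x), lo, hi)
        if v > best_v:
            best_x, best_v = x, v
    return best_x
```

Here `g` could be a Thompson sample of the posterior and `bounds` the function returning (f^l(x), f^u(x)); rejected candidates get value −∞ and so never win the argmax.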

### 3.1 Regret bound for AR-UCB

In this section, we show that under reasonable assumptions, AR-UCB is provably “harmless” in the sense that it retains the good theoretical properties of GP-UCB. We prove the following theorem under the following assumptions:

1. The GP is correctly specified and, with infinite observations, the posterior distribution will collapse to the “true” function f.

2. The noise in the observations is small enough for the Lipschitz bounds in Equation (6) to hold.

3. The Lipschitz constant L is known or has been over-estimated using the techniques described in Section 2.3.

Assumption 1 is a common assumption made for providing theoretical results for GP-UCB (Srinivas et al, 2010). Under these assumptions, we obtain the following theorem (proved in Appendix B):

###### Theorem 1.

Let D be a finite decision space and σ be the standard deviation of the noise in the observations. Let δ be a positive scalar such that δ ∈ (0, 1) and set β_t = 2 log(|D| t² π² / 6δ). If we use the AR-UCB algorithm with this β_t, assuming that the above conditions 1–3 hold, then the cumulative regret can be bounded, with probability at least 1 − δ, as follows:

 R(T) ≤ √( (8 / log(1 + σ^{−2})) β_T γ_T T )

Here, γ_T refers to the information gain for the selected points and depends on the kernel being used. For the squared exponential kernel, we obtain the following specific bound:

 R(T) ≤ √( (8 / log(1 + σ^{−2})) β_T (log T)^{d+1} T )

The term γ_T can also be bounded for the Matérn kernel following Srinivas et al (2010). The above theorem shows that under reasonable assumptions, using the Lipschitz bounds in conjunction with GP-UCB cannot result in worse regret. We empirically show that if L is over-estimated, then AR-UCB matches the performance of GP-UCB in the worst case.

Note that the above theorem assumes that the GP is correctly specified with the correct hyper-parameters. It also assumes that we are able to specify the correct value of the trade-off parameter β_t. These assumptions are not guaranteed to hold in practice, and this may result in worse performance of the GP-UCB algorithm. In such cases, our experiments show that using the Lipschitz bounds can lead to better empirical performance than the original GP-UCB.

## 4 Experiments

Datasets: We perform an extensive experimental evaluation and present results on twelve synthetic datasets and three real-world tasks. For the synthetic experiments, we use standard global-optimization benchmarks, namely Branin, Camel, Goldstein Price, Hartmann (2 variants), Michalewicz (3 variants), and Rosenbrock (4 variants). The closed form and domain for each of these functions is given in Jamil and Yang (2013). As examples of real-world tasks, we consider tuning the parameters for a robot-pushing simulation (2 variants) (Wang and Jegelka, 2017) and tuning the hyper-parameters for logistic regression (Wu et al, 2017). For the robot-pushing example, our aim is to find a good pre-image (Kaelbling and Lozano-Pérez, 2017) in order for the robot to push the object to a pre-specified goal location. We follow the experimental protocol from Wang and Jegelka (2017) and use the negative of the distance to the goal location as the black-box function to maximize. We consider tuning the robot position and the duration of the push for the 3D case. We also tune the angle of the push to make it a 4-dimensional problem. For the hyper-parameter tuning task, we consider tuning the strength of the regularization, the learning rate for stochastic gradient descent, and the number of passes over the data. The black-box function is the negative loss on the test set (using a train/test split) for the MNIST dataset.
Experimental Setup: For Bayesian optimization, we use a Gaussian process prior with the Matérn kernel (with a different length-scale for each dimension). We modified the publicly available BO package pybo of Hoffman and Shahriari (2014) to construct the mixed acquisition functions. All the prior hyper-parameters were set and updated across iterations according to the open-source Spearmint package. In order to make the optimization invariant to the scale of the function values, similar to Spearmint, we standardize the function values: after each iteration, we centre the observed function values by subtracting their mean and dividing by their standard deviation. We then fit a GP to these rescaled function values and correct our Lipschitz constant estimate by dividing it by the standard deviation. We use DIRECT (Jones et al, 1993) to optimize the acquisition function in each iteration. This is one of the standard choices in current work on BO (Eric et al, 2008; Martinez-Cantin et al, 2007; Mahendran et al, 2012), but we expect that Lipschitz information could also improve performance under other approaches for optimizing the acquisition function, such as discretization (Snoek et al, 2012), adaptive grids (Bardenet and Kégl, 2010), and gradient-based methods (Hutter et al, 2011; Lizotte et al, 2012). In order to ensure that Bayesian optimization does not get stuck in sub-optimal maxima (either because of the auxiliary optimization or a “bad” set of hyper-parameters), on every fourth iteration of BO (or LBO) we choose a random point to evaluate rather than optimizing the acquisition function. This makes the optimization procedure “harmless” in the sense that BO (or LBO) will not perform worse than random search (Ahmed et al, 2016). This has become common in recent BO methods such as Bull (2011), Hutter et al (2011), and Falkner et al (2017), and to make the comparison fair we add this “exploration” step to all methods.
Note that in the case of LBO we may need to reject random points until we find one satisfying the Lipschitz inequalities (this does not require evaluating the function). In practice, we found that both the standardization and the iterations of random exploration are essential for good performance. (We verified that our baseline version of BO performs better than or equal to Spearmint across benchmark problems.) All our results are averaged over independent runs, and each of our figures plots the mean and standard deviation of the absolute error (compared to the global optimum) versus the number of function evaluations. For functions evaluated on a log scale, we show the 10th and 90th quantiles.
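The exploration schedule described above (a uniformly random point on every fourth iteration, with standardized observations in between) can be sketched as follows; `maximize_acq` stands in for the auxiliary acquisition optimizer, and all names are our own:

```python
import numpy as np

def harmless_bo_loop(f, lower, upper, n_iter, maximize_acq, seed=0):
    """Outer BO/LBO loop: every fourth iteration evaluates a random point,
    making the procedure "harmless" relative to random search."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for t in range(n_iter):
        if t < 2 or t % 4 == 0:
            x = rng.uniform(lower, upper)   # "exploration" step (and initial design)
        else:
            y_arr = np.array(y)
            y_std = (y_arr - y_arr.mean()) / (y_arr.std() + 1e-12)  # standardize observations
            x = maximize_acq(np.array(X), y_std, rng)
        X.append(x)
        y.append(f(x))
    return max(y)
```

Any acquisition maximizer plugged in as `maximize_acq` (DIRECT, a candidate grid, an LBO variant) inherits the worst-case guarantee from the periodic random steps.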
Algorithms compared: We compare the performance of random search, BO, and LBO methods (using both the estimated and the true L) for the EI, PI, UCB, and TS acquisition functions. The true L was estimated offline using a large number of random points. For UCB, we set the trade-off parameter β_t according to Kandasamy et al (2017). For EI and PI, we use the Lipschitz bounds to truncate the range of function values for calculating the improvement and use the LBO variants TEI and TPI, respectively. For UCB and TS, we use the accept-reject strategy and evaluate the LBO variants AR-UCB and AR-TS, respectively. We chose the hyper-parameter c (which controls the extent of over-estimating the Lipschitz constant) on the Rosenbrock-4D function and use the best value of c for all the other datasets and acquisition functions for both BO and LBO.

Results: To make the results easier to read, we divide the results into the following groups:

1. LBO provides huge improvements over BO, shown in Figure 2. Overall, this represents of all the test cases.

2. LBO provides improvements over BO, shown in Figure 3(a). Overall, this represents of all the test cases.

3. LBO performs similar to BO, shown in Figure 3(b). Overall, this represents of all the test cases.

4. LBO performs slightly worse than BO, shown in Figure 3(c). Overall, this represents of all the test cases.

A comparison of the performance across different acquisition functions (for both BO and LBO) on some of the functions is shown in Figure 4, where we also show an example of UCB where β_t is misspecified. The plots for all functions and methods are available in Appendix C. From these experiments, we can observe:

• LBO can potentially lead to large gains in performance across acquisition functions and datasets, particularly for TS.

• Across datasets, we observe that the gains for EI are relatively small, occasionally large for PI and UCB, and consistently large for TS. This can be explained as follows: using EI results in under-exploration of the search space, a fact that has been consistently observed and even theoretically proven by Qin et al (2017). As a result, BO does not tend to explore “bad” regions when using EI, which results in smaller gains from LBO (though this under-exploration can itself be a weakness of EI).

• TS suffers from exactly the opposite problem: it results in high variance leading to over-exploration of the search space and poor performance. This can be observed in Figures 2(a), 2(b) and 2(c), where the performance of TS is near random. This has also been observed and noted by Shahriari et al (2016). For the discrete multi-armed bandit case, Chapelle and Li (2011) multiply the obtained variance estimate by a small number to discourage over-exploration and show that it leads to better results. LBO offers a more principled way of obtaining this same effect and consequently results in making TS more competitive with the other acquisition functions.

• The only functions where LBO slightly hurts are Rosenbrock-4D and Goldstein with UCB and PI.

• For Michalewicz-5D (Figure 4(a)), we see that there is no gain for EI, PI, or UCB. However, the gain is huge for TS. In fact, even though TS is the worst performing acquisition function on this dataset, its LBO variant AR-TS gives the best performance across all methods. This demonstrates the possible gain that can be obtained from using mixed acquisition functions.

• We observe a similar trend in Figure 4(b), where LBO improves TS from near-random performance to being competitive with the best-performing methods (while it does not adversely affect the methods that already perform well).

• For the cases where BO performs slightly better than LBO, we notice that using the true L provides comparable performance to BO, so the problem can be narrowed down to finding a good estimate of L.

• Figure 4(c) shows examples where LBO saves BO with UCB when the parameter β_t is chosen too large. In this case BO performs near random, but using LBO leads to better performance than random search.

In any case, our experiments indicate that LBO methods rarely hurt the performance of the original acquisition function. Since they have minimal computational or memory requirements and are simple to implement, these experiments support using the Lipschitz bounds.

## 5 Related work

The Lipschitz condition has been used with BO under different contexts in two previous works (González et al, 2016; Sui et al, 2015). The aim of Sui et al (2015) is to design a “safe” BO algorithm. They assume knowledge of the true Lipschitz constant and exploit Lipschitz continuity to construct a safety threshold in order to construct a “safe” region of the parameter space. This is different from our goal of improving the performance of existing BO methods, and it also differs in that we estimate the Lipschitz constant as we run the algorithm. On the other hand, González et al (2016) used Lipschitz continuity to model interactions between a batch of points chosen simultaneously in every iteration of BO (referred to as “batch” Bayesian optimization). This contrasts with our work, where we aim to improve the performance of existing sequential algorithms (it is possible that our ideas could be used in their framework).

## 6 Discussion

In this paper, we have proposed simple ways to combine Lipschitz inequalities with some of the most common BO methods. Our experiments show that this often gives a performance gain, and in the worst case it performs similarly to a standard BO method. Although we have focused on four of the simplest acquisition functions, it seems that these inequalities could be used within other acquisition functions. Further, we expect that the Lipschitz inequalities could also be used in other settings like BO with constraints (Gelbart et al, 2014; Hernández-Lobato et al, 2016; Gardner et al, 2014), BO methods based on other model classes like neural networks (Snoek et al, 2015; Hutter et al, 2011), and methods that evaluate more than one point at a time (Ginsbourger et al, 2010; Wang et al, 2016). Finally, there has been recent interest in first-order Bayesian optimization methods (Ahmed et al, 2016; Wu et al, 2017). If the gradient is Lipschitz continuous then it is possible to use the descent lemma (Bertsekas, 2016) to obtain Lipschitz bounds that depend on both function values and gradients.

## References

• Ahmed et al (2016) Ahmed MO, Shahriari B, Schmidt M (2016) Do we need “harmless” Bayesian optimization and “first-order” Bayesian optimization? NIPS Workshop on Bayesian Optimization
• Bardenet and Kégl (2010) Bardenet R, Kégl B (2010) Surrogating the surrogate: accelerating gaussian-process-based global optimization with a mixture cross-entropy algorithm. In: International Conference on Machine Learning (ICML), Omnipress, pp 55–62
• Bertsekas (2016) Bertsekas DP (2016) Nonlinear Programming, 3rd edn. MIT
• Bull (2011) Bull AD (2011) Convergence rates of efficient global optimization algorithms. Journal of Machine Learning Research 12(Oct):2879–2904
• Bunin and François (2016) Bunin GA, François G (2016) Lipschitz constants in experimental optimization. arXiv preprint arXiv:160307847
• Chapelle and Li (2011) Chapelle O, Li L (2011) An empirical evaluation of Thompson sampling. In: Advances in Neural Information Processing Systems (NIPS), pp 2249–2257
• Eric et al (2008) Eric B, Freitas ND, Ghosh A (2008) Active preference learning with discrete choice data. In: Advances in Neural Information Processing Systems (NIPS), pp 409–416
• Falkner et al (2017) Falkner S, Klein A, Hutter F (2017) Combining Hyperband and Bayesian optimization. In: NIPS Workshop on Bayesian Optimization
• Gardner et al (2014) Gardner JR, Kusner MJ, Xu ZE, Weinberger KQ, Cunningham JP (2014) Bayesian optimization with inequality constraints. In: International Conference on Machine Learning (ICML), pp 937–945
• Gelbart et al (2014) Gelbart MA, Snoek J, Adams RP (2014) Bayesian optimization with unknown constraints. arXiv preprint arXiv:14035607
• Ginsbourger et al (2010) Ginsbourger D, Le Riche R, Carraro L (2010) Kriging is well-suited to parallelize optimization. In: Computational intelligence in expensive optimization problems, Springer, pp 131–162
• Golovin et al (2017) Golovin D, Solnik B, Moitra S, Kochanski G, Karro J, Sculley D (2017) Google vizier: A service for black-box optimization. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, pp 1487–1495
• González et al (2016) González J, Dai Z, Hennig P, Lawrence N (2016) Batch Bayesian optimization via local penalization. In: International Conference on Artificial Intelligence and Statistics (AISTATS), pp 648–657

• Hendrix et al (2010) Hendrix EM, Boglárka G, et al (2010) Introduction to nonlinear and global optimization. Springer
• Hennig and Schuler (2012) Hennig P, Schuler CJ (2012) Entropy search for information-efficient global optimization. Journal of Machine Learning Research 13(Jun):1809–1837
• Hernández-Lobato et al (2014) Hernández-Lobato JM, Hoffman MW, Ghahramani Z (2014) Predictive entropy search for efficient global optimization of black-box functions. In: Advances in Neural Information Processing Systems (NIPS), pp 918–926
• Hernández-Lobato et al (2016) Hernández-Lobato JM, Gelbart MA, Adams RP, Hoffman MW, Ghahramani Z (2016) A general framework for constrained Bayesian optimization using information-based search. Journal of Machine Learning Research 17(1):5549–5601
• Hoffman and Shahriari (2014) Hoffman MW, Shahriari B (2014) Modular mechanisms for Bayesian optimization. In: NIPS Workshop on Bayesian Optimization, pp 1–5
• Hutter et al (2011) Hutter F, Hoos HH, Leyton-Brown K (2011) Sequential model-based optimization for general algorithm configuration. In: International Conference on Learning and Intelligent Optimization, Springer, pp 507–523
• Jamil and Yang (2013) Jamil M, Yang XS (2013) A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation 4(2):150–194
• Jones et al (1993) Jones DR, Perttunen CD, Stuckman BE (1993) Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79(1):157–181
• Jones et al (1998) Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13(4):455–492
• Kaelbling and Lozano-Pérez (2017) Kaelbling LP, Lozano-Pérez T (2017) Pre-image backchaining in belief space for mobile manipulation. In: Robotics Research, Springer, pp 383–400
• Kandasamy et al (2017) Kandasamy K, Krishnamurthy A, Schneider J, Poczos B (2017) Asynchronous parallel Bayesian optimisation via thompson sampling. arXiv preprint arXiv:170509236
• Kushner (1964) Kushner HJ (1964) A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering 86(1):97–106
• Li et al (2016) Li L, Jamieson K, DeSalvo G, Rostamizadeh A, Talwalkar A (2016) Efficient hyperparameter optimization and infinitely many armed bandits. arXiv preprint arXiv:160306560

• Lizotte et al (2012) Lizotte DJ, Greiner R, Schuurmans D (2012) An experimental methodology for response surface optimization methods. Journal of Global Optimization 53(4):699–736
• Mahendran et al (2012) Mahendran N, Wang Z, Hamze F, De Freitas N (2012) Adaptive MCMC with Bayesian optimization. In: International Conference on Artificial Intelligence and Statistics (AISTATS), pp 751–760
• Malherbe and Vayatis (2017) Malherbe C, Vayatis N (2017) Global optimization of Lipschitz functions. In: International Conference on Machine Learning (ICML), pp 2314–2323, URL http://proceedings.mlr.press/v70/malherbe17a.html
• Martinez-Cantin et al (2007) Martinez-Cantin R, de Freitas N, Doucet A, Castellanos JA (2007) Active policy learning for robot planning and exploration under uncertainty. In: Robotics: Science and Systems, vol 3, pp 321–328
• Močkus (1975) Močkus J (1975) On Bayesian methods for seeking the extremum. In: Optimization Techniques IFIP Technical Conference, Springer, pp 400–404
• Pintér (1991) Pintér JD (1991) Global optimization in action. Scientific American 264:54–63
• Piyavskii (1972) Piyavskii S (1972) An algorithm for finding the absolute extremum of a function. USSR Computational Mathematics and Mathematical Physics 12(4):57–67
• Qin et al (2017) Qin C, Klabjan D, Russo D (2017) Improving the expected improvement algorithm. In: Advances in Neural Information Processing Systems (NIPS), pp 5387–5397
• Rasmussen and Williams (2006) Rasmussen CE, Williams CK (2006) Gaussian processes for machine learning. MIT Press
• Rios and Sahinidis (2013) Rios LM, Sahinidis NV (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization 56(3):1247–1293
• Shahriari et al (2014) Shahriari B, Wang Z, Hoffman MW, Bouchard-Côté A, de Freitas N (2014) An entropy search portfolio. In: NIPS Workshop on Bayesian Optimization
• Shahriari et al (2016) Shahriari B, Swersky K, Wang Z, Adams RP, de Freitas N (2016) Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104(1):148–175
• Shubert (1972) Shubert BO (1972) A sequential method seeking the global maximum of a function. SIAM Journal on Numerical Analysis 9(3):379–388
• Snoek et al (2012) Snoek J, Larochelle H, Adams RP (2012) Practical Bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems (NIPS)
• Snoek et al (2015) Snoek J, Rippel O, Swersky K, Kiros R, Satish N, Sundaram N, Patwary M, Prabhat M, Adams R (2015) Scalable Bayesian optimization using deep neural networks. In: International Conference on Machine Learning (ICML), pp 2171–2180
• Srinivas et al (2010) Srinivas N, Krause A, Kakade SM, Seeger M (2010) Gaussian process optimization in the bandit setting: No regret and experimental design. In: International Conference on Machine Learning (ICML), pp 1015–1022
• Stein (2012) Stein ML (2012) Interpolation of spatial data: some theory for kriging. Springer Science & Business Media

• Sui et al (2015) Sui Y, Gotovos A, Burdick J, Krause A (2015) Safe exploration for optimization with Gaussian processes. In: International Conference on Machine Learning (ICML), pp 997–1005
• Thompson (1933) Thompson WR (1933) On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25(3/4):285–294
• Villemonteix et al (2009) Villemonteix J, Vazquez E, Walter E (2009) An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization 44(4):509
• Wang et al (2016) Wang J, Clark SC, Liu E, Frazier PI (2016) Parallel Bayesian global optimization of expensive functions. arXiv preprint arXiv:160205149
• Wang and Jegelka (2017) Wang Z, Jegelka S (2017) Max-value entropy search for efficient Bayesian optimization. In: International Conference on Machine Learning (ICML)
• Wu et al (2017) Wu J, Poloczek M, Wilson AG, Frazier P (2017) Bayesian optimization with gradients. In: Advances in Neural Information Processing Systems (NIPS), pp 5267–5278


## Appendix A Proof for Lipschitz constant estimation

Let L^* be the minimum possible Lipschitz constant for the function f, and let x^* be a globally-optimal solution. Here we consider the number of iterations t we need before we have a point x_t satisfying

    f(x_t) − f(x^*) ≤ ε,   (11)

for some small ε > 0.

By the Lipschitz inequality we have

    |f(x) − f(x^*)| ≤ L^* ‖x − x^*‖,

for any x. If we assume that x^* is at least (ε/L^*)-distance away from the boundary, then this inequality implies that there exists a hyper-sphere around x^* of volume O((ε/L^*)^d) where all values satisfy (11). Using that the domain is compact (and thus its total volume is bounded), if we choose a random x_t then the probability that it lands inside this ball is O((ε/L^*)^d). (If x^* is near the boundary, then the volume of the intersection of this ball with the domain is still O((ε/L^*)^d) and the argument is unchanged.) Since we sample the x_t independently, this probability gives the chance that we successfully find an approximate solution on each iteration. The expected number of iterations before we have such a “success” is thus the mean of a geometric random variable with this probability, which means we need O((L^*/ε)^d) iterations in expectation to find a point satisfying (11). (Instead of the “number of iterations to reach accuracy ε”, we could equivalently state the result in terms of the “expected error at iteration t” (simple regret) by inverting the relationship between ε and t. This would give an expected error on iteration t of O(L^*/t^{1/d}).)

Unfortunately, if we use an estimate L_t of L^* instead of an L satisfying the Lipschitz inequality, we could reject an approximate solution. However, if L_t grows with t then eventually it is sufficiently large that we will not reject an approximate solution (unless we already have an ε-optimal solution). Thus, a crude bound on the expected number of iterations before we find a solution with accuracy ε is given by t' + O((L^*/ε)^d), where t' is the first iteration beyond which we always have L_t ≥ L^*. Thus, if we choose the sequence L_t such that t' = O((L^*/ε)^d), then LO is harmless in the sense that it requires the same expected number of iterations as random guessing. A simple example of a sequence of L_t values satisfying this property would be to choose L_t = log(log(t)), which grows extremely slowly (so that t' is dominated by O((L^*/ε)^d) for small ε and non-trivial L^* or d). Larger sequences would imply a smaller t' and hence would also be harmless.
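To make the quantities in this argument concrete, the following is a minimal Python sketch (our illustration, not the exact implementation from the paper) of a sampled-slope estimate of the Lipschitz constant together with the Lipschitz upper and lower bounds that LBO builds from the evaluated points; the function names `lipschitz_estimate` and `lipschitz_bounds` are illustrative:

```python
import numpy as np

def lipschitz_estimate(X, f_vals):
    """Estimate the Lipschitz constant from observed points as the largest
    observed slope: max over pairs of |f(x_i) - f(x_j)| / ||x_i - x_j||."""
    L_hat = 0.0
    n = len(f_vals)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(X[i] - X[j])
            if dist > 0:
                L_hat = max(L_hat, abs(f_vals[i] - f_vals[j]) / dist)
    return L_hat

def lipschitz_bounds(x, X, f_vals, L):
    """Lipschitz upper/lower bounds at a query point x:
    f_u(x) = min_i { f(x_i) + L ||x - x_i|| },
    f_l(x) = max_i { f(x_i) - L ||x - x_i|| }."""
    dists = np.linalg.norm(X - x, axis=1)
    f_u = np.min(f_vals + L * dists)
    f_l = np.max(f_vals - L * dists)
    return f_l, f_u
```

Any L no smaller than the true Lipschitz constant makes these bounds valid everywhere; an underestimate can produce bounds that wrongly exclude good points, which is why the argument above uses a growing estimate.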

## Appendix B Regret Bound

###### Theorem 2.

Let D be a finite decision space and σ be the standard deviation of the noise in the observations. Let L be a positive scalar such that f is L-Lipschitz continuous, and choose β_t as in Lemma 1 below. If we use the AR-UCB algorithm with this L, then the expected cumulative regret can be bounded as follows:

    R(T) ≤ √( (8/log(1 + σ^{−2})) β_T γ_T T )

Here, γ_T refers to the information gain for the selected points and depends on the kernel being used. For the squared exponential kernel, γ_T = O((log T)^{d+1}), and we obtain the following specific bound:

    R(T) ≤ √( (8/log(1 + σ^{−2})) β_T (log T)^{d+1} T )
###### Proof.
By the definition of the Lipschitz bounds, and assuming we know the true Lipschitz constant L, at iteration t we have, for all x,

    f^l_{t−1}(x) ≤ f(x) ≤ f^u_{t−1}(x)   (13)

We now use the following lemma from Srinivas et al (2010):

###### Lemma 1 (Lemma 5.1 in Srinivas et al (2010)).

Let D be a finite decision space, δ ∈ (0,1), and choose β_t = 2 log(|D| π_t / δ), where Σ_{t≥1} π_t^{−1} = 1 and π_t > 0. Then, for all x ∈ D and t ≥ 1, with probability at least 1 − δ,

    |f(x) − μ_{t−1}(x)| ≤ β_t^{1/2} σ_{t−1}(x)   (14)

For the point x_t selected at round t, the following relation holds because of the Accept-Reject condition:

    f^l_{t−1}(x_t) ≤ μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) ≤ f^u_{t−1}(x_t)   (15)

The following holds because of the definition of the UCB rule:

    μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) ≥ μ_{t−1}(x^*) + β_t^{1/2} σ_{t−1}(x^*)   (16)

From Equations 13 and 14,

    f(x^*) ≤ min{ f^u_{t−1}(x^*), μ_{t−1}(x^*) + β_t^{1/2} σ_{t−1}(x^*) }   (17)

From Equations 14 and 15,

    μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) ≤ min{ f(x_t) + 2β_t^{1/2} σ_{t−1}(x_t), f^u_{t−1}(x_t) }   (18)

Let r_t be the instantaneous regret in round t. Then

    r_t = f(x^*) − f(x_t)
        ≤ min{ f^u_{t−1}(x^*), μ_{t−1}(x^*) + β_t^{1/2} σ_{t−1}(x^*) } − f(x_t)   (from Equation 17)
        ≤ min{ f^u_{t−1}(x^*), μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) } − f(x_t)   (from Equation 16)
        = min{ f^u_{t−1}(x^*) − f(x_t), μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) − f(x_t) }   (min{a,b} − c = min{a−c, b−c})
        ≤ μ_{t−1}(x_t) + β_t^{1/2} σ_{t−1}(x_t) − f(x_t)   (min{a,b} ≤ b)
        ≤ min{ f(x_t) + 2β_t^{1/2} σ_{t−1}(x_t), f^u_{t−1}(x_t) } − f(x_t)   (from Equation 18)
        = min{ 2β_t^{1/2} σ_{t−1}(x_t), f^u_{t−1}(x_t) − f(x_t) }   (min{a,b} − c = min{a−c, b−c})

    ⟹ r_t ≤ min{ 2β_t^{1/2} σ_{t−1}(x_t), f^u_{t−1}(x_t) − f^l_{t−1}(x_t) }   (from Equation 13)

Let us now consider the term f^u_{t−1}(x_t) − f^l_{t−1}(x_t):

    f^u_{t−1}(x_t) − f^l_{t−1}(x_t)
        = min_{i∈[t−1]}{ f(x_i) + L‖x_t − x_i‖₂ } − max_{i∈[t−1]}{ f(x_i) − L‖x_t − x_i‖₂ }   (by Equation 6)
        = min_{i∈[t−1]}{ f(x_i) + L‖x_t − x_i‖₂ } + min_{i∈[t−1]}{ −f(x_i) + L‖x_t − x_i‖₂ }   (−max{a,b} = min{−a,−b})
        ≤ min_{i∈[t−1]}{ f(x_i) + L‖x_t − x_i‖₂ − f(x_i) + L‖x_t − x_i‖₂ }   (min{a_i + b_i} ≥ min{a_i} + min{b_i})

    ⟹ f^u_{t−1}(x_t) − f^l_{t−1}(x_t) ≤ 2L min_{i∈[t−1]} ‖x_t − x_i‖₂

From the above equations,

    r_t ≤ min{ 2β_t^{1/2} σ_{t−1}(x_t), 2L min_{i∈[t−1]} ‖x_t − x_i‖₂ }

Let R(T) be the cumulative regret after T rounds. Then

    R(T) = Σ_{t=1}^T r_t ≤ Σ_{t=1}^T min{ 2β_t^{1/2} σ_{t−1}(x_t), 2L min_{i∈[t−1]} ‖x_t − x_i‖₂ }
    ⟹ R(T) ≤ min{ 2 Σ_{t=1}^T β_t^{1/2} σ_{t−1}(x_t), 2L Σ_{t=1}^T min_{i∈[t−1]} ‖x_t − x_i‖₂ }   (Σ_t min{a_t, b_t} ≤ min{Σ_t a_t, Σ_t b_t})

We now bound the first term using another lemma from Srinivas et al (2010), which we restate next:

###### Lemma 2 (Lemma 5.4 in Srinivas et al (2010)).

Choosing β_t as in Lemma 1,

    2 Σ_{t=1}^T β_t^{1/2} σ_{t−1}(x_t) ≤ √(C₁ T β_T γ_T)

where C₁ = 8/log(1 + σ^{−2}). Here γ_T refers to the information gain for the selected points.

Using the above lemma, we obtain the following bound:

    R(T) ≤ min{ √(C₁ T β_T γ_T), 2L Σ_{t=1}^T min_{i∈[t−1]} ‖x_t − x_i‖₂ }
    ⟹ R(T) ≤ √( (8/log(1 + σ^{−2})) β_T γ_T T )

## Appendix C Additional Experimental Results

Below we show the results of all the experiments for all the datasets:

• Figure 5 shows the performance of Random search, BO, and LBO (using both the estimated and the true Lipschitz constant) for the TS acquisition function.

• Figure 6 shows the performance of Random search, BO, and LBO (using both the estimated and the true Lipschitz constant) for the UCB acquisition function.

• Figure 7 shows the performance of Random search, BO, and LBO (using both the estimated and the true Lipschitz constant) for the EI acquisition function.

• Figure 8 shows the performance of Random search, BO, and LBO (using both the estimated and the true Lipschitz constant) for the PI acquisition function.

• Figure 9 shows the performance of BO and LBO using the estimated Lipschitz constant for all acquisition functions.

• Figure 10 shows the performance of Random search, BO, and LBO (using both the estimated and the true Lipschitz constant) for the UCB acquisition function with a very large exploration parameter.