# A Lipschitz Exploration-Exploitation Scheme for Bayesian Optimization

The problem of optimizing unknown, costly-to-evaluate functions has long been studied in the context of Bayesian Optimization. Algorithms in this field aim to find the optimizer of the function by requesting only a few function evaluations at locations carefully selected based on a posterior model. In this paper, we assume the unknown function is Lipschitz continuous. Leveraging the Lipschitz property, we propose an algorithm with a distinct exploration phase followed by an exploitation phase. The exploration phase aims to select samples that shrink the search space as much as possible, while the exploitation phase focuses on the reduced search space and selects samples closest to the optimizer. Considering the Expected Improvement (EI) as a baseline, we empirically show that the proposed algorithm significantly outperforms EI.


## 1 Introduction

In many applications, we would like to optimize an unknown function that is costly to evaluate over a compact input space. Classic optimization methods, such as gradient descent, cannot be applied to this type of problem since they need to evaluate the function frequently. In contrast, Bayesian Optimization (BO) [10, 4] algorithms try to solve this problem with a small number of function evaluations. Bayesian optimization algorithms generally have two key components: 1) a posterior model to predict the output value of the function at any arbitrary input point, and 2) a selection criterion to determine which point to evaluate next.

The first step of a BO algorithm is to learn a posterior probabilistic model over unobserved points of the function. Gaussian processes (GPs) have been widely used in the Bayesian optimization literature as this probabilistic posterior model. A GP models the function output at any unobserved point in the input space as a normal random variable, whose mean and variance depend on the location of the point in relation to a set of given observed samples. Based on the learned posterior model, a selection criterion is then used to choose the next sample to be evaluated. A number of selection criteria have been proposed in the Bayesian optimization literature. They typically work by selecting a point that optimizes some objective function designed to balance exploring unobserved areas against exploiting areas that look promising based on existing observations. Maximum probability of improvement [8, 19] and maximum expected improvement (EI) are two successful examples.

In this paper, we focus on the design of the selection criterion for Bayesian optimization. In particular, we study BO in a sequential setting [10, 15], where the samples are chosen sequentially and a selection is made only after the function evaluations of the previous samples are revealed. We make the mild assumption that the unknown function is Lipschitz continuous. Leveraging the Lipschitz property, we design a selection algorithm that operates in two distinct phases: an exploration phase and an exploitation phase. In the context of Bayesian optimization and bandit problems, exploration generally selects samples from unexplored areas while exploitation focuses on promising areas. In this paper, we introduce a new interpretation of exploration and exploitation.

The exploration phase of the proposed algorithm, at each step, selects a sample that eliminates the largest possible portion of the input space while guaranteeing, with high probability, that the eliminated part does not include the maximizer of the function. Hence, the exploration stage of the algorithm tries to shrink the search space of the function as much as possible. In contrast, the exploitation phase of our algorithm selects the point which is believed to be the closest sample to the optimal point with high probability.

Experimental results over real and synthetic benchmarks indicate that the proposed approach is able to outperform the Expected Improvement (EI) criterion, one of the current state-of-the-art BO selection methods. In particular, we show that our algorithm is better than EI both in terms of the mean and the variance of the performance. We also investigate whether combining our exploration stage with EI can boost the performance of EI. However, the results were negative: sometimes it helps and sometimes it hurts, and on average we observe little to no improvement over EI. This is possibly because our exploration method actively aims to eliminate regions from the input space, and the EI criterion does not take that into consideration when selecting samples.

The remainder of the paper is organized as follows. In Section 2, we motivate the use of exploration-exploitation Bayesian optimization by analyzing the behavior of EI. Section 3 introduces our algorithm and provides insights into both theoretical and practical aspects of the algorithm. Experimental evaluation of our algorithm is shown in Section 4. Finally, the paper is concluded in Section 5.

## 2 Motivating Observation

In this section, we motivate our approach by revealing a key observation about the well-known Expected Improvement (EI) algorithm. The original EI is defined as

$$\mathrm{EI}(x)=\mathbb{E}\big[(f(x)-y_{\max})\,\mathbb{I}\{f(x)-y_{\max}>0\}\big], \tag{1}$$

where $\mathbb{I}\{\cdot\}$ is the indicator function. Hence, EI measures the expected improvement of the choice of $x$ over $y_{\max}$, the current maximum function evaluation over the observed samples. Using a Gaussian process (GP) as the posterior model of the unknown function, the EI objective can be represented by

$$\mathrm{EI}(x\,|\,O)=(\mu_{x|O}-y_{\max})\,\Phi\!\left(\frac{\mu_{x|O}-y_{\max}}{\sigma_{x|O}}\right)+\sigma_{x|O}\,\phi\!\left(\frac{\mu_{x|O}-y_{\max}}{\sigma_{x|O}}\right), \tag{2}$$

where $\mu_{x|O}$ and $\sigma_{x|O}$ are the mean and standard deviation assigned to the point $x$ by the GP, and $\Phi$ and $\phi$ are the standard Gaussian CDF and PDF, respectively. Here, $O$ is the set of observed samples with their function evaluations, and $y_{\max}$ is defined as the largest observed function value. Further, the means and variances are defined as follows:

$$\mu_{x|O} = k(x,x_O)\,k(x_O,x_O)^{-1} f(x_O), \qquad \sigma^2_{x|O} = k(x,x)-k(x,x_O)\,k(x_O,x_O)^{-1}\,k(x_O,x),$$

where $k(\cdot,\cdot)$ is some kernel function. In this paper, we consider the Gaussian kernel $k(x_1,x_2)=\exp(-\|x_1-x_2\|^2/(2\ell^2))$ with kernel width $\ell$.
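As an illustration, the posterior formulas above and the EI criterion of equation (2) can be sketched in a few lines of NumPy/SciPy. This is a minimal noise-free GP; the kernel width of 0.3 and the toy observations in the usage note are arbitrary choices for the example, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel(A, B, width=0.3):
    """k(x1, x2) = exp(-||x1 - x2||^2 / (2 * width^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def gp_posterior(X, X_obs, y_obs, width=0.3, jitter=1e-10):
    """Posterior mean and standard deviation of a noise-free GP at rows of X."""
    K = gaussian_kernel(X_obs, X_obs, width) + jitter * np.eye(len(X_obs))
    k_star = gaussian_kernel(X, X_obs, width)
    mu = k_star @ np.linalg.solve(K, y_obs)
    # k(x, x) = 1 for this kernel
    var = 1.0 - np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
    return mu, np.sqrt(np.maximum(var, 0.0))

def expected_improvement(X, X_obs, y_obs, width=0.3):
    """EI(x | O) as in equation (2)."""
    mu, sigma = gp_posterior(X, X_obs, y_obs, width)
    y_max = y_obs.max()
    s = np.maximum(sigma, 1e-12)  # guard against sigma = 0 at observed points
    z = (mu - y_max) / s
    return np.maximum((mu - y_max) * norm.cdf(z) + s * norm.pdf(z), 0.0)
```

For instance, with observations at $x=0$ and $x=1$, EI is (near) zero at the observed points and positive in between, where the posterior is uncertain.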

EI has been widely used and studied; however, there has always been a concern about how EI balances exploration and exploitation. The main reason for this concern is that, even though the asymptotic convergence of EI is guaranteed under certain conditions, EI tries to exploit the available information and can potentially request a lot of samples if it hits a locally optimal region, while we have a limited number of experiments. There have been several attempts in the literature to address this concern, with varying degrees of success, which we briefly discuss here.

• Considering the original definition of EI, researchers have proposed to replace $y_{\max}$ with a smaller value to make EI more exploitative, or with a larger value to make it more explorative. However, this approach has not seen much empirical success: it has been shown that starting with a large replacement value (to be explorative in the beginning) and cooling it down (to become more and more exploitative) makes little or no difference in the performance of EI.

• On a separate line of work, it has been proposed to consider the surrogate objective

$$\mathrm{EI}_\xi(x)=\mathbb{E}\big[(f(x)-y_{\max})^{\xi}\,\mathbb{I}\{f(x)-y_{\max}>0\}\big].$$

Depending on the value of $\xi$, this objective either concentrates on improving over $y_{\max}$ (exploiting mode) or spreads its attention to uncertain areas (exploration mode). This method is very sensitive to small changes in $\xi$, and except for very specific setups there is no systematic way to choose $\xi$, which makes the method nearly impossible to use in practice.

• The third proposal is to have a “random” exploration phase preceding EI. In this approach, we take a number of random samples before switching to EI. We analyze this method in Fig. 1. For a fixed budget $n_b$, we run experiments as follows: first we consider the case where there is 1 random sample followed by $n_b-1$ samples selected by the EI criterion, next we consider the case where there are 2 random samples followed by $n_b-2$ EI samples, and so on. The purpose of this investigation is to understand whether exploring with random samples prior to selecting with EI can improve the performance of EI, and if so, how much exploration is necessary. We run these experiments on a number of different functions introduced in Section 4. They reveal that “random” exploration never helps EI, since the regret monotonically increases as we increase the number of random samples. One possible explanation for this behavior is that the values of the function are highly correlated; hence, uniform sampling does not efficiently capture the skewness of the data points.

Figure 1: Plot of regret versus the number of random exploration samples for the EI algorithm. For a fixed budget nb, we run a number of experiments as follows: first we consider the case where there is 1 random sample followed by nb−1 EI samples, next we consider the case where there are 2 random samples followed by nb−2 EI samples, and so on. For 2D and 3D functions, we let nb=15, and for high-dimensional functions, we let nb=35. This result shows that the best EI performance occurs when we do not use random exploration.

Based on the existing literature as well as our empirical investigation of EI discussed above, we would like to know whether it is possible to design an algorithm that operates in two naturally defined phases of exploration and exploitation and achieves consistently better performance than EI. We devote the next section to answering this question and introducing our proposed algorithm.

## 3 Finite Horizon Bayesian Optimization

Being unable to balance exploration and exploitation, EI might perform poorly, especially when the query budget is small. In this section, we propose a two-phase exploration/exploitation algorithm that outperforms EI through smarter exploration and exploitation.

### 3.1 Exploration

Generally, a good exploration algorithm should be able to shrink the search space, so that we are left with a small region to focus on during the exploitation stage. Let $\mathcal{X}\subset\mathbb{R}^d$ be the Cartesian product of $d$ closed intervals. Suppose the unknown function $f:\mathcal{X}\to\mathbb{R}$ is a Lipschitz function over $\mathcal{X}$ with constant $L$; that is, for all $x_1,x_2\in\mathcal{X}$, we have

$$|f(x_1)-f(x_2)|\le L\,\|x_1-x_2\|_2.$$

Notice that if the function is not Lipschitz, then there is no hope that we can find the global optimum of $f$ even with countably infinite evaluations. Thus, Lipschitz continuity is not a strong assumption. Moreover, functions with larger $L$ are harder to optimize since they change more abruptly over the space.

For any point $x\in\mathcal{X}$, let $r_x=(M-f(x))/L$ be the radius associated with $x$, where $M=\max_{x\in\mathcal{X}}f(x)$. By the Lipschitz continuity assumption, we know that $x^*\notin S(x,r_x)$, where $S(x,r_x)$ is the set of all points strictly inside the sphere (or circle) of radius $r_x$ centered at $x$ (a single point if $r_x=0$); otherwise, the Lipschitz assumption would be violated. This means that if we have a sample at point $x$, then we do not need any more samples inside $S(x,r_x)$.
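A one-dimensional sketch of this elimination rule may help. The function, its Lipschitz constant, and its maximum value below are hypothetical choices for illustration; the point is that no point strictly inside the sphere of radius $r_x$ can attain $M$.

```python
import numpy as np

L_CONST = 2.0                  # assumed Lipschitz constant of f
M = 1.0                        # assumed global maximum value of f
f = lambda x: np.sin(2.0 * x)  # |f'(x)| <= 2 and max f = 1, consistent with L_CONST, M

x0 = 0.2
r = (M - f(x0)) / L_CONST      # radius of the eliminated interval around x0

# every point strictly inside (x0 - r, x0 + r) satisfies f(x) < M,
# so the maximizer cannot lie strictly inside the sphere
grid = np.linspace(x0 - r + 1e-6, x0 + r - 1e-6, 1001)
assert np.all(f(grid) < M)
```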

The expected value of $r_x$ is $(M-\mu_{x|O})/L$. Since $f(x)$ is a normal random variable with mean $\mu_{x|O}$ and standard deviation $\sigma_{x|O}$, using the Hoeffding inequality, for all $\epsilon>0$ we have

$$P\!\left[r_x < \frac{|M-\mu_{x|O}|}{L}-\epsilon\right] \le \exp\!\left(-\frac{2\epsilon^2 L^2}{\sigma_{x|O}^2}\right).$$

Replacing $\epsilon$ with $1.5\,\sigma_{x|O}/L$, the above inequality entails that with high probability ($\approx 99\%$), $r_x \ge \big(|M-\mu_{x|O}|-1.5\,\sigma_{x|O}\big)/L$. Hence, a “good” exploration algorithm should try to find the $x$ that maximizes this lower bound on $r_x$. This choice of $x$ will remove a large volume of points from the search space. Note, however, that if $x$ is close to the boundaries of $\mathcal{X}$, then it might be the case that most of the volume of the sphere lies outside $\mathcal{X}$. Also, the sphere associated with $x$ might have significant overlap with the spheres of points that have already been selected. To fix this issue, we pick the point whose sphere has the largest intersection, in terms of volume, with the unexplored search space. The pseudo code of this method is described in Algorithm 1, which we refer to as the Next Best exploRative Sample (NBRS) algorithm. NBRS achieves optimal exploration in the sense that it maximizes the expected explored volume.

The value of $|M-\mu_{x|O}|-1.5\,\sigma_{x|O}$ might be negative, especially for large values of $\sigma_{x|O}$. This artifact happens at points that are “far” from the previously observed samples. To prevent or minimize this, we need to make sure that the observed samples affect the mean and variance of all points in the space. For example, if we use the Gaussian kernel for exploration, then we need to choose the kernel width large enough that each observed sample affects all the points in the space. If we pick a small kernel width, then the exploration algorithm starts by exploring around the previous samples and extends the explored area gradually to reach the other side of the search space. This strategy is not optimal if we have a limited number of samples for exploration.

To implement NBRS, we need to maximize the volume

$$g(x)=\mathrm{Vol}\!\left(D_t \cap S\!\left(x,\ \frac{|M-\mu_{x|O}|-1.5\,\sigma_{x|O}}{L}\right)\right),$$

where $D_t$ represents the current unexplored input space. To evaluate $g(x)$, we take a large number of points inside the sphere uniformly at random. Then, for each point, we check whether it crosses the borders of $\mathcal{X}$ or falls into the spheres of previously observed samples. If not, we count that point as a newly explored point. Finally, we estimate $g(x)$ by the fraction of sampled points that are newly explored, multiplied by the volume of the sphere.

To optimize $g$, one could use a deterministic, derivative-free optimizer such as DIRECT. The problem is that DIRECT only optimizes Lipschitz continuous functions, and $g$ is not necessarily Lipschitz continuous. In our implementation, we instead take a large number of points inside $D_t$, evaluate $g$ at those points, and pick the maximum. This method might be slower than DIRECT, but it avoids the inaccurate results of DIRECT, especially when $D_t$ describes a small region.
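The Monte Carlo estimate of $g(x)$ described above can be sketched as follows. This is a simplified version in which the unexplored space $D_t$ is represented implicitly by the bounding box minus the previously explored spheres; all names are illustrative, not from the paper's pseudo code.

```python
import math
import numpy as np

def radius_lower_bound(mu, sigma, M, L):
    """High-probability lower bound on r_x = (M - f(x)) / L from Section 3.1."""
    return (np.abs(M - mu) - 1.5 * sigma) / L

def explored_volume(center, radius, prev_centers, prev_radii, bounds,
                    n_mc=2000, seed=0):
    """Monte Carlo estimate of g(x): the volume of S(center, radius) lying
    inside the box `bounds` (shape (d, 2)) and outside previous spheres."""
    if radius <= 0.0:
        return 0.0
    rng = np.random.default_rng(seed)
    d = len(center)
    # sample uniformly in the unit ball by rejection from the bounding cube
    pts = rng.uniform(-1.0, 1.0, size=(8 * n_mc, d))
    pts = pts[(pts ** 2).sum(axis=1) <= 1.0][:n_mc]
    pts = np.asarray(center) + radius * pts
    new = np.all((pts >= bounds[:, 0]) & (pts <= bounds[:, 1]), axis=1)
    for c, r in zip(prev_centers, prev_radii):
        new &= ((pts - np.asarray(c)) ** 2).sum(axis=1) > r ** 2
    ball_vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * radius ** d
    return new.mean() * ball_vol
```

NBRS then evaluates this estimate at many random candidates in the unexplored region and keeps the maximizer.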

### 3.2 Exploitation

In the exploitation phase of the algorithm, we would like to use the information gained in the exploration phase to find the optimal point of $f$. Suppose we have explored the search space and we now want to find $x^*=\arg\max_{x\in\mathcal{X}} f(x)$. In order to exploit, we would like to find points whose associated sphere is small. The reason is that if $r_x$ is small enough, then by local strong convexity of $f$ around $x^*$, for some constant $\kappa$ we have

$$\frac{\kappa}{2}\,\|x-x^*\|_2^2 \le M-f(x) = L\,r_x,$$

so a point with a small radius must be close to $x^*$.

Following the argument in Section 3.1, we estimate $r_x$ by its mean $(M-\mu_{x|O})/L$. By the Hoeffding inequality, for all $\epsilon>0$, we have

$$P\!\left[r_x > \frac{|M-\mu_{x|O}|}{L}+\epsilon\right] \le \exp\!\left(-\frac{2\epsilon^2 L^2}{\sigma_{x|O}^2}\right).$$

Similarly, replacing $\epsilon$ with $1.5\,\sigma_{x|O}/L$, the above inequality entails that with high probability ($\approx 99\%$), $r_x \le \big(|M-\mu_{x|O}|+1.5\,\sigma_{x|O}\big)/L$. Hence, a “good” exploitation algorithm should try to find the point that minimizes this upper bound on $r_x$. This choice of $x$ yields the point expected to be closest to $x^*$. We present the pseudo code of this method in Algorithm 2.

The optimization in Algorithm 2 is nothing but minimizing

$$h(x)=\frac{|M-\mu_{x|O}|+1.5\,\sigma_{x|O}}{L}.$$

To optimize $h$, we again take a large number of points in $D_t$ (the current unexplored space) uniformly at random, evaluate $h$ on those points, and pick the minimum.
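The exploitation step then reduces to the following candidate-screening routine. This is a sketch; the candidate set and the posterior values fed to it are assumed to come from the GP posterior of Section 2.

```python
import numpy as np

def next_best_exploit(candidates, mu, sigma, M, L):
    """Select the candidate minimizing the upper bound on r_x:
    h(x) = (|M - mu_x| + 1.5 * sigma_x) / L  (Section 3.2)."""
    h = (np.abs(M - mu) + 1.5 * sigma) / L
    best = int(np.argmin(h))
    return candidates[best], float(h[best])
```

For example, a candidate with posterior mean close to $M$ and low uncertainty wins over one with the same mean but high uncertainty.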

The main algorithm consists of an initial exploration phase followed by exploitation. Notice that we use a GP as an estimate of the unknown function, and our method, like EI, relies heavily on the quality of this estimate. At a high level, if the function is very complex, i.e., has a large Lipschitz constant $L$, then we need more exploration to obtain a good GP fit. Small values of $L$ correspond to flatter functions that are easier to optimize. Thus, in general, we expect the number of exploration steps to scale up with $L$. As a rule of thumb, for the moderate Lipschitz constants of the functions we normally deal with, we spend 20% of our budget in exploration and the rest in exploitation.

We use different kernel widths for the exploration and exploitation phases. In the case of exploration for complex functions, if we have enough budget (and hence, enough explorative samples), the kernel width can be set to a small value to fit a better local GP model. However, if we do not have enough budget, we need to take the kernel width to be large. In the case of exploitation, we pick the kernel width under which EI achieves its best performance.

Note that the choice of $L$ and $M$ plays a crucial role in this algorithm. If we pick $L$ larger than the true Lipschitz constant, then the radii of our spheres shrink, and hence we might need more budget to achieve a certain performance. Choosing $L$ smaller than the true Lipschitz constant is dangerous, since it makes the spheres larger and increases the chance of including the optimal point in a sphere and hence removing it. Thus, it is better to choose $L$ slightly larger than our estimate of the true Lipschitz constant, to be on the safe side.

The method is less sensitive to the choice of $M$, since the derivative of the radius with respect to $M$ is proportional to $1/L$. Thus, as long as we do not overestimate $M$ significantly, the $1/L$ factor prevents the spheres from becoming very large (and from including and removing the optimal point). Small values of $M$ make the spheres smaller, and hence if we underestimate $M$, we will need more budget to achieve a certain performance. However, if $M$ is significantly (proportionally to $L$) smaller than the true maximum of the function, then the algorithm will look for a point that achieves the value $M$ and hence will perform poorly.

## 4 Experimental Results

Figure 2: The contour plots for the four 2-dimensional proposed benchmarks.

In this section, we compare our algorithm with EI under different scenarios for different functions. We consider six well-known synthetic benchmark functions:

• Cosines and Rosenbrock

• Hartman(3,6)

• Shekel

• Michalewicz

The mathematical expressions of these functions are shown in Table 1. Moreover, we use two benchmarks derived from real-world applications:

• Hydrogen

• Fuel Cell

The contour plots of these two benchmarks, along with the Cosines and Rosenbrock benchmarks, are shown in Fig. 2. The Fuel Cell benchmark is based on optimizing the electricity output of a microbial fuel cell by modifying some nano-structure properties of the anodes. In particular, the inputs that we try to adjust are the average area and average circularity of the nano tubes, and the output that we try to maximize is the power output of the fuel cell. We fit a regression model on a set of observed samples to simulate the underlying function for evaluation. The Hydrogen benchmark is based on maximizing the hydrogen production of a particular bacterium by varying the pH and nitrogen levels of its growth medium. A GP is fitted to a set of observed samples to simulate the underlying function. We fix a Lipschitz constant shared by all of the benchmarks, except for Cosines, Michalewicz, and Rosenbrock, which use their own constants. For the sake of comparison, we consider normalized versions of all these functions, and hence the maximum value $M$ is the same in all cases. As mentioned previously, we spend 20% of the budget on exploration and 80% on exploitation.

### 4.1 Comparison to EI

In the first set of experiments, we would like to compare our algorithm with the best possible performance of EI. For each benchmark, we search over different values of the kernel width and find the one that optimizes EI’s performance. Fig. 1 is plotted using these optimal kernel widths and shows that the best performance of EI happens when we take only one random sample from a given budget. This performance is then used as the baseline for comparison in Table 2. In addition to EI, we introduce a new version of EI, called EI$^M$. Instead of taking the expectation of the improvement from $y_{\max}$ to infinity (equation (2)), we calculate the expectation of the improvement from $y_{\max}$ to $M$, assuming $M$ is given. This simple change decreases the level of exploration of EI and makes its behavior more exploitative than explorative. Using a GP as our posterior model, the following lemma gives the closed form of EI$^M$. The proof is in the appendix.

###### Lemma 4.1.

Let $u_1=\frac{y_{\max}-\mu_{x|O}}{\sigma_{x|O}}$ and $u_2=\frac{M-\mu_{x|O}}{\sigma_{x|O}}$; then

$$\mathrm{EI}^M(x)=\mathbb{E}\big[(f(x)-y_{\max})\,\mathbb{I}\{0\le f(x)-y_{\max}\le M-y_{\max}\}\big] = \sigma_{x|O}\big(u_1\Phi(u_1)-u_1\Phi(u_2)+\phi(u_1)-\phi(u_2)\big). \tag{3}$$
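The closed form of Lemma 4.1 can be sanity-checked against brute-force Monte Carlo sampling of the truncated improvement. The numbers below are arbitrary test values; note the $-\phi(u_2)$ boundary term that truncating the improvement at $M$ introduces.

```python
import numpy as np
from scipy.stats import norm

def truncated_ei(mu, sigma, y_max, M):
    """EI^M(x) = E[(f - y_max) * 1{0 <= f - y_max <= M - y_max}], f ~ N(mu, sigma^2)."""
    u1 = (y_max - mu) / sigma
    u2 = (M - mu) / sigma
    return sigma * (u1 * norm.cdf(u1) - u1 * norm.cdf(u2)
                    + norm.pdf(u1) - norm.pdf(u2))

# brute-force Monte Carlo estimate of the same expectation
rng = np.random.default_rng(0)
mu, sigma, y_max, M = 0.3, 0.5, 0.4, 1.0
f = rng.normal(mu, sigma, size=2_000_000)
mc = np.mean((f - y_max) * ((f >= y_max) & (f <= M)))
```

With these values the closed form is roughly 0.087, and the Monte Carlo estimate should agree to within a few times $10^{-4}$.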

In light of the results of Fig. 1, we are also interested in whether our exploration algorithm can be used to improve the performance of EI. To this end, we replace the proposed exploitation algorithm with EI to examine if our exploration strategy helps EI. We refer to this setting as NBRS+EI.

Table 2 summarizes the mean and variance of the performance, measured as the “regret”, for the different benchmarks, estimated over a number of random runs. Interestingly, EI$^M$ consistently outperforms EI across the benchmarks. This shows that decreasing the exploration rate of EI can actually improve its performance.

It is easy to see that on all benchmarks, our algorithm (NBRS+NBIS) outperforms EI consistently, except for the Shekel benchmark, where EI$^M$ and NBRS+EI have slightly better performance. We suspect this is because we have not optimized our kernel widths, whereas the EI kernel width is optimized.

We also note that NBRS+EI does not lead to any consistent improvement over EI. This is possibly due to the fact that EI does not take advantage of the reduced search space produced by NBRS during selection.

Figure 3: Plot of regret versus the number of exploration samples for the NBIS algorithm. For a fixed budget nb, we run a number of experiments as follows: first we consider the case where there is 1 explorative sample (either random or NBRS) followed by nb−1 NBIS samples, next we consider the case where there are 2 explorative samples followed by nb−2 NBIS samples, and so on. For 2D and 3D functions, we let nb=15, and for high-dimensional functions, we let nb=35. This result shows that in most cases, our exploration is a) better than random, and b) necessary, since the regret achieves its minimum somewhere away from zero. On average, we need to spend about 20% of our budget on exploration, though this portion can be optimized for any specific function. The error bars show the variance of the regret over different runs; our regret variance is smaller.

### 4.2 Exploration Analysis

In the second set of experiments, we would like to compare our exploration algorithm NBRS with random exploration when using NBIS for exploitation. As discussed previously, both random exploration and NBRS fail to produce better performance when used with EI. Thus, it is interesting to see whether they can help NBIS in terms of the overall regret, and if so, which one is more effective. Figure 3 summarizes this result for all benchmarks. For a fixed budget $n_b$, we start with 1 explorative sample (either NBRS or random) followed by $n_b-1$ NBIS samples; next, we take 2 explorative samples followed by $n_b-2$ NBIS samples, and so on. In each case, we average the regret over a number of runs. The black line corresponds to NBRS exploration and the green line corresponds to random exploration. We will discuss each function in more detail later, but in general, this result shows that our exploration algorithm is a) better than random exploration and b) necessary. To see why it is necessary, notice that the minimum regret on all curves is achieved at a non-zero number of NBRS samples. This means that, unlike EI, our exploitation algorithm benefits from NBRS.

Looking closer at the results, we see that NBRS almost always leads to a smaller regret compared to random exploration. On the Shekel benchmark, random exploration has better performance if we spend the majority of the budget on exploration. However, for a reasonable amount of exploration that leads to the minimum regret (5 to 10 experiments), random exploration and NBRS achieve similar performance.

On our 6-dimensional benchmark Hartman(6), we notice that random exploration and NBRS behave very similarly. This suggests that the input space is so large that, no matter how cleverly you explore, you are unlikely to improve the performance within the limited budget of $n_b=35$.

NBRS starts from an initial point and explores the input space step by step. Imagine you are in a dark room with a torch in your hand and you want to explore the room: you start from an initial point and, little by little, walk through the space until you have explored all of it. This is exactly how NBRS does the exploration. Roughly speaking, NBRS avoids points with large posterior uncertainty: if a point is far from the previous observations, i.e., $\sigma_{x|O}$ is large, then its radius lower bound is small and it is unlikely to be chosen. We see this effect in all functions, but most clearly in the Michalewicz benchmark. When the number of explorative samples is small, this step-by-step procedure cannot cover the whole space, and the exploitation can be trapped in local optima. With enough explorative samples, NBRS can walk through the entire space fairly well, and we get the minimum regret. Beyond that point, since the space is already well explored, we are wasting samples that could be used to improve our exploitation, and the performance becomes worse.

Finally, this investigation suggests that the results in Table 2 can be further improved by taking a different number of explorative samples for each function. To minimize parameter tuning, we chose to explore with 20% of our budget. In general, this ratio can be adjusted according to the properties of the function (e.g., its Lipschitz constant).

## 5 Conclusion

In this paper, we considered the problem of maximizing an unknown, costly-to-evaluate function with a small evaluation budget. Using the Bayesian optimization framework, we proposed a two-phase exploration-exploitation algorithm that finds the maximizer of the function with few function evaluations by leveraging the Lipschitz property of the unknown function. In the exploration phase, our algorithm tries to remove as many points as possible from the search space, shrinking it. In the exploitation phase, the algorithm tries to find the point that is closest to the optimum. Our empirical results show that our algorithm outperforms EI (even under EI’s best conditions).

## References

• Anderson et al. [2000] Brigham S. Anderson, Andrew Moore, and David Cohn. A nonparametric approach to noisy and costly optimization. In ICML, 2000.
• Azimi et al. [2010a] Javad Azimi, Alan Fern, and Xiaoli Fern. Batch Bayesian optimization via simulation matching. In NIPS, 2010a.
• Azimi et al. [2010b] Javad Azimi, Xiaoli Fern, Alan Fern, Elizabeth Burrows, Frank Chaplen, Yanzhen Fan, Hong Liu, Jun Jaio, and Rebecca Schaller. Myopic policies for budgeted optimization with constrained experiments. In AAAI, 2010b.
• Brochu et al. [2009] Eric Brochu, Mike Cora, and Nando de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical Report TR-2009-23, Department of Computer Science, University of British Columbia, 2009.
• Brunato et al. [2006] Mauro Brunato, Roberto Battiti, and Srinivas Pasupuleti. A memory-based rash optimizer. In AAAI-06 Workshop on Heuristic Search, Memory Based Heuristics and Their Applications, 2006.
• Burrows et al. [2009] Elizabeth H. Burrows, Weng-Keen Wong, Xiaoli Fern, Frank W.R. Chaplen, and Roger L. Ely. Optimization of pH and nitrogen for enhanced hydrogen production by Synechocystis sp. PCC 6803 via statistical and machine learning methods. Biotechnology Progress, 25:1009–1017, 2009.
• Dixon and Szegő [1978] L.C.W. Dixon and G.P. Szegő. The Global Optimization Problem: An Introduction Toward Global Optimization. North-Holland, Amsterdam, 1978.
• Elder [1992] J.F. Elder IV. Global Rd optimization when probes are expensive: the GROPE algorithm. In IEEE International Conference on Systems, Man and Cybernetics, pages 577–582, 1992.
• Jones et al. [1993] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1):157–181, 1993.
• Jones [2001] Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345–383, 2001.
• Li et al. [2010] Wei Li, Xuerui Wang, Ruofei Zhang, Ying Cui, Jianchang Mao, and Rong Jin. Exploitation and exploration in a performance based contextual advertising system. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, pages 27–36. ACM, 2010.
• Lizotte [2008] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, Edmonton, Alberta, Canada, 2008.
• Locatelli [1997] M. Locatelli. Bayesian algorithms for one-dimensional global optimization. Journal of Global Optimization, 10(1):57–76, 1997.
• Michalewicz [1994] Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs (2nd, extended ed.). Springer-Verlag, New York, NY, USA, 1994.
• Moore et al. [1998] Andrew Moore, Jeff Schneider, Justin Boyan, and Mary Soon Lee. Q2: Memory-based active learning for optimizing noisy continuous functions. In ICML, pages 386–394, 1998.
• Rasmussen and Williams [2006] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
• Sasena [2002] M. J. Sasena. Flexibility and Efficiency Enhancement for Constrained Global Design Optimization with Kriging Approximations. PhD thesis, University of Michigan, Ann Arbor, MI, 2002.
• Schonlau [1997] M. Schonlau. Computer Experiments and Global Optimization. PhD thesis, University of Waterloo, Waterloo, Ontario, Canada, 1997.
• Stuckman [1988] B. E. Stuckman. A global search method for optimizing nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics, 18:965–977, 1988.
• Vazquez and Bect [2010] E. Vazquez and J. Bect. Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. Journal of Statistical Planning and Inference, 140(11):3088–3095, 2010.

## Appendix A Proof of Lemma 4.1

Let $f(x)$ be our prediction at any point $x$, distributed as a normal random variable with mean $\mu_x$ and variance $\sigma_x^2$, i.e., $f(x)\sim\mathcal{N}(\mu_x,\sigma_x^2)$, where $\mu_x$ and $\sigma_x$ are obtained from the Gaussian process. Suppose $y_{\max}$ is the best current observation. The likelihood of an improvement $I$ at $x$ can be calculated as

$$p\big(f(x)=y_{\max}+I\big)=\frac{1}{\sqrt{2\pi}\,\sigma_x}\exp\!\left(-\frac{(y_{\max}+I-\mu_x)^2}{2\sigma_x^2}\right). \tag{4}$$

Therefore, $\mathrm{EI}^M(x)$, the expectation of the improvement $I$ under this likelihood with $I$ restricted to $[0,\,M-y_{\max}]$, is

$$\mathrm{EI}^M(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\exp\!\left(-\frac{(y_{\max}-\mu_x)^2}{2\sigma_x^2}\right)\int_0^{M-y_{\max}} I\,\exp\!\left(-\frac{2I(y_{\max}-\mu_x)+I^2}{2\sigma_x^2}\right)dI. \tag{5}$$

Let us define

$$T=\exp\!\left(-\frac{2I(y_{\max}-\mu_x)+I^2}{2\sigma_x^2}\right), \qquad \frac{\partial T}{\partial I}=-\frac{1}{\sigma_x^2}\big(I\,T+(y_{\max}-\mu_x)\,T\big); \tag{6}$$

therefore, we get

$$I\,T=-(y_{\max}-\mu_x)\,T-\sigma_x^2\,\frac{\partial T}{\partial I}. \tag{7}$$

Using equations (7), (6), and (5), and noting that integrating $\partial T/\partial I$ evaluates $T$ at both limits $I=0$ and $I=M-y_{\max}$ (yielding $\phi$ terms at $\frac{y_{\max}-\mu_x}{\sigma_x}$ and $\frac{M-\mu_x}{\sigma_x}$, respectively), we get

$$\begin{aligned}\mathrm{EI}^M(x) &= \frac{1}{\sqrt{2\pi}\,\sigma_x}\exp\!\left(-\frac{(y_{\max}-\mu_x)^2}{2\sigma_x^2}\right)\int_0^{M-y_{\max}} I\,T\,dI \\ &= \sigma_x\,\phi\!\left(\frac{y_{\max}-\mu_x}{\sigma_x}\right)-\sigma_x\,\phi\!\left(\frac{M-\mu_x}{\sigma_x}\right) \\ &\quad -(y_{\max}-\mu_x)\int_0^{M-y_{\max}}\frac{1}{\sqrt{2\pi}\,\sigma_x}\exp\!\left(-\frac{1}{2}\left(\frac{y_{\max}+I-\mu_x}{\sigma_x}\right)^2\right)dI.\end{aligned} \tag{8}$$

Substituting

$$I^*=\frac{y_{\max}+I-\mu_x}{\sigma_x},\qquad dI^*=\frac{dI}{\sigma_x}, \tag{9}$$

equation (8) can be written as

$$\begin{aligned}\mathrm{EI}^M(x) &= \sigma_x\,\phi\!\left(\frac{y_{\max}-\mu_x}{\sigma_x}\right)-\sigma_x\,\phi\!\left(\frac{M-\mu_x}{\sigma_x}\right)-(y_{\max}-\mu_x)\int_{\frac{y_{\max}-\mu_x}{\sigma_x}}^{\frac{M-\mu_x}{\sigma_x}}\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{{I^*}^2}{2}\right)dI^* \\ &= \sigma_x\left(\phi\!\left(\frac{y_{\max}-\mu_x}{\sigma_x}\right)-\phi\!\left(\frac{M-\mu_x}{\sigma_x}\right)\right)-(y_{\max}-\mu_x)\left(\Phi\!\left(\frac{M-\mu_x}{\sigma_x}\right)-\Phi\!\left(\frac{y_{\max}-\mu_x}{\sigma_x}\right)\right).\end{aligned} \tag{10}$$

Letting

$$u_1=\frac{y_{\max}-\mu_x}{\sigma_x},\qquad u_2=\frac{M-\mu_x}{\sigma_x},$$

we can finally derive the truncated expected improvement at any given point as

$$\mathrm{EI}^M(x)=\sigma_x\big(u_1\Phi(u_1)-u_1\Phi(u_2)+\phi(u_1)-\phi(u_2)\big), \tag{11}$$

where $\Phi$ is the standard normal cumulative distribution function and $\phi$ is the standard normal density.