Optimisation problems arise in numerous fields, ranging from science and engineering to economics and management (Brochu et al., 2010). In classical optimisation tasks, the objective function is usually known and cheap to evaluate (Hennig and Schuler, 2012). In many situations, however, these assumptions do not hold. In clinical trials, financial investments or sensor-network design, for example, it is very costly to draw a sample from the latent function underlying the real-world process (Brochu et al., 2010). The objective functions in such problems are generally non-convex, and their closed-form expressions and derivatives are unknown (Shahriari et al., 2016). Bayesian optimisation is a powerful tool for tackling such optimisation challenges (Brochu et al., 2010).
A core step in Bayesian optimisation is to define an acquisition function, which uses the available observations to recommend the next query location (Shahriari et al., 2016). There are many types of acquisition functions, such as Probability of Improvement (PI) (Kushner, 1964), Expected Improvement (EI) (Močkus et al., 1978; Jones et al., 1998) and Gaussian Process Upper Confidence Bound (GP-UCB) (Srinivas et al., 2009). The most recent type is based on information theory and offers a new perspective: it selects the sequence of sampling locations based on the entropy of the distribution over the unknown minimiser (Shahriari et al., 2016). Information-theoretic approaches guide our evaluations to locations where we can maximise our learning about the unknown minimum, rather than to locations where we expect to obtain lower function values (Hennig and Schuler, 2012). Such methods have demonstrated impressive empirical performance and tend to outperform traditional methods on tasks with highly multimodal and noisy latent functions.
One popular information-based acquisition function is Predictive Entropy Search (PES) (Villemonteix et al., 2009; Hennig and Schuler, 2012; Hernández-Lobato et al., 2014). However, it is very slow to evaluate in comparison with traditional methods like EI, PI and GP-UCB, and it faces serious constraints in its application. For example, the implementation of PES requires the first and second partial derivatives as well as the spectral density of the Gaussian process kernel function (Hernández-Lobato et al., 2014; Requeima, 2016), which limits our kernel choices. Moreover, PES works in the input space and is thus less efficient in higher-dimensional problems (Wang and Jegelka, 2017). More recent methods such as Output-space Predictive Entropy Search (OPES) (Hoffman and Ghahramani, 2015) and Max-value Entropy Search (MES) (Wang and Jegelka, 2017) improve on PES by focusing on the information content in the output space instead of the input space. However, current entropy search methods, whether dealing with the minimiser or the minimum value, all involve two separate sampling processes: 1) sampling hyperparameters for marginalisation, and 2) sampling the global minimum/minimiser for entropy computation. The second sampling process not only contributes significantly to the computational burden of these information-based acquisition functions, but also requires the construction of a good approximation of the objective function based on Bochner's theorem (Hernández-Lobato et al., 2014), which limits the kernel choices to stationary ones (Bochner, 1959).
In view of the limitations of the existing methods, we propose a fast information-theoretic Bayesian optimisation technique (FITBO). Inspired by the Bayesian integration work of Gunter et al. (2014), the central contribution of our technique is to approximate any black-box function in a parabolic form: $f(x) = \eta + \frac{1}{2} g(x)^2$. The global minimum is explicitly represented by a hyperparameter $\eta$, which can be sampled together with the other hyperparameters. As a result, our approach has the following three major advantages:
Our approach reduces the expensive process of sampling the global minimum/minimiser to the much more efficient process of sampling one additional hyperparameter, thus overcoming the speed bottleneck of information-theoretic approaches.
Our approach faces fewer constraints on the choice of appropriate kernel functions for the Gaussian process prior.
Similar to MES (Wang and Jegelka, 2017), our approach works on information in the output space and thus is more efficient in high dimensional problems.
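To make the transformation concrete, the parabolic form and its inverse can be sketched in a few lines. This is an illustrative Python sketch (the paper's implementation is in Matlab, and all names here are ours): given noise-free observations of $f$ and a sampled value of $\eta$, observations of $g$ follow directly.

```python
import numpy as np

def to_g_space(f_values, eta):
    """Map noise-free observations of f to observations of g under the
    parabolic model f(x) = eta + 0.5 * g(x)^2, where eta is the
    hyperparameter representing the global minimum."""
    # Requires eta <= min(f); g is taken as the non-negative root.
    return np.sqrt(2.0 * (f_values - eta))

def from_g_space(g_values, eta):
    """Inverse map: recover f from g."""
    return eta + 0.5 * g_values ** 2

f_obs = np.array([1.3, 0.7, 2.1])   # made-up observations
eta = 0.5                           # hypothetical sampled value of the minimum
g_obs = to_g_space(f_obs, eta)
assert np.allclose(from_g_space(g_obs, eta), f_obs)
```

Because $\eta$ enters the model only through this transformation, sampling it alongside the kernel hyperparameters replaces the expensive minimum-sampling step of earlier methods.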
2 Fast Information-theoretic Bayesian Optimisation
Information-theoretic techniques aim to reduce the uncertainty about the unknown global minimiser $x_\star$ by selecting the query point that leads to the largest reduction in entropy of the distribution $p(x_\star \mid \mathcal{D}_n)$ (Hennig and Schuler, 2012). The acquisition function for such techniques has the form (Hennig and Schuler, 2012; Hernández-Lobato et al., 2014):
$$\alpha_n(x) = H\big[p(x_\star \mid \mathcal{D}_n)\big] - \mathbb{E}_{p(y \mid \mathcal{D}_n, x)}\Big[H\big[p(x_\star \mid \mathcal{D}_n \cup \{(x, y)\})\big]\Big].$$
PES makes use of the symmetry of mutual information and arrives at the following equivalent acquisition function:
$$\alpha_n(x) = H\big[p(y \mid \mathcal{D}_n, x)\big] - \mathbb{E}_{p(x_\star \mid \mathcal{D}_n)}\Big[H\big[p(y \mid \mathcal{D}_n, x, x_\star)\big]\Big],$$
where $p(y \mid \mathcal{D}_n, x, x_\star)$ is the predictive posterior distribution for $y$ conditioned on the observed data $\mathcal{D}_n$, the test location $x$ and the global minimiser $x_\star$ of the objective function.
FITBO harnesses the same information-theoretic thinking but measures the entropy of the latent global minimum $\eta = f(x_\star)$ instead of that of the global minimiser $x_\star$. Thus, the acquisition function of the FITBO method is the mutual information between the function minimum $\eta$ and the next query point (Wang and Jegelka, 2017). In other words, FITBO aims to select the next query point that minimises the entropy of the global minimum:
$$\alpha_n(x) = H\big[p(y \mid \mathcal{D}_n, x)\big] - \mathbb{E}_{p(\eta \mid \mathcal{D}_n)}\Big[H\big[p(y \mid \mathcal{D}_n, x, \eta)\big]\Big].$$
This idea of changing entropy computation from the input space to the output space is also shared by Hoffman and Ghahramani (2015) and Wang and Jegelka (2017). Hence, the acquisition function of the FITBO method is very similar to those of OPES (Hoffman and Ghahramani, 2015) and MES (Wang and Jegelka, 2017).
However, our novel contribution is to express the unknown objective function in the parabolic form $f(x) = \eta + \frac{1}{2} g(x)^2$, thus representing the global minimum by a hyperparameter $\eta$ and circumventing the laborious process of sampling the global minimum. The FITBO acquisition function can then be reformulated as:
$$\alpha_n(x) = H\Big[\int p(y \mid \mathcal{D}_n, x, \eta)\, p(\eta \mid \mathcal{D}_n)\, \mathrm{d}\eta\Big] - \int H\big[p(y \mid \mathcal{D}_n, x, \eta)\big]\, p(\eta \mid \mathcal{D}_n)\, \mathrm{d}\eta.$$
The intractable integral terms can be approximated by drawing samples of $\eta$ from the posterior distribution $p(\eta \mid \mathcal{D}_n)$ and applying a Monte Carlo method (Hernández-Lobato et al., 2014). The predictive posterior distribution $p(y \mid \mathcal{D}_n, x, \eta)$ can be turned into a neat Gaussian form by applying a local linearisation technique to our parabolic transformation, as described in Section 2.1. The first term in the above FITBO acquisition function is therefore the entropy of a Gaussian mixture, which is intractable and demands approximation, as described in Section 2.3. The second term is the expected entropy of a one-dimensional Gaussian distribution and can be computed analytically, because the entropy of a Gaussian has the closed form:
$$H\big[p(y \mid \mathcal{D}_n, x, \eta)\big] = \frac{1}{2} \log\big[2 \pi e\, (\sigma_f^2(x) + \sigma_n^2)\big],$$
where $\sigma_f^2(x)$ is the predictive variance of $f$ at $x$ and $\sigma_n^2$ is the variance of the observation noise.
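The Gaussian entropy closed form above is easy to verify numerically. A small illustrative Python check (`sigma_f2` and `sigma_n2` are made-up values) compares the analytic expression against direct integration of $-p \log p$:

```python
import numpy as np
from scipy.integrate import quad

def gaussian_entropy(var):
    """Closed-form differential entropy of N(mu, var): 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

# Check against direct numerical integration of -p log p.
sigma_f2, sigma_n2 = 0.8, 0.1          # illustrative predictive and noise variances
var = sigma_f2 + sigma_n2
pdf = lambda y: np.exp(-0.5 * y**2 / var) / np.sqrt(2.0 * np.pi * var)
h_numeric, _ = quad(lambda y: -pdf(y) * np.log(pdf(y)), -20.0, 20.0)
assert abs(h_numeric - gaussian_entropy(var)) < 1e-6
```

Since this term is analytic, only the first (Gaussian mixture) term of the acquisition requires approximation.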
2.1 Parabolic Transformation and Predictive Posterior Distribution
Gunter et al. (2014) use a square-root transformation on the integrand in their warped sequential active Bayesian integration method to ensure non-negativity. Inspired by this work, we express any unknown objective function in the parabolic form:
$$f(x) = \eta + \frac{1}{2} g(x)^2,$$
where $\eta$ is the global minimum of the objective function. Given the noise-free observation data $\mathcal{D}_n = \{(x_i, f_i)\}_{i=1}^n$, the corresponding observation data on $g$ are $\{(x_i, g_i)\}_{i=1}^n$, where $g_i = \sqrt{2 (f_i - \eta)}$.
We impose a zero-mean Gaussian process prior on $g$, $g \sim \mathcal{GP}(0, k)$, so that the posterior distribution for $g$ conditioned on the observation data and the test point $x$ also follows a Gaussian process, with posterior mean $m_g(x)$ and posterior variance $\sigma_g^2(x)$.
The parabolic transformation causes the distribution of $f$ at any $x$ to become a non-central $\chi^2$ process, making the analysis intractable. To tackle this problem and obtain a posterior distribution that is also Gaussian, we employ a linearisation technique (Gunter et al., 2014).
We perform a local linearisation of the parabolic transformation $f(g) = \eta + \frac{1}{2} g^2$ around $g_0$ and obtain $f(g) \approx f(g_0) + f'(g_0)\,(g - g_0)$, where the gradient $f'(g_0) = g_0$. By setting $g_0$ to the mode of the posterior distribution for $g$ (i.e. $g_0 = m_g(x)$), we obtain an expression for $f$ which is linear in $g$:
$$f(x) \approx \eta + \frac{1}{2} m_g^2(x) + m_g(x)\,\big(g(x) - m_g(x)\big).$$
Since an affine transformation of a Gaussian process remains Gaussian, the predictive posterior distribution for $f$ now has the closed form:
$$p(f \mid \mathcal{D}_n, x, \eta) \approx \mathcal{N}\big(f;\; \eta + \tfrac{1}{2} m_g^2(x),\; m_g^2(x)\, \sigma_g^2(x)\big).$$
However, in real-world situations, we do not have access to the true function values but only noisy observations of the function, $y = f(x) + \epsilon$, where $\epsilon$ is assumed to be independent and identically distributed Gaussian noise with variance $\sigma_n^2$ (Rasmussen and Williams, 2006). Given noisy observation data $\mathcal{D}_n = \{(x_i, y_i)\}_{i=1}^n$, the predictive posterior distribution becomes:
$$p(y \mid \mathcal{D}_n, x, \eta) \approx \mathcal{N}\big(y;\; \eta + \tfrac{1}{2} m_g^2(x),\; m_g^2(x)\, \sigma_g^2(x) + \sigma_n^2\big).$$
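The linearised posterior can be sketched as follows for 1-D inputs. This is an illustrative Python sketch under assumed choices (RBF kernel with unit hyperparameters, our own function names), not the paper's Matlab implementation:

```python
import numpy as np

def rbf(a, b, ls=1.0, sf=1.0):
    """Squared exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ls) ** 2)

def linearised_posterior(x_star, x, g_obs, eta, sigma_n2=1e-6, jitter=1e-8):
    """Mean/variance of the linearised Gaussian posterior for y = f(x) + noise
    under f = eta + 0.5 * g^2, with a zero-mean GP on g."""
    K = rbf(x, x) + jitter * np.eye(len(x))
    k_s = rbf(x_star, x)
    m_g = k_s @ np.linalg.solve(K, g_obs)                  # posterior mean of g
    v_g = np.diag(rbf(x_star, x_star) - k_s @ np.linalg.solve(K, k_s.T))
    mean_y = eta + 0.5 * m_g ** 2                          # linearised mean
    var_y = m_g ** 2 * np.maximum(v_g, 0.0) + sigma_n2     # linearised variance
    return mean_y, var_y

# At the (noise-free) training inputs the posterior mean should recover f.
x = np.array([0.0, 1.0, 2.0])
f_obs = np.array([0.5, 0.2, 0.9])            # made-up observations
eta = 0.1                                    # hypothetical sampled minimum
g_obs = np.sqrt(2.0 * (f_obs - eta))
mean_y, var_y = linearised_posterior(x, x, g_obs, eta)
```

Note how the predictive mean is bounded below by $\eta$ by construction, which is exactly what lets a hyperparameter sample of $\eta$ play the role of the global minimum.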
2.2 Hyperparameter Treatment
Hyperparameters are the free parameters of the model, such as the output scale and the characteristic length scales in the kernel function of the Gaussian process, as well as the noise variance. We use $\theta$ to represent a vector of hyperparameters that includes all the kernel parameters and the noise variance. Recall that we introduce a new hyperparameter $\eta$ in our model to represent the global minimum. To ensure that $\eta$ is not greater than the minimum observation $y_{\min}$, we assume that the gap $y_{\min} - \eta$ follows a broad normal distribution, so that the prior for $\eta$ takes the form of a broad Gaussian supported below $y_{\min}$.
The most popular approach to hyperparameter treatment is to learn the hyperparameter values via maximum likelihood estimation (MLE) or maximum a posteriori estimation (MAP). However, both MLE and MAP are undesirable, as they give point estimates and ignore our uncertainty about the hyperparameters (Hernández-Lobato et al., 2014). In a fully Bayesian treatment of the hyperparameters, we should consider all possible hyperparameter values. This can be done by marginalising the terms in the acquisition function with respect to the posterior $p(\theta \mid \mathcal{D}_n)$, where $\theta$ now includes the kernel parameters, the noise variance and $\eta$:
$$\alpha_n(x) = H\Big[\int p(y \mid \mathcal{D}_n, x, \theta)\, p(\theta \mid \mathcal{D}_n)\, \mathrm{d}\theta\Big] - \int H\big[p(y \mid \mathcal{D}_n, x, \theta)\big]\, p(\theta \mid \mathcal{D}_n)\, \mathrm{d}\theta.$$
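With Monte Carlo marginalisation over hyperparameter samples, the acquisition at one test point reduces to the entropy of an equal-weight Gaussian mixture minus the average component entropy. A hedged Python sketch (function and argument names are ours; the mixture entropy is estimated on a simple grid rather than with the adaptive quadrature the paper uses):

```python
import numpy as np

def fitbo_acquisition(means, variances, sigma_n2):
    """Sketch of the marginalised FITBO acquisition at one test point.

    means[j], variances[j] are the predictive mean/variance of y under the
    j-th hyperparameter sample (which includes eta); the acquisition is the
    mixture entropy minus the mean component entropy."""
    m = np.asarray(means, float)
    v = np.asarray(variances, float) + sigma_n2
    # Integration grid covering all components out to 8 standard deviations.
    lo = (m - 8.0 * np.sqrt(v)).min()
    hi = (m + 8.0 * np.sqrt(v)).max()
    y = np.linspace(lo, hi, 4001)
    dy = y[1] - y[0]
    comp = np.exp(-0.5 * (y[None, :] - m[:, None]) ** 2 / v[:, None]) \
        / np.sqrt(2.0 * np.pi * v[:, None])
    mix = comp.mean(axis=0)                     # equal-weight Gaussian mixture
    h_mix = -np.sum(mix * np.log(np.clip(mix, 1e-300, None))) * dy
    h_comp = np.mean(0.5 * np.log(2.0 * np.pi * np.e * v))
    return h_mix - h_comp
```

When all hyperparameter samples agree, the mixture collapses to a single Gaussian and the acquisition is zero; disagreement between samples (i.e. information to be gained) makes it positive, as mutual information must be.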
2.3 Approximation for the Gaussian Mixture Entropy
The entropy of a Gaussian mixture is intractable and can be estimated via a number of methods: the Taylor expansion proposed in (Huber et al., 2008), numerical integration and Monte Carlo integration. Of these three, our experimentation revealed that numerical integration (in particular, an adaptive Simpson’s method) was clearly the most performant for our application (see the supplementary material). Note that our Gaussian mixture is univariate.
By fitting a Gaussian to the Gaussian mixture via moment matching, we can obtain a closed-form upper bound for the first entropy term, $H[\mathrm{GM}] \le \frac{1}{2} \log\big(2 \pi e\, \mathbb{V}[\mathrm{GM}]\big)$, where $\mathbb{V}[\mathrm{GM}]$ is the variance of the Gaussian mixture; this further enhances the computational speed of the FITBO approach. However, the moment-matching approach results in a looser approximation than numerical integration (shown in the supplementary material), and we compare both approaches in our experiments in Section 3.
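The moment-matching bound is straightforward to compute. A minimal Python sketch (names are ours):

```python
import numpy as np

def mixture_variance(weights, means, variances):
    """Variance of a univariate Gaussian mixture via moment matching."""
    w = np.asarray(weights, float); w = w / w.sum()
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    mu = np.sum(w * m)
    return np.sum(w * (v + m ** 2)) - mu ** 2

def entropy_upper_bound(weights, means, variances):
    """Moment-matched Gaussian bound: H[GM] <= 0.5 * log(2*pi*e*Var[GM])."""
    return 0.5 * np.log(2.0 * np.pi * np.e
                        * mixture_variance(weights, means, variances))
```

For a single component the bound is tight (it equals the Gaussian entropy exactly); for well-separated components it becomes loose, which is the trade-off FITBO-MM accepts in exchange for speed.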
2.4 The Algorithm
The procedure for computing the FITBO acquisition function is summarised in Algorithm 1. Figure 1 illustrates the sampling behaviour of the FITBO method on a simple 1D Bayesian optimisation problem. The optimisation process is started with 3 initial observations. As more samples are taken, the mean of the posterior distribution for the objective function gradually comes to resemble the objective function, and the distribution of $\eta$ converges to the global minimum.
3 Experiments
We conduct a series of experiments to test the empirical performance of FITBO and compare it with other popular acquisition functions. In this section, FITBO denotes the version using numerical integration to estimate the entropy of the Gaussian mixture, while FITBO-MM denotes the version using moment matching. In all experiments, we adopt a zero-mean Gaussian process prior with the squared exponential kernel function and use the elliptical slice sampler (Murray et al., 2010) for sampling the hyperparameters and $\eta$. For the implementation of EI, PI, GP-UCB, MES and PES, we use the open-source Matlab code by Wang and Jegelka (2017) and Hernández-Lobato et al. (2014). Our Matlab code for FITBO will be available at https://github.com/rubinxin/FITBO. We use the variant of MES that samples the global minimum from an approximated posterior function $\hat{f}(x) = \phi(x)^\top w$, where $\phi(x)$ is an $m$-dimensional feature vector and $w$ is a Gaussian weight vector (Wang and Jegelka, 2017). This is also the minimiser sampling strategy adopted by PES (Hernández-Lobato et al., 2014). Sampling $w$ from its posterior distribution is computationally expensive when the number of features $m$ is large (Hernández-Lobato et al., 2014). Minimising $\hat{f}$ to within $\zeta$ accuracy using any grid search or branch-and-bound optimiser requires $\mathcal{O}(\zeta^{-d})$ calls to $\hat{f}$ for $d$-dimensional input data (Kandasamy et al., 2015). For both PES and MES, we apply their fastest versions, which draw only 1 minimum or minimiser sample to estimate the acquisition function.
3.1 Runtime Tests
The first set of experiments measures and compares the runtime of evaluating the acquisition functions of GP-UCB, PI, EI, PES, MES, FITBO and FITBO-MM. All timing tests were performed exclusively on a 2.3 GHz Intel Core i5. The runtime measured excludes the time taken for sampling hyperparameters as well as for optimising the acquisition functions. The methodology of the tests can be summarised as follows:
Generate 10 initial observations from a $d$-dimensional test function and sample a set of hyperparameters from the log posterior distribution using the elliptical slice sampler.
Use this set of hyperparameters to evaluate all acquisition functions at 100 test points.
Repeat steps 1 and 2 for 100 different initialisations and compute the mean and standard deviation of the runtime taken to evaluate the various acquisition functions.
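The steps above can be sketched as a small timing harness (illustrative Python; the actual experiments used the Matlab implementations, and the names here are ours):

```python
import time
import numpy as np

def time_acquisition(acq_fn, test_points, n_repeats=10):
    """Time the evaluation of an acquisition function over a batch of test
    points, repeated over several runs; return mean and std of wall-clock
    time. Excludes hyperparameter sampling and acquisition optimisation."""
    times = []
    for _ in range(n_repeats):
        t0 = time.perf_counter()
        acq_fn(test_points)                 # evaluation only
        times.append(time.perf_counter() - t0)
    return float(np.mean(times)), float(np.std(times))

# Usage with a stand-in quadratic "acquisition function".
mean_t, std_t = time_acquisition(lambda p: np.sum(p ** 2, axis=1),
                                 np.random.rand(100, 2))
```

Using a monotonic high-resolution clock such as `time.perf_counter` matters here, since the per-evaluation times being compared are on the order of milliseconds.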
We did not include the time for sampling $\eta$ alone in the runtime of evaluating FITBO and FITBO-MM, because $\eta$ is sampled jointly with the other hyperparameters and does not add significantly to the overall sampling burden. In fact, we have measured that sampling $\eta$ with the elliptical slice sampler adds 0.09 seconds on average when drawing 2,000 samples and 0.93 seconds when drawing 20,000 samples. Note further that we limit all methods to a fixed number of hyperparameter samples in both the runtime tests and the performance experiments: this imparts a slight performance penalty on our method, which must sample from a hyperparameter space of one additional dimension.
The above tests are repeated for different hyperparameter sample sizes and for input data of different dimensions $d$. The results are presented graphically in Figure 2, with the evaluation runtime expressed on a log10 scale, and the exact numerical results for the methods that are very close in runtime are presented in Tables 1 and 2.
Figure 2 shows that FITBO is significantly faster to evaluate than PES and MES across the hyperparameter sample sizes used and across problems of different input dimensions. Moreover, FITBO even gains a clear speed advantage over EI. The moment-matching technique further enhances the speed of FITBO, making FITBO-MM comparable with, if not slightly faster than, simple algorithms like PI and GP-UCB. In addition, we notice that the runtime of evaluating FITBO-MM, EI, PI and GP-UCB tends to remain constant regardless of the input dimension, while the runtime of PES and MES tends to increase with the input dimension. Thus, our approach is more efficient and more applicable to high-dimensional problems.
3.2 Tests with Benchmark Functions
We perform optimisation tasks on three challenging benchmark functions: Branin (defined over $[-5, 10] \times [0, 15]$), Eggholder (defined over $[-512, 512]^2$) and Hartmann (defined over $[0, 1]^6$). In all tests, we set the observation noise to a small fixed variance and resample all the hyperparameters after each function evaluation. In evaluating the optimisation performance of the various Bayesian optimisation methods, we use the two common metrics adopted by Hennig and Schuler (2012). The first metric is the immediate regret (IR), defined as:
$$\mathrm{IR}_n = \big| f(\hat{x}_n) - f(x_\star) \big|,$$
where $x_\star$ is the location of the true global minimum and $\hat{x}_n$ is the best guess recommended by a Bayesian optimiser after $n$ iterations, which corresponds to the minimiser of the posterior mean. The second metric is the Euclidean distance of an optimiser's recommendation from the true global minimiser $x_\star$, defined as:
$$d_n = \lVert \hat{x}_n - x_\star \rVert_2.$$
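The two metrics can be written down directly (a small Python sketch with our own function names):

```python
import numpy as np

def immediate_regret(f, x_hat, f_star):
    """IR_n = |f(x_hat_n) - f(x_star)| for a recommendation x_hat_n."""
    return abs(f(x_hat) - f_star)

def distance_to_minimiser(x_hat, x_star):
    """Euclidean distance between a recommendation and the true minimiser."""
    return float(np.linalg.norm(np.asarray(x_hat, float)
                                - np.asarray(x_star, float)))

# Toy check on f(x) = (x - 1)^2, with minimiser x_star = 1 and minimum 0.
f = lambda x: (x - 1.0) ** 2
```

The two metrics are complementary: IR measures how good the returned function value is, while the distance measures how well the minimiser itself has been located; as the Eggholder results below show, a method can do well on one and poorly on the other.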
We compute the median IR and the median distance $d_n$ over 40 random initialisations. At each initialisation, all Bayesian optimisation algorithms start from 3 random observations for the Branin-2D and Eggholder-2D problems and from 9 random observations for the Hartmann-6D problem.
The results are presented in Figure 3. The plots on the left show the median IR achieved by each approach as more evaluation steps are taken. The plots on the right show the median distance $d_n$ between each optimiser's recommended global minimiser and the true global minimiser. The error bars indicate one standard deviation.
In the case of Branin-2D, FITBO and FITBO-MM lose out to other methods initially but surpass other methods after 50 evaluations. One interesting point we would like to illustrate through the Branin problem is the fundamentally different mechanisms behind information-based approaches like FITBO and improvement-based approaches like EI. As shown in Figure 4, FITBO is much more explorative compared to EI in taking new evaluations because FITBO selects the query points that maximise the information gain about the minimiser instead of those that lead to an improvement over the best function value observed. FITBO successfully finds all three global minimisers but EI quickly concentrates its searches into regions of low function values, missing out one of the global minimisers.
In the case of Eggholder-2D, which is more complicated and multimodal, FITBO and FITBO-MM do not perform as well as the other methods in finding lower function values, but outperform all competitors in locating the global minimiser by a large margin. One reason is that the function value near the global minimiser of Eggholder-2D rises sharply. Thus, although FITBO and FITBO-MM better identify the location of the true global minimum, they return higher function values than the other methods, which are trapped at good local minima.
As for a higher dimensional problem, Hartmann-6D, FITBO and FITBO-MM outperform all other methods in finding both the lower function value and the location of the global minimum. In all three tasks, FITBO-MM, despite using a looser upper bound of the Gaussian mixture entropy, still manages to demonstrate similar, sometimes better, results compared with FITBO. This shows that the performance of our information-theoretic approach is robust to slightly worse approximation of the Gaussian mixture entropy.
3.3 Tests with Real-world Problems
Finally, we experiment with a series of real-world optimisation problems. The first problem (Boston) returns the L2 validation loss of a 1-hidden-layer neural network (Wang and Jegelka, 2017) on the Boston housing dataset (Bache and Lichman, 2013). The dataset is randomly partitioned into train/validation/test sets, and the neural network is trained with Levenberg-Marquardt optimisation. The 2 variables tuned with Bayesian optimisation are the number of neurons and the damping factor.
The second problem (MNIST-SVM) returns the validation error of an SVM classifier on the MNIST dataset (LeCun et al., 1998). The SVM classifier adopts a radial basis kernel, and the 2 variables to optimise are the kernel scale parameter and the box constraint.
The third problem (Cancer) returns the cross-entropy loss of a 1-hidden-layer neural network (Wang and Jegelka, 2017) on the validation set of the breast cancer dataset (Bache and Lichman, 2013). This neural network is trained with the scaled conjugate gradient method, and we use Bayesian optimisation to tune the number of neurons, the damping factor, the increase factor and the decrease factor.
We initialise all Bayesian optimisation algorithms with 3 random observations and set the observation noise to a small fixed variance. All experiments are repeated 40 times. In each case, the ground truth is unknown, but our aim is to minimise the validation loss; the corresponding loss functions are therefore used to compare the performance of the various Bayesian optimisation algorithms.
Figure 5 shows the median of the best validation losses achieved by all Bayesian optimisation algorithms over successive iterations for the Boston and MNIST-SVM problems. FITBO and FITBO-MM perform competitively against their information-theoretic counterparts, and all information-theoretic methods outperform EI in these real-world applications.
As for the Cancer problem (Figure 6), FITBO and FITBO-MM converge to a stable median validation loss much faster than MES and EI, and are almost on par with PES. Examining the mean validation loss shown in the right plot of Figure 6, both FITBO and FITBO-MM demonstrate better average performance than all other methods, with FITBO gaining a slight advantage over FITBO-MM. Moreover, the comparable performance of FITBO and FITBO-MM in all three real-world tasks re-affirms the robustness of our approach to the entropy approximation: the moment-matching technique, while improving the speed of the algorithm, does not really compromise performance.
4 Conclusion
We have proposed a novel information-theoretic approach for Bayesian optimisation, FITBO. Through the use of the parabolic transformation and the hyperparameter $\eta$, FITBO enjoys the merits of less sampling effort, more flexible kernel choices and much simpler implementation in comparison with other information-based methods like PES and MES. As a result, its computational speed outperforms current information-based methods by a large margin, and even exceeds that of EI, putting it on par with PI and GP-UCB. While requiring much lower runtime, it still achieves optimisation performance as good as, or better than, PES and MES in a variety of tasks. The FITBO approach therefore offers a very efficient and competitive alternative to existing Bayesian optimisation approaches.
We wish to thank Roman Garnett and Tom Gunter for the insightful discussions and Zi Wang for sharing the Matlab implementation of EI, PI, GP-UCB, MES and PES. We would also like to thank Favour Mandanji Nyikosa, Logan Graham, Arno Blaas and Olga Isupova for their helpful comments about improving the paper.
- Bache and Lichman  K. Bache and M. Lichman. UCI machine learning repository. 2013.
- Bochner  S. Bochner. Lectures on Fourier Integrals: With an Author’s Suppl. on Monotonic Functions, Stieltjes Integrals and Harmonic Analysis. Transl. from the Orig. by Morris Tennenbaum and Harry Pollard. University Press, 1959.
- Brochu et al.  E. Brochu, V. M. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
- Gunter et al.  T. Gunter, M. A. Osborne, R. Garnett, P. Hennig, and S. J. Roberts. Sampling for inference in probabilistic models with fast Bayesian quadrature. In Advances in neural information processing systems, pages 2789–2797, 2014.
- Hennig and Schuler  P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13(Jun):1809–1837, 2012.
- Hernández-Lobato et al.  J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in neural information processing systems, pages 918–926, 2014.
- Hoffman and Ghahramani  M. W. Hoffman and Z. Ghahramani. Output-space predictive entropy search for flexible global optimization. In the NIPS workshop on Bayesian optimization, 2015.
- Huber et al.  M. F. Huber, T. Bailey, H. Durrant-Whyte, and U. D. Hanebeck. On entropy approximation for Gaussian mixture random vectors. In Multisensor Fusion and Integration for Intelligent Systems, 2008. MFI 2008. IEEE International Conference on, pages 181–188. IEEE, 2008.
- Jones et al.  D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13(4):455–492, 1998.
- Kandasamy et al.  K. Kandasamy, J. Schneider, and B. Póczos. High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning, pages 295–304, 2015.
- Kushner  H. J. Kushner. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering, 86(1):97–106, 1964.
- LeCun et al.  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Močkus et al.  J. Močkus, V. Tiesis, and A. Žilinskas. The application of Bayesian methods for seeking the extremum. In Toward Global Optimization, volume 2, 1978.
- Murray et al.  I. Murray, R. Prescott Adams, and D. J. MacKay. Elliptical slice sampling. In International Conference on Artificial Intelligence and Statistics, 2010.
- Rasmussen and Williams  C. E. Rasmussen and C. K. Williams. Gaussian processes for machine learning, volume 1. MIT press Cambridge, 2006.
- Requeima  J. R. Requeima. Integrated predictive entropy search for Bayesian optimization. Master's thesis, University of Cambridge, 2016.
- Shahriari et al.  B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
- Snoek et al.  J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951–2959, 2012.
- Srinivas et al.  N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
- Villemonteix et al.  J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009. URL http://www.springerlink.com/index/T670U067V47922VK.pdf.
- Wang and Jegelka  Z. Wang and S. Jegelka. Max-value entropy search for efficient Bayesian optimization. arXiv preprint arXiv:1703.01968, 2017.