Batched High-dimensional Bayesian Optimization via Structural Kernel Learning

03/06/2017 · by Zi Wang et al.

Optimization of high-dimensional black-box functions is an extremely challenging problem. While Bayesian optimization has emerged as a popular approach for optimizing black-box functions, its applicability has been limited to low-dimensional problems due to its computational and statistical challenges arising from high-dimensional settings. In this paper, we propose to tackle these challenges by (1) assuming a latent additive structure in the function and inferring it properly for more efficient and effective BO, and (2) performing multiple evaluations in parallel to reduce the number of iterations required by the method. Our novel approach learns the latent structure with Gibbs sampling and constructs batched queries using determinantal point processes. Experimental validations on both synthetic and real-world functions demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches.


1 Introduction

Optimization is one of the fundamental pillars of modern machine learning. Considering that most modern machine learning methods involve the solution of some optimization problem, it is not surprising that many recent breakthroughs in this area have been on the back of more effective techniques for optimization. A case in point is deep learning, whose rise has been mirrored by the development of numerous techniques like batch normalization.

While modern algorithms have been shown to be very effective for convex optimization problems defined over continuous domains, the same cannot be stated for non-convex optimization, which has generally been dominated by stochastic techniques. During the last decade, Bayesian optimization has emerged as a popular approach for optimizing black-box functions. However, its applicability is limited to low-dimensional problems because of computational and statistical challenges that arise from optimization in high-dimensional settings.

In the past, these two problems have been addressed by assuming a simpler underlying structure of the black-box function. For instance, Djolonga et al. (2013) assume that the function being optimized has a low-dimensional effective subspace, and learn this subspace via low-rank matrix recovery. Similarly, Kandasamy et al. (2015) assume an additive structure for the function, where different constituent functions operate on disjoint low-dimensional subspaces. The subspace decomposition can be partially optimized by searching possible decompositions and choosing the one with the highest GP marginal likelihood (treating the decomposition as a hyper-parameter of the GP); fully optimizing the decomposition is, however, intractable. Li et al. (2016) extended (Kandasamy et al., 2015) to functions with a projected-additive structure, and approximate the projection matrix via projection pursuit under the assumption that the projected subspaces have the same, known dimensions. The aforementioned approaches share the computational challenge of learning the groups of decomposed subspaces without assuming that the dimensions of the subspaces are known. Both (Kandasamy et al., 2015) and subsequently (Li et al., 2016) adapt the decomposition by maximizing the GP marginal likelihood every certain number of iterations. However, this maximization is computationally intractable due to the combinatorial nature of the partitions of the feature space, which forces prior work to adopt randomized search heuristics.

In this paper, we develop a new formulation of Bayesian optimization specialized for high dimensions. One of the key contributions of this work is a formulation that interprets prior work on high-dimensional Bayesian optimization (HDBO) through the lens of structured kernels and places a prior on the kernel structure. This formulation enables learning the decomposition of the function domain simultaneously with the optimization.

Prior work on latent decomposition of the feature space considers the setting where evaluations are performed one at a time. This approach makes Bayesian optimization time-consuming for problems where a large number of function evaluations need to be made, as is the case in high dimensions. To overcome this restriction, we extend our approach to a batched version that allows multiple function evaluations to be performed in parallel (Desautels et al., 2014; González et al., 2016; Kathuria et al., 2016). Our second contribution is an approach to select the batch of evaluations for structured kernel learning-based HDBO.

Other Related Work.

In the past half century, a series of different acquisition functions was developed for sequential BO in relatively low dimensions (Kushner, 1964; Moc̆kus, 1974; Srinivas et al., 2012; Hennig & Schuler, 2012; Hernández-Lobato et al., 2014; Kawaguchi et al., 2015; Wang et al., 2016a; Kawaguchi et al., 2016; Wang & Jegelka, 2017). More recent developments address high dimensional BO by making assumptions on the latent structure of the function to be optimized, such as low-dimensional structure (Wang et al., 2016b; Djolonga et al., 2013) or additive structure of the function (Li et al., 2016; Kandasamy et al., 2015). Duvenaud et al. (2013) explicitly search over kernel structures.

While the aforementioned methods are sequential in nature, the growth of computing power has motivated settings where a batch of points is selected for observation at once (Contal et al., 2013; Desautels et al., 2014; González et al., 2016; Snoek et al., 2012; Wang et al., 2017). For example, the UCB-PE algorithm (Contal et al., 2013) exploits the fact that the posterior variance of a Gaussian process is independent of the observed function values. It greedily selects points with the highest posterior variance, and is able to update the variances without observations in between selections. Similarly, B-UCB (Desautels et al., 2014) greedily chooses points with the highest UCB score computed via the outdated function mean but up-to-date function variances. However, these methods may be too greedy in their selection, resulting in points that lie far from an optimum. More recently, Kathuria et al. (2016) try to resolve this issue by sampling the batch via a diversity-promoting distribution for better randomized exploration, while Wang et al. (2017) quantify the goodness of the batch with a submodular surrogate function that trades off quality and diversity.

2 Background

Let $f : \mathcal{X} \to \mathbb{R}$ be an unknown function that we aim to optimize over a compact set $\mathcal{X} \subseteq \mathbb{R}^D$. Within as few function evaluations as possible, we want to find $f(x^*) = \max_{x \in \mathcal{X}} f(x)$.

Following (Kandasamy et al., 2015), we assume a latent decomposition of the feature dimensions $[D] = \{1, \ldots, D\}$ into $M$ disjoint subspaces, namely, $\bigcup_{m=1}^{M} A_m = [D]$ and $A_i \cap A_j = \emptyset$ for all $i \neq j$. Further, $f$ can be decomposed into the following additive form: $f(x) = \sum_{m \in [M]} f_m(x^{A_m})$.

To make the problem tractable, we assume that each component $f_m$ is drawn independently from $\mathcal{GP}(0, k^{(m)})$ for all $m \in [M]$. The resulting $f$ will also be a sample from a GP: $f \sim \mathcal{GP}(\mu, k)$, where the priors are $\mu(x) = \sum_m \mu_m(x^{A_m})$ and $k(x, x') = \sum_m k^{(m)}(x^{A_m}, x'^{A_m})$. Let $\mathcal{D}_n = \{(x_t, y_t)\}_{t=1}^{n}$ be the data we observed from $f$, where $y_t \sim \mathcal{N}(f(x_t), \sigma^2)$. The log data likelihood for $\mathcal{D}_n$ is

$$\log p(\mathcal{D}_n \mid \{k^{(m)}\}, \{A_m\}) = -\tfrac{1}{2}\, y^\top (K_n + \sigma^2 I)^{-1} y - \tfrac{1}{2} \log |K_n + \sigma^2 I| - \tfrac{n}{2} \log 2\pi \qquad (2.1)$$

where $K_n = [k(x_i, x_j)]_{i, j \leq n}$ is the Gram matrix associated with $\mathcal{D}_n$, and $y = [y_1, \ldots, y_n]^\top$ are the concatenated observed function values. Conditioned on the observations $\mathcal{D}_n$, we can infer the posterior mean and covariance function of the function component $f_m$ to be

$$\mu_n^{(m)}(x^{A_m}) = k_n^{(m)}(x^{A_m})^\top (K_n + \sigma^2 I)^{-1} y,$$
$$k_n^{(m)}(x^{A_m}, x'^{A_m}) = k^{(m)}(x^{A_m}, x'^{A_m}) - k_n^{(m)}(x^{A_m})^\top (K_n + \sigma^2 I)^{-1} k_n^{(m)}(x'^{A_m}),$$

where $k_n^{(m)}(x^{A_m}) = [k^{(m)}(x_t^{A_m}, x^{A_m})]_{t \leq n}$.
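As a concrete illustration of these posterior formulas, the following NumPy sketch computes the posterior mean and covariance of one additive component. This is not the authors' code: the isotropic RBF kernel, noise level, and all function names here are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, bandwidth=1.0, scale=1.0):
    """Isotropic Gaussian (RBF) kernel between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-0.5 * sq / bandwidth ** 2)

def component_posterior(X, y, groups, m, Xq, noise=1e-2):
    """Posterior mean/covariance of the m-th additive component at points Xq.

    The full Gram matrix is the sum of per-group kernels, each evaluated on
    its group's dimensions only; the cross-covariance with component m uses
    only k^(m), matching the formulas above.
    """
    K = sum(rbf(X[:, g], X[:, g]) for g in groups)   # K_n = sum of group Grams
    Km_q = rbf(Xq[:, groups[m]], X[:, groups[m]])    # k_n^(m)(x) for each query
    Kinv = np.linalg.inv(K + noise * np.eye(len(X)))
    mean = Km_q @ Kinv @ y                           # posterior mean of f_m
    cov = rbf(Xq[:, groups[m]], Xq[:, groups[m]]) - Km_q @ Kinv @ Km_q.T
    return mean, cov
```

Because the joint covariance of a component value and the noisy observations is positive semidefinite, the returned posterior variances are nonnegative up to numerical error.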

We use regret to evaluate BO algorithms, both in the sequential and the batch selection case. For sequential selection, let $\tilde{r}_t = \max_{x \in \mathcal{X}} f(x) - f(x_t)$ denote the immediate regret at iteration $t$. We are interested in both the averaged cumulative regret $R_T = \frac{1}{T} \sum_t \tilde{r}_t$ and the simple regret $r_T = \min_{t \leq T} \tilde{r}_t$ for a total number of $T$ iterations. For batch evaluations, $\tilde{r}_t = \max_{x \in \mathcal{X}} f(x) - \max_{b \in [B]} f(x_{t,b})$ denotes the immediate regret obtained by the batch at iteration $t$; the averaged cumulative regret and the simple regret of the batch setting are defined analogously. We use the averaged cumulative regret in the bandit setting, where each evaluation of the function incurs a cost. If we simply want to optimize the function, we use the simple regret to capture the minimum gap between the best point found and the global optimum of the black-box function $f$. Note that the averaged cumulative regret upper bounds the simple regret.
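These regret quantities can be computed directly from the sequence of observed function values; the helper below is an illustrative sketch (not from the paper), where in the batch setting one would pass the best value of each batch.

```python
import numpy as np

def regrets(f_max, values):
    """Immediate, averaged cumulative, and simple regrets.

    `values` is the sequence f(x_1), ..., f(x_T); `f_max` is the global
    optimum (known for synthetic benchmarks)."""
    r = f_max - np.asarray(values, dtype=float)       # immediate regrets r_t
    avg_cumulative = np.cumsum(r) / np.arange(1, len(r) + 1)
    simple = np.minimum.accumulate(r)                 # best gap found so far
    return r, avg_cumulative, simple
```

Note that the averaged cumulative regret is an average of terms each at least the running minimum, so it upper bounds the simple regret, as stated above.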

3 Learning Additive Kernel Structure

We take a Bayesian view on the task of learning the latent structure of the GP kernel. The decomposition of the input space is learned simultaneously with the optimization as more and more data are observed. Our generative model draws mixing proportions $\theta \sim \mathrm{Dir}(\alpha)$. Each dimension $j \in [D]$ is assigned to one out of $M$ groups via the decomposition assignment variable $z_j \sim \mathrm{Multi}(\theta)$. The objective function is then $f(x) = \sum_{m \in [M]} f_m(x^{A_m})$, where $A_m = \{j : z_j = m\}$ is the set of support dimensions for function $f_m$, and each $f_m$ is drawn from a Gaussian process. Finally, given an input $x$, we observe $y \sim \mathcal{N}(f(x), \sigma^2)$. Figure 1 illustrates the corresponding graphical model.

Given the observed data $\mathcal{D}_n$, we obtain a posterior distribution over possible decompositions $z$ (and mixing proportions $\theta$) that we will include later in the BO process:

$$p(z, \theta \mid \mathcal{D}_n; \alpha) \propto p(\mathcal{D}_n \mid z)\, p(z \mid \theta)\, p(\theta; \alpha).$$

Marginalizing over $\theta$ yields the posterior distribution of the decomposition assignment

$$p(z \mid \mathcal{D}_n; \alpha) \propto p(\mathcal{D}_n \mid z)\, p(z; \alpha),$$

where $p(\mathcal{D}_n \mid z)$ is the data likelihood (2.1) for the additive GP given the fixed structure defined by $z$. We learn the posterior distribution for $z$ via Gibbs sampling, choose the decomposition among the samples that achieves the highest data likelihood, and then proceed with BO. The Gibbs sampler repeatedly draws coordinate assignments according to

$$p(z_j = m \mid z_{\neg j}, \mathcal{D}_n; \alpha) \propto p(\mathcal{D}_n \mid z_j = m, z_{\neg j})\, p(z_j = m \mid z_{\neg j}; \alpha) \propto e^{\phi_m},$$

where

$$\phi_m = \log p(\mathcal{D}_n \mid z_j = m, z_{\neg j}) + \log\left(|A_m^{\neg j}| + \alpha_m\right),$$

$A_m^{\neg j}$ is the set of dimensions assigned to group $m$ excluding $j$, and the likelihood term is computed with the Gram matrix associated with the observations under the decomposition obtained by setting $z_j = m$. We can use the Gumbel trick to efficiently sample from this categorical distribution. Namely, we sample a vector of i.i.d. standard Gumbel variables $\omega_m$ of length $M$, and then choose the sampled decomposition assignment $z_j = \arg\max_{m \in [M]} \phi_m + \omega_m$.
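The Gumbel trick itself is easy to sketch: adding i.i.d. standard Gumbel noise to unnormalized log-probabilities and taking the argmax yields an exact sample from the corresponding categorical distribution, with no need to normalize. The helper name below is illustrative.

```python
import numpy as np

def gumbel_max_sample(log_weights, rng):
    """Sample index m with probability proportional to exp(log_weights[m]),
    via the Gumbel-max trick: argmax_m (log w_m + Gumbel noise)."""
    g = rng.gumbel(size=len(log_weights))   # i.i.d. standard Gumbel draws
    return int(np.argmax(np.asarray(log_weights) + g))
```

This is convenient here because each $\phi_m$ is an unnormalized log-probability: the expensive normalization over all groups is avoided entirely.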


Figure 1: Graphical model for the structured Gaussian process; the kernel hyperparameters and the assignment variable $z$, which controls the decomposition of the input space, are latent.

With a Dirichlet process, we could make the model nonparametric and allow an infinite number of possible groups in the decomposition. Given that the number of input dimensions $D$ is fixed, we set $M = D$ in practice.

4 Diverse Batch Sampling

In real-world applications where function evaluations translate into time-intensive experiments, the typical sequential exploration strategy (observe one function value, update the model, then select the next observation) is undesirable. Batched Bayesian Optimization (BBO) (Azimi et al., 2010; Contal et al., 2013; Kathuria et al., 2016) instead selects a batch of $B$ observations to be made in parallel, and then updates the model with all $B$ observations simultaneously.

Extending this scenario to high dimensions raises two difficulties: (1) the acquisition function is expensive to optimize, and (2) it does not, by itself, sufficiently account for exploration. The additive kernel structure improves efficiency for (1). For batch selection (2), we need an efficient strategy that encourages observations that are both informative and non-redundant. Recent work (Contal et al., 2013; Kathuria et al., 2016) selects a point that maximizes the acquisition function, and adds additional batch points via a diversity criterion. In high dimensions, this diverse selection becomes expensive. For example, if each dimension has a finite number of possible values (while we use this discrete categorical domain to illustrate the batch setting, our proposed method is general and applicable to continuous box-constrained domains), the cost of sampling batch points via a Determinantal Point Process (DPP), as proposed in (Kathuria et al., 2016), grows exponentially with the number of dimensions. The same obstacle arises with the approach by Contal et al. (2013), where points are selected greedily. Thus, naïve adoption of these approaches in our setting would result in intractable algorithms. Instead, we propose a general approach that explicitly takes advantage of the structured kernel to enable relevant, non-redundant high-dimensional batch selection.

We describe our approach for a single decomposition sampled from the posterior; it extends to a distribution over decompositions by sampling a set of decompositions from the posterior and then sampling points for each decomposition individually. Given a decomposition $z$, we define a separate Determinantal Point Process (DPP) on each group of dimensions. A set $S$ of points in the subspace $\mathbb{R}^{|A_m|}$ is sampled with probability proportional to $\det(K_n^{(m)}(S))$, where $K_n^{(m)}$ is the posterior covariance matrix of the $m$-th group given $n$ observations, and $K(S)$ is the submatrix of $K$ with rows and columns indexed by $S$. Assuming the group sizes are upper-bounded by some constant, sampling from each such DPP individually implies an exponential speedup compared to using the full kernel.

Sampling vs. Greedy Maximization

The determinant $\det(K_n^{(m)}(S))$ measures diversity, and hence the DPP assigns higher probability to diverse subsets $S$. An alternative to sampling is to directly maximize the determinant. While this is NP-hard, a greedy strategy gives an approximate solution; it is used in (Kathuria et al., 2016), and in (Contal et al., 2013) as Pure Exploration (PE). We test this strategy in our experiments too. In the beginning, if the GP does not yet approximate the function well, greedy maximization may perform no better than a stochastic combination of coordinates, as we observe in Fig. 6.
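The greedy alternative can be sketched as follows: repeatedly add the candidate index that maximizes the log-determinant of the selected principal submatrix, which is equivalent to adding the point with maximum posterior variance conditioned on the points chosen so far. This is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def greedy_dpp(K, k):
    """Greedily pick k indices approximately maximizing det(K[S, S]),
    where K is a posterior covariance matrix over candidate points."""
    chosen = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(K.shape[0]):
            if i in chosen:
                continue
            idx = np.ix_(chosen + [i], chosen + [i])
            sign, logdet = np.linalg.slogdet(K[idx])
            if sign > 0 and logdet > best_logdet:   # keep the largest determinant
                best, best_logdet = i, logdet
        chosen.append(best)
    return chosen
```

On a covariance matrix built from one cluster of nearby points and one distant point, the greedy rule picks representatives far apart, which is exactly the diversity behavior the DPP rewards.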

Sample Combination

Now we have chosen a diverse subset $X_m$ of size $B$ for each group $A_m$. We need to combine these subspace points to obtain $B$ final batch query points in $\mathbb{R}^D$. A simple way is random combination: we sample one point from each $X_m$ uniformly at random without replacement, and concatenate the parts, one for each group, to obtain one point in $\mathbb{R}^D$. We repeat this procedure until we have $B$ points. This retains diversity across the batch of samples, since the samples are diverse within each group of features.

Besides this random combination, we can also combine samples greedily. We define a quality function for each group at time $t$, and combine samples to maximize this quality function. Concretely, for the first point, we concatenate the maximizers of the quality function from each group. We then remove these used parts from each $X_m$, and repeat the procedure until we have $B$ samples. In each iteration, the sample achieving the highest quality score gets selected, while diversity is retained.

Both selection strategies can be combined with a wide range of existing quality and acquisition functions.
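The two combination strategies can be sketched on per-group candidate lists as follows; the helper names and the use of plain Python lists for subspace points are illustrative assumptions.

```python
import numpy as np

def combine_random(group_samples, B, rng):
    """Randomly combine per-group subspace points into B full queries.

    `group_samples[m]` holds B candidates for group m; each full query
    takes one not-yet-used candidate from every group."""
    orders = [rng.permutation(B) for _ in group_samples]
    return [[group_samples[m][orders[m][b]] for m in range(len(group_samples))]
            for b in range(B)]

def combine_greedy(group_samples, B, quality_fns):
    """Combine greedily: each round concatenates the remaining
    highest-quality candidate of every group (quality_fns[m] scores
    a point of group m)."""
    remaining = [sorted(s, key=quality_fns[m], reverse=True)
                 for m, s in enumerate(group_samples)]
    return [[remaining[m][b] for m in range(len(remaining))] for b in range(B)]
```

Both variants use every per-group candidate exactly once across the batch, so the within-group diversity produced by the PE/DPP step is preserved in the final batch.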

Add-UCB-DPP-BBO

We illustrate the above framework with GP-UCB (Srinivas et al., 2012) as both the acquisition and quality functions. The upper and lower confidence bounds with parameter $\beta_t$ for group $m$ at time $t$ are

$$\mathrm{ucb}_t^{(m)}(x) = \mu_{t-1}^{(m)}(x) + \beta_t^{1/2}\, \sigma_{t-1}^{(m)}(x), \qquad \mathrm{lcb}_t^{(m)}(x) = \mu_{t-1}^{(m)}(x) - \beta_t^{1/2}\, \sigma_{t-1}^{(m)}(x), \qquad (4.1)$$

which combine the expected value $\mu_{t-1}^{(m)}$ of $f_m$ with its uncertainty $\sigma_{t-1}^{(m)}$. We set both the acquisition function and the quality function to be $\mathrm{ucb}_t^{(m)}$ for group $m$ at time $t$.

To ensure that we select points with high acquisition function values, we follow (Contal et al., 2013; Kathuria et al., 2016) and define a relevance region $\mathcal{R}_t^{(m)}$ for each group $m$ as the set of points whose upper confidence bound is at least the best lower confidence bound in that group:

$$\mathcal{R}_t^{(m)} = \left\{x : \mathrm{ucb}_t^{(m)}(x) \geq \max_{x'} \mathrm{lcb}_t^{(m)}(x')\right\}.$$

We then use $\mathcal{R}_t^{(m)}$ as the ground set to sample $X_m$ with PE/DPP. The full algorithm is shown in the appendix.

5 Empirical Results

We empirically evaluate our approach in two parts: First, we verify the effectiveness of using our Gibbs sampling algorithm to learn the additive structure of the unknown function, and then we test our batch BO for high dimensional problems with the Gibbs sampler. Our code is available at https://github.com/zi-w/Structural-Kernel-Learning-for-HDBBO.

5.1 Effectiveness of Decomposition Learning

We first probe the effectiveness of using the Gibbs sampling method described in Section 3 to learn the decomposition of the input space. Further experimental details, including a sensitivity analysis for the hyperparameter $\alpha$, can be found in the appendix.

Figure 2: The simple regrets ($r_T$) and the averaged cumulative regrets ($R_T$) when the input space decomposition is set via Known, NP, FP, PL-1, PL-2, and Gibbs on 2, 10, 20, and 50 dimensional synthetic additive functions. Gibbs achieves results comparable to Known. Comparing PL-1 and PL-2 shows that sampling more decomposition settings helps find a better decomposition, but the more principled Gibbs approach to learning the decomposition achieves much better performance than either PL-1 or PL-2.

Recovering Decompositions

First, we sample test functions from a known additive Gaussian process prior with zero mean and an isotropic Gaussian kernel for each function component. For each number of input dimensions, we randomly sample decomposition settings that have at least two groups in the decomposition and at most 3 dimensions in each group.

Table 1: Empirical posterior probability of any two dimensions correctly being grouped together by Gibbs sampling, for input dimensions $D \in \{5, 10, 20, 50, 100\}$ and numbers of observations $N \in \{50, 150, 250, 450\}$.

We set the burn-in period to 50 iterations, and the total number of Gibbs sampling iterations to 100. In Tables 1 and 2, we show two quantities that are closely related to the learned empirical posterior of the decompositions for different numbers $N$ of randomly sampled observed data points. Table 1 shows the probability of two dimensions being correctly grouped together in each iteration of Gibbs sampling after the burn-in period. Table 2 reports the probability of two dimensions being correctly separated in each iteration of Gibbs sampling after the burn-in period. The results show that the more data we observe, the more accurate the learned decompositions are. They also suggest that the Gibbs sampling procedure can converge to the ground-truth decomposition with enough data for relatively small numbers of dimensions; the higher the dimension, the more data we need to recover the true decomposition.
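These empirical posteriors are simple co-assignment frequencies over the post-burn-in Gibbs samples; a minimal sketch (illustrative helper, not the authors' code):

```python
import numpy as np

def pairwise_group_posterior(z_samples, i, j):
    """Empirical posterior probability that dimensions i and j share a group,
    estimated from Gibbs assignment samples drawn after burn-in.

    `z_samples` has shape (num_samples, D); entries are group labels z_j."""
    z = np.asarray(z_samples)
    return float(np.mean(z[:, i] == z[:, j]))
```

Averaging this quantity over dimension pairs that are together (respectively, apart) in the ground-truth decomposition yields the entries of Tables 1 and 2.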

Table 2: Empirical posterior probability of any two dimensions correctly being separated by Gibbs sampling, for input dimensions $D \in \{2, 5, 10, 20, 50, 100\}$ and numbers of observations $N \in \{50, 150, 250, 450\}$.

Effectiveness of Learning Decompositions for Bayesian Optimization

To verify the effectiveness of the learned decomposition for Bayesian optimization, we tested on 2, 10, 20, and 50 dimensional functions sampled from a zero-mean Add-GP with randomly sampled decomposition settings (at least two groups, at most 3 dimensions in each group) and an isotropic Gaussian kernel. Each experiment was repeated 50 times. An example of a 2-dimensional function component is shown in the appendix. For Add-GP-UCB, we used different settings of the UCB parameter for the lower- and higher-dimensional functions. We show parts of the results on averaged cumulative regret and simple regret in Fig. 2, and the rest in the appendix. We compare Add-GP-UCB with known additive structure (Known), no partitions (NP), a full partition with one dimension per group (FP), and the following methods of learning the decomposition: Gibbs sampling (Gibbs), randomly sampling the same number of decompositions as sampled by Gibbs and selecting the one with the highest data likelihood (PL-1), and randomly sampling 5 decompositions and selecting the one with the highest data likelihood (PL-2). The latter two methods are referred to as "partial learning" in (Kandasamy et al., 2015). The decomposition is learned every 50 iterations. Fig. 3 shows the improvement of learning decompositions with Gibbs over optimizing without partitions (NP).

Overall, the results show that Gibbs outperforms both partial learning methods, and for higher dimensions, Gibbs is sometimes even better than Known. Interestingly, similar results can be found in Fig. 3(c) of (Kandasamy et al., 2015), where decompositions other than the ground truth may give better simple regret. We conjecture that this is because Gibbs is able to explore more than Known, for two reasons:

  1. Empirically, Gibbs changes the decompositions across iterations, especially in the beginning. With fluctuating partitions, even exploitation leads to moving around, because the supposedly “good” points are influenced by the partition. The result is an implicit “exploration” effect that is absent with a fixed partition.

  2. Gibbs sometimes merges "true" parts into larger parts. The UCB parameter $\beta_t$ depends on the size of the part (as in (Kandasamy et al., 2015)). Larger parts hence lead to larger $\beta_t$ and thus more exploration.

Of course, more exploration is not always better, but Gibbs was able to find a good balance between exploration and exploitation, which leads to better performance. Our preliminary experiments indicate that one way to ensure that the ground-truth decomposition produces the best result is to tune $\beta_t$. Hyperparameter selection (such as choosing $\beta_t$) for BO is, however, very challenging and an active topic of research (e.g., (Wang et al., 2016a)).

Figure 3: Improvement made by learning the decomposition with Gibbs over optimizing without partitions (NP). (a) averaged cumulative regret; (b) simple regret. (c) averaged cumulative regret normalized by function maximum; (d) simple regret normalized by function maximum. Using decompositions learned by Gibbs continues to outperform BO without Gibbs.

Next, we test the decomposition learning algorithm on a real-world function, which returns the distance between a designated goal location and two objects being pushed by two robot hands, whose trajectories are determined by 14 parameters specifying location, rotation, velocity, moving direction, etc. This function is implemented with a physics engine, the Box2D simulator (Catto, 2011). We use Add-GP-UCB with different ways of setting the additive structure to tune the parameters for the robot hands so as to push the objects closer to the goal. The regrets are shown in Fig. 4. We observe that learning the decomposition with Gibbs dominates all existing alternatives, including partial learning. Since the function we test here is composed of the distances to two objects, there could be some underlying additive structure in certain regions of the input space, e.g., when the two robot hands are relatively distant from each other so that each hand only impacts one of the objects. Hence, it is possible for Gibbs to learn a good underlying additive structure and perform effective BO with the structures it learns.

Figure 4: Simple regret of tuning the 14 parameters for a robot pushing task. Learning decompositions with Gibbs is more effective than partial learning (PL-1, PL-2), no partitions (NP), or fully partitioned (FP). Learning decompositions with Gibbs helps BO to find a better point for this tuning task.

5.2 Diverse Batch Sampling

Figure 5: Scaled simple regrets ($r_T$) and scaled averaged cumulative regrets ($R_T$) on synthetic functions of various dimensions when the ground-truth decomposition is known. The batch sampling methods (Batch-UCB-PE, Batch-UCB-DPP, Batch-UCB-PE-Fnc and Batch-UCB-DPP-Fnc) perform comparably well and outperform random sampling (Rand) by a large gap.

Next, we probe the effectiveness of batch BO in high dimensions. In particular, we compare variants of the Add-UCB-DPP-BBO approach outlined in Section 4, and a baseline:

  • Rand: All batch points are chosen uniformly at random from the domain $\mathcal{X}$.

  • Batch-UCB-*, where * ∈ {PE, DPP}: All acquisition functions are UCB (Eq. 4.1). Exploration is done via PE or DPP with the posterior covariance kernels for each group. Combination is via sampling without replacement.

  • *-Fnc, where * is one of the above UCB variants: All quality functions are also UCBs, and combination is done by maximizing the quality functions.

A direct application of existing batch selection methods is very inefficient in the high-dimensional settings considered here, where they differ substantially, algorithmically, from our approach that exploits decompositions. Hence, we only compare to uniform sampling as a baseline.

Effectiveness

We tested on synthetic functions of various dimensions, sampled the same way as in Section 5.1; here we assume the ground-truth decomposition of the feature space is known. Since Rand performs the worst, we show the averaged cumulative regret and simple regret of all methods relative to Rand in Fig. 5; absolute regret values are shown in the appendix. Each experiment was repeated multiple times, and all experiments used the same batch size and UCB parameter setting. All diverse batch sampling methods perform comparably well and far better than Rand, although slight differences exist: in lower dimensions, Batch-UCB-PE-Fnc performs among the best, while in higher dimensions, Batch-UCB-DPP-Fnc performs better than (or comparably to) all other variants. We will see a larger performance gap in the real-world experiments below, showing that biasing the combination towards higher quality functions while retaining diversity across the batch provides a better exploration-exploitation trade-off.

For a real-data experiment, we tested the diverse batch sampling algorithms for BBO on the Walker function, which returns the walking speed of a three-link planar bipedal walker implemented in Matlab (Westervelt et al., 2007). We tune 25 parameters that may influence the walking speed, including 3 sets of 8 parameters for the ODE solver and 1 parameter specifying the initial velocity of the stance leg. We discretize each dimension into a finite number of values, resulting in a function domain far too large for existing batch sampling techniques to handle efficiently. We learn the additive structure via Gibbs sampling and sample batches accordingly. To further improve efficiency, we limit the maximum size of each group in the learned decomposition. The regrets for all methods are shown in Fig. 6. Again, all diverse batch sampling methods outperform Rand by a large gap. Moreover, Batch-UCB-DPP-Fnc is slightly better than the other variants, suggesting that selection by quality functions is useful.

Figure 6: The simple regrets ($r_T$) of batch sampling methods on the Walker data. The four diverse batch sampling methods (Batch-UCB-PE, Batch-UCB-DPP, Batch-UCB-PE-Fnc and Batch-UCB-DPP-Fnc) outperform random sampling (Rand) by a large gap. Batch-UCB-DPP-Fnc performs the best among the four diverse batch sampling methods.

Batch Sizes

Finally, we show how the batch size affects the performance of the proposed methods. We test the algorithms on the 14-dimensional robot pushing task with batch sizes 5 and 10. The regrets are shown in Fig. 7. With larger batches, the differences between the batch selection approaches become more pronounced. In both settings, Batch-UCB-DPP-Fnc performs slightly better than the other variants, in particular with the larger batch size.

Figure 7: Simple regret when tuning the 14 parameters of a robot pushing task with batch size 5 and 10. Learning decompositions with Gibbs sampling and diverse batch sampling are employed simultaneously. In general, Batch-UCB-DPP-Fnc performs a bit better than the other four diverse batch sampling variants. The gap increases with batch size.

6 Conclusion

In this paper, we propose two novel solutions for high dimensional BO: inferring latent structure, and combining it with batch Bayesian Optimization. The experimental results demonstrate that the proposed techniques are effective at optimizing high-dimensional black-box functions. Moreover, their gain over existing methods increases as the dimensionality of the input grows. We believe that these results have the potential to enable the increased use of Bayesian optimization for challenging black-box optimization problems in machine learning that typically involve a large number of parameters.

Acknowledgements

We gratefully acknowledge support from NSF CAREER award 1553284, NSF grants 1420927 and 1523767, from ONR grant N00014-14-1-0486, and from ARO grant W911NF1410433. We thank MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing computational resources. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

References

  • Azimi et al. (2010) Azimi, Javad, Fern, Alan, and Fern, Xiaoli Z. Batch Bayesian optimization via simulation matching. In Advances in Neural Information Processing Systems (NIPS), 2010.
  • Catto (2011) Catto, Erin. Box2D, a 2D physics engine for games. http://box2d.org, 2011.
  • Contal et al. (2013) Contal, Emile, Buffoni, David, Robicquet, Alexandre, and Vayatis, Nicolas. Parallel Gaussian process optimization with upper confidence bound and pure exploration. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 225–240. Springer, 2013.
  • Desautels et al. (2014) Desautels, Thomas, Krause, Andreas, and Burdick, Joel W. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research, 2014.
  • Djolonga et al. (2013) Djolonga, Josip, Krause, Andreas, and Cevher, Volkan. High-dimensional Gaussian process bandits. In Advances in Neural Information Processing Systems (NIPS), 2013.
  • Duvenaud et al. (2013) Duvenaud, David, Lloyd, James Robert, Grosse, Roger, Tenenbaum, Joshua B., and Ghahramani, Zoubin. Structure discovery in nonparametric regression through compositional kernel search. In International Conference on Machine Learning (ICML), 2013.
  • González et al. (2016) González, Javier, Dai, Zhenwen, Hennig, Philipp, and Lawrence, Neil D. Batch Bayesian optimization via local penalization. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
  • Hennig & Schuler (2012) Hennig, Philipp and Schuler, Christian J. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13:1809–1837, 2012.
  • Hernández-Lobato et al. (2014) Hernández-Lobato, José Miguel, Hoffman, Matthew W, and Ghahramani, Zoubin. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • Kandasamy et al. (2015) Kandasamy, Kirthevasan, Schneider, Jeff, and Poczos, Barnabas. High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning (ICML), 2015.
  • Kathuria et al. (2016) Kathuria, Tarun, Deshpande, Amit, and Kohli, Pushmeet. Batched Gaussian process bandit optimization via determinantal point processes. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • Kawaguchi et al. (2015) Kawaguchi, Kenji, Kaelbling, Leslie Pack, and Lozano-Pérez, Tomás. Bayesian optimization with exponential convergence. In Advances in Neural Information Processing Systems (NIPS), 2015.
  • Kawaguchi et al. (2016) Kawaguchi, Kenji, Maruyama, Yu, and Zheng, Xiaoyu. Global continuous optimization with error bound and fast convergence. Journal of Artificial Intelligence Research, 56(1):153–195, 2016.
  • Kushner (1964) Kushner, Harold J. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Fluids Engineering, 86(1):97–106, 1964.
  • Li et al. (2016) Li, Chun-Liang, Kandasamy, Kirthevasan, Póczos, Barnabás, and Schneider, Jeff. High dimensional Bayesian optimization via restricted projection pursuit models. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
  • Moc̆kus (1974) Moc̆kus, J. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, 1974.
  • Snoek et al. (2012) Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems (NIPS), 2012.
  • Srinivas et al. (2012) Srinivas, Niranjan, Krause, Andreas, Kakade, Sham M, and Seeger, Matthias W. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 2012.
  • Wang & Jegelka (2017) Wang, Zi and Jegelka, Stefanie. Max-value entropy search for efficient Bayesian optimization. In International Conference on Machine Learning (ICML), 2017.
  • Wang et al. (2016a) Wang, Zi, Zhou, Bolei, and Jegelka, Stefanie. Optimization as estimation with Gaussian processes in bandit settings. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016a.
  • Wang et al. (2017) Wang, Zi, Jegelka, Stefanie, Kaelbling, Leslie Pack, and Lozano-Pérez, Tomás. Focused model-learning and planning for non-Gaussian continuous state-action systems. In International Conference on Robotics and Automation (ICRA), 2017.
  • Wang et al. (2016b) Wang, Ziyu, Hutter, Frank, Zoghi, Masrour, Matheson, David, and de Freitas, Nando. Bayesian optimization in a billion dimensions via random embeddings. Journal of Artificial Intelligence Research, 55:361–387, 2016b.
  • Westervelt et al. (2007) Westervelt, Eric R., Grizzle, Jessy W., Chevallereau, Christine, Choi, Jun Ho, and Morris, Benjamin. Feedback control of dynamic bipedal robot locomotion, volume 28. CRC Press, 2007.

Appendix A Add-UCB-DPP-BBO Algorithm

We present four variants of Add-UCB-DPP-BBO in Algorithm 1. The algorithmic framework is general: one can plug in acquisition and quality functions other than UCB to obtain different algorithms.

  Input: , , , , ,
  Observe function values of points chosen randomly from
  Get the initial decomposition of feature space via Gibbs sampling and get corresponding ’s
  for  to  do
     if  then
        Learn the decomposition via Gibbs sampling and get corresponding ’s
     end if
     Choose by maximizing UCB (acquisition function) for each group and combine them
     for  to  do
        Compute and
        Sample via PE or DPP with kernel
     end for
     Combine either randomly or by maximizing UCB (quality function) without replacement to get
     Observe (noisy) function values for the points in the selected batch.
  end for
Algorithm 1 Add-UCB-DPP-BBO Variants
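To make the per-group selection step concrete, the following is a minimal Python sketch of how a single query point is formed by maximizing UCB independently within each group of an additive decomposition and concatenating the maximizers. The function name and the toy posterior stand-ins are ours, not the paper's; a real implementation would use the per-group GP posterior mean and standard deviation.

```python
import numpy as np

def add_ucb_select(groups, mean_fns, std_fns, candidates, beta=2.0):
    """Select one point by maximizing UCB independently in each group
    of dimensions, then concatenating the per-group maximizers.

    groups     : list of lists of dimension indices (a partition of [0, D)).
    mean_fns   : per-group posterior mean functions (toy stand-ins here).
    std_fns    : per-group posterior std functions.
    candidates : (n, D) array of candidate points.
    """
    D = candidates.shape[1]
    x = np.empty(D)
    for g, mu, sigma in zip(groups, mean_fns, std_fns):
        sub = candidates[:, g]                      # restrict to this group's dims
        ucb = mu(sub) + np.sqrt(beta) * sigma(sub)  # per-group UCB scores
        x[g] = sub[np.argmax(ucb)]                  # keep the group's maximizer
    return x
```

Because the acquisition decomposes over groups, each group's maximization runs over a low-dimensional slice of the candidates, which is what makes the additive structure computationally useful.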

Appendix B Additional experiments

d \ N   50   150   250   350   450
5
10
20
50
100
Table 3: Rand Index of the decompositions computed by Gibbs sampling.
d \ N   50   150   250   350   450
5
10
20
50
100
Table 4: Empirical posterior of any two dimensions correctly being grouped together by Gibbs sampling.
d \ N   50   150   250   350   450
2
5
10
20
50
100

Table 5: Empirical posterior of any two dimensions correctly being separated by Gibbs sampling.

In this section, we provide more details on our experiments.

B.1 Optimization of the Acquisition Functions

We decompose the acquisition function into sub-acquisition functions, one for each group, and optimize these separately. We randomly sample 10,000 points in the low-dimensional subspace and then use the one with the best value as the starting point for gradient descent within the search box. In practice, we observe that this approach optimizes low-dimensional functions very well. As the number of dimensions grows, the known difficulties of high-dimensional BO (and of global non-convex optimization in general) arise.
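A minimal sketch of this two-stage optimizer, assuming a vectorized acquisition function and a box-constrained domain. All names are illustrative, and the paper does not specify step sizes or stopping rules, so the finite-difference gradient step and learning rate below are simplifying assumptions:

```python
import numpy as np

def maximize_acquisition(acq, lo, hi, n_samples=10000, steps=100, lr=0.01, rng=None):
    """Random-search warm start followed by projected gradient ascent.

    acq    : callable mapping an (n, d) array to n acquisition values.
    lo, hi : box bounds, arrays of shape (d,).
    """
    rng = np.random.default_rng(rng)
    d = len(lo)
    # Stage 1: uniform random candidates; keep the best one.
    cands = rng.uniform(lo, hi, size=(n_samples, d))
    x = cands[np.argmax(acq(cands))]
    # Stage 2: central-difference gradient ascent, projected onto the box.
    eps = 1e-6
    for _ in range(steps):
        grad = np.array([
            (acq((x + eps * e)[None]) - acq((x - eps * e)[None]))[0] / (2 * eps)
            for e in np.eye(d)
        ])
        x = np.clip(x + lr * grad, lo, hi)
    return x
```

The random stage makes the local refinement reasonably robust to the multimodality of typical acquisition surfaces, which is why it works well in low dimensions but degrades as dimensionality grows.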

B.2 Effectiveness of Decomposition Learning

Recovering Decompositions

In Table 3, Table 4 and Table 5, we show three quantities that reflect the quality of the learned decompositions. The first quantity, reported in Table 3, is the Rand Index of the decompositions learned by Gibbs sampling. The second quantity, reported in Table 4, is the empirical probability that two dimensions are correctly grouped together in each iteration of Gibbs sampling after the burn-in period. The third quantity, reported in Table 5, is the empirical probability that two dimensions are correctly separated in each iteration of Gibbs sampling after the burn-in period.
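For concreteness, the Rand Index between a learned and a ground-truth decomposition can be computed as follows. This is an illustrative helper, not code from the paper; it counts pairs of dimensions on which the two partitions agree (grouped together in both, or separated in both) over all pairs:

```python
from itertools import combinations

def rand_index(part_a, part_b):
    """Rand Index between two partitions of the same set of dimensions.

    Each partition is a list of groups (lists of dimension indices).
    Returns the fraction of dimension pairs on which the partitions agree.
    """
    def labels(part):
        # Map each dimension to the index of its group.
        return {d: g for g, group in enumerate(part) for d in group}

    la, lb = labels(part_a), labels(part_b)
    dims = sorted(la)
    agree = sum(
        (la[i] == la[j]) == (lb[i] == lb[j])
        for i, j in combinations(dims, 2)
    )
    return agree / (len(dims) * (len(dims) - 1) / 2)
```

A value of 1 means the learned partition matches the ground truth exactly; chance-level agreement depends on the group sizes.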

Sensitivity Analysis for α

Empirically, we found that the quality of the learned decompositions is not very sensitive to the scale of α (see Table 6), because the log data likelihood plays a much more important role than the prior term involving α when the number of groups is less than the total number of dimensions. The reported results correspond to α = 1 for all the partitions.

α \ N   50   150   250   350   450


Table 6: Rand Index of the decompositions learned by Gibbs sampling for different values of α.

BO for Synthetic Functions

We show an example of a 2-dimensional function component of the additive synthetic function in Fig. 8. Because of the numerous local maxima, it is very challenging to find the global optimum even in 2 dimensions, let alone to maximize an additive sum of such components while only observing their sum. Fig. 10 shows the full simple-regret and cumulative-regret results for the synthetic functions, comparing Add-GP-UCB with the known additive structure (Known), no partition (NP), a full partition with one dimension per group (FP), and the following methods of learning the partition: Gibbs sampling (Gibbs), randomly sampling the same number of partitions as sampled by Gibbs and selecting the one with the highest data likelihood (PL-1), and randomly sampling 5 partitions and selecting the one with the highest data likelihood (PL-2). The decomposition was learned every 50 iterations, starting from the first iteration. When a new partition is learned from the newly observed data (e.g., at iterations 100 and 150), the simple regret visibly improves.

Figure 8: An example of a 2 dimensional function component of the synthetic function.
Figure 9: Simple regret of tuning the 25 parameters for optimizing the walking speed of a bipedal robot. We use the vanilla Gibbs sampling algorithm (Gibbs) and a Gibbs sampling algorithm with the partition size limit set to 2 (Gibbs-L), and compare with partial learning (PL-1, PL-2), no partition (NP), and full partition (FP). Gibbs-L performed slightly better than PL-2 and FP. This function does not have an additive structure; as a result, Gibbs does not perform well on it because the sizes of the groups it learns tend to be large.
Figure 10: The simple regrets and the averaged cumulative regrets for Known (ground-truth partition given), Gibbs (partition learned via Gibbs sampling), PL-1 (randomly sample the same number of partitions as sampled by Gibbs and select the one with the highest data likelihood), PL-2 (randomly sample 5 partitions and select the one with the highest data likelihood), FP (full partition, one dimension per group) and NP (no partition) on 10-, 20-, and 50-dimensional functions. Gibbs achieved results comparable to Known. Comparing PL-1 and PL-2 shows that sampling more partitions does help to find a better partition, but the more principled Gibbs sampling approach achieves much better performance than both PL-1 and PL-2.

BO for Real-World Functions

In addition to the 14-parameter robot pushing task, we tested on the walker function, which returns the walking speed of a three-link planar bipedal walker implemented in Matlab (Westervelt et al., 2007). We tune 25 parameters that may influence the walking speed, including 3 sets of 8 parameters for the ODE solver and 1 parameter specifying the initial velocity of the stance leg. To our knowledge, this function does not have an additive structure. The regrets of each decomposition learning method are shown in Fig. 9. In addition to Gibbs, we test learning the decomposition via constrained Gibbs sampling (Gibbs-L), where the maximum size of each group of dimensions may not exceed 2. Because the function does not have additive structure, Gibbs performed poorly, since it groups together many dimensions of the input; as a result, its performance is similar to that of no partition (NP). However, Gibbs-L appears to learn a good decomposition under the group size limit, and manages to achieve a slightly lower regret than the other methods. Gibbs-L, PL-1, PL-2 and FP all performed relatively well for this function, indicating that imposing an additive structure may benefit the BO procedure even if the function itself is not additive.

B.3 Diverse Batch Sampling

In Fig. 11, we show the full results of the simple and the cumulative regrets on the synthetic functions described in Section 5.2 of the paper.

Figure 11: The simple regrets and the averaged cumulative regrets on synthetic functions with various dimensions when the ground truth partition is known. Four batch sampling methods (Batch-UCB-PE, Batch-UCB-DPP, Batch-UCB-PE-Fnc and Batch-UCB-DPP-Fnc) perform comparably well and outperform random sampling (Rand) by a large gap.
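As a rough illustration of diverse batch selection, the sketch below greedily picks the candidate with the largest posterior variance given the points already in the batch, the PE-style greedy approximation to sampling from a DPP whose kernel is the posterior covariance. The RBF kernel, unit prior variance, and function name are simplifying assumptions of ours, not the paper's implementation:

```python
import numpy as np

def greedy_diverse_batch(candidates, batch_size, lengthscale=1.0, noise=1e-6):
    """Greedily build a diverse batch: at each step, pick the candidate
    with the largest GP posterior variance conditioned on the points
    chosen so far (kernel choice here is illustrative)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    chosen = []
    for _ in range(batch_size):
        if not chosen:
            # Prior variance of a unit-scale RBF kernel is 1 everywhere.
            var = np.full(len(candidates), 1.0)
        else:
            S = candidates[chosen]
            K = rbf(S, S) + noise * np.eye(len(S))
            k = rbf(candidates, S)
            # Posterior variance: 1 - k K^{-1} k^T, per candidate.
            var = 1.0 - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)
        var[chosen] = -np.inf  # forbid repeats
        chosen.append(int(np.argmax(var)))
    return chosen
```

Because conditioning on a chosen point collapses the variance in its neighborhood, successive picks are pushed apart, which is the diversity effect the DPP-based methods exploit.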