Ordinal Bayesian Optimisation

12/05/2019
by   Victor Picheny, et al.

Bayesian optimisation is a powerful tool to solve expensive black-box problems, but fails when the stationary assumption made on the objective function is strongly violated, which is the case in particular for ill-conditioned or discontinuous objectives. We tackle this problem by proposing a new Bayesian optimisation framework that only considers the ordering of variables, both in the input and output spaces, to fit a Gaussian process in a latent space. By doing so, our approach is agnostic to the original metrics on the original spaces. We propose two algorithms, respectively based on an optimistic strategy and on Thompson sampling. For the optimistic strategy we prove an optimal performance under the measure of regret in the latent space. We illustrate the capability of our framework on several challenging toy problems.



1 Introduction

We address typical Bayesian optimisation (BO) problems of the form:

min_{x ∈ 𝒳} f(x),

where 𝒳 ⊂ ℝ^d is usually a bounded hyperrectangle and f is a scalar-valued objective function, available only through noisy observations y_i = f(x_i) + ε_i.

BO is established as a strong competitor among derivative-free optimisation approaches, in particular for computationally expensive (low data regime) problems. In BO, non-parametric Gaussian processes (GPs) provide flexible and fast-to-evaluate surrogates of the objective functions. Sequential design decisions, so-called acquisitions, judiciously balance exploration and exploitation in search for global optima, leveraging the uncertainty estimates provided by the GP posterior distributions (see

Mockus et al. (1978); Jones et al. (1998) for early works or Shahriari et al. (2015) for a recent review).

One of the weaknesses of vanilla BO lies in the underlying assumption that the objective function is a realisation of a GP: when this assumption is strongly violated, the GP model is weakly predictive and BO becomes inefficient. Two classical examples where BO fails are ill-conditioned problems, when the objective function has strong variations on the domain boundaries but is very flat in its central region (or conversely), and non-Lipschitz objectives, for instance with local discontinuities. High conditioning is typical in “exploratory” optimisation problems, when the parameter space is initially chosen very large. Discontinuities are frequent in computational fluid dynamics problems for instance, where a small change in the parameters results in a change of physics (e.g. laminar to turbulent flow), which creates a discontinuity in the objective.

One remedy to this problem is to add a warping function, either on the output space (Snelson et al., 2004) or on the input space (Snoek et al., 2014; Marmin et al., 2018). However, warping usually applies only to continuous functions, and relies on parametric forms, which need to be chosen beforehand and may not adapt to the problem at hand. A popular alternative is to rely on hierarchical partitions of the input space (assuming stationarity only within each part): see for instance Gramacy and Lee (2008); Fox and Dunson (2012), but those approaches are in general efficient only in low dimensions and with relatively large datasets.

In this work, we propose to apply an “ordinal” warping to both input and output data, that is, a transformation that only preserves the ordering of the variables. A classical (latent) GP model is then fitted to the transformed dataset. In the output space, this amounts to performing ordinal regression using a variational formulation (Chu and Ghahramani, 2005). In the input space, we show that this amounts to defining a large optimisation problem, which can be solved using standard descent algorithms.

We then study how this model can be used to perform Bayesian optimisation, with minimal use of the original problem metrics. We show that this can be achieved by combining classical acquisition schemes such as upper confidence bound or Thompson sampling and tree search. Although BO has already been applied to problems with qualitative objectives (González et al., 2017), we believe that our approach is the first that is agnostic to any metric in the input and the output spaces.

There are a small number of works characterizing the performance of BO on GPs under optimistic acquisition functions. All of these works, however, consider well-behaved GPs where, in particular, the so-called information gain is nicely bounded (see Sec. 4 for more detail). In Srinivas et al. (2010), an upper bound on cumulative regret was shown for GP-UCB, a confidence-bound-based approach, in terms of an upper bound on the information gain. Chowdhury and Gopalan (2017) and Javidi and Shekhar (2018) improved the constants in the regret of confidence-based policies. Inspired by these works, we characterize the regret performance of the proposed confidence-bound-based policy in the latent space.

Our approach is illustrated on several toy problems, showing that it is able to optimise severely ill-conditioned and discontinuous functions.

2 Model

2.1 Definitions and main hypothesis

Ordinal warping for discrete sets

We propose to use as a warping any transformation ω that preserves the ordering of a finite set of n real values. Without loss of generality, given a set S = {s_1, …, s_n}, we can write such a transformation in the form:

ω(s_i) = Σ_{j=1}^{r(s_i)} δ_j,   (1)

where r denotes the rank function, r(s_i) = #{j : s_j ≤ s_i}, and the δ_j are some strictly positive values. It is straightforward that ω is a bijection from S to ω(S); moreover ω preserves the ordering of S, and choosing δ_j = 1 for all j results in the rank transformation: ω(s_i) = r(s_i).
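As a concrete illustration, Equation (1) can be sketched in a few lines of numpy. The function name, the default unit increments, and the assumption of distinct values are ours, not the paper's:

```python
import numpy as np

def ordinal_warp(s, deltas=None):
    """Ordinal warping of a finite set of distinct real values.

    Each value is mapped to the cumulative sum of strictly positive
    increments up to its rank, so only the ordering of `s` matters.
    Unit increments recover the rank transformation.
    """
    s = np.asarray(s, dtype=float)
    ranks = np.argsort(np.argsort(s))        # 0-based ranks (distinct values)
    if deltas is None:
        deltas = np.ones(len(s))             # unit increments -> rank transform
    cum = np.cumsum(np.asarray(deltas, dtype=float))
    return cum[ranks]                        # rank r maps to delta_1 + ... + delta_{r+1}

print(ordinal_warp([10.0, -3.0, 0.5, 2.0]))  # [4. 1. 2. 3.]
```

Any strictly positive choice of increments yields another valid warping with the same ordering, which is exactly the freedom exploited when fitting the latent GP.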

Latent GP model

Let us assume that we have a set of n observations of the form (x_i, y_i). We define one ordinal warping ω_y for the outputs (with underlying set {y_1, …, y_n}) and d warpings ω_1, …, ω_d for the dimensions of x (dimension j with underlying set {x_1^j, …, x_n^j}). For each i, we denote:

z_i = (ω_1(x_i^1), …, ω_d(x_i^d)),   (2)
u_i = ω_y(y_i).   (3)

The overall idea is that the x_i's and y_i's can be mapped, respectively through the input and output warpings, to z_i's and u_i's such that:

u_i = g(z_i),   (4)

with g ∼ GP(0, k) and k a stationary covariance kernel, for instance of the exponential or Matérn classes (Williams and Rasmussen, 2006). The input warping is illustrated in Figure 1.

Figure 1: Original and warped spaces. Notice how the ordering is preserved, both from the inputs to their latent images and from the outputs to theirs.

Intuitively, such an approach allows us to tackle the problem where the user only returns pairwise comparisons, such as "y_1 is smaller than y_2 and larger than y_3", which makes it insensitive to scales. By considering a stationary GP for g, we report all the modelling difficulty to the warping step. Note that although such warpings may be very difficult to infer over continuous spaces, we only consider here discrete sets as in Equation 1.

2.2 Learning and using variational inference

In the classical GP regression framework, observations y_i are assumed to correspond to evaluations of a latent GP corrupted by Gaussian noise, y_i = g(x_i) + ε_i. By doing so, the likelihood function is Gaussian, which allows one to apply the classical Bayes' rule and obtain a posterior distribution on g. As all quantities are Gaussian, the posterior distribution can be expressed in closed form.

In the non-conjugate case, exact computation is not tractable and one must resort to approximations. Variational Inference (VI), which consists in minimising the Kullback-Leibler divergence between the approximate and the true posterior, has proven to be an effective approach in this context. We show in the following that applying ordinal warping to the outputs amounts to choosing a non-conjugate likelihood. We then express the corresponding classical VI problem formulation, which amounts to optimising a lower bound on the marginal log-likelihood. Finally, we show how we can incorporate the parameters of the input warping into the VI problem and learn the warping parameters along with the VI ones.

Output ordinal warping using ordinal regression likelihood

We follow the model of Chu and Ghahramani (2005), which specifies a bin for each observation. Define thresholds b_0 = −∞, b_n = +∞, and arbitrary real values b_1 < … < b_{n−1}. Assuming that the observations are in increasing order (y_1 ≤ … ≤ y_n), the likelihood functions can be expressed as:

p(y_i | g_i) = Φ((b_i − g_i)/σ) − Φ((b_{i−1} − g_i)/σ),   (5)

with Φ the standard Gaussian cumulative distribution function. The term σ corresponds to a (small) noise in the latent functions.

Elbo

We now follow the classical Variational GP (VGP) framework (Titsias, 2009; Hensman et al., 2013). To compute the data likelihood, we only need the marginal posterior distribution of the GP at the (warped) inputs z_1, …, z_n, denoted g = (g(z_1), …, g(z_n)). We propose an approximate posterior in which we directly parametrise the distribution of function values at the inputs as a multivariate normal, q(g) = N(m, S), where m and S are optimisation parameters.

Conditioned on q(g) we obtain the approximate posterior GP, whose mean and covariance can be calculated in closed form:

μ(z) = k_z^T K^{−1} m,
c(z, z′) = k(z, z′) − k_z^T K^{−1} (K − S) K^{−1} k_{z′},

where k_z = (k(z, z_1), …, k(z, z_n))^T and K = (k(z_i, z_j))_{i,j}.

With this approximation in place we can set up our model's optimisation objective, a lower bound on the log marginal likelihood (ELBO, Hoffman et al., 2013), equal to

ELBO = Σ_{i=1}^{n} E_{q(g_i)}[log p(y_i | g_i)] − KL(q(g) ‖ p(g)),

where q(g_i) = N(m_i, S_{ii}) and p(y_i | g_i) is the ordinal likelihood (5).

In practice, the expectation cannot be evaluated analytically, but as the likelihood factorises over data points it reduces to one-dimensional integrals, which can easily be computed numerically using Gauss–Hermite quadrature.
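This quadrature step can be sketched as follows, assuming an ordinal bin (lo, hi), a small latent noise sigma, and a Gaussian marginal N(mean, var) for the latent value; all helper names are illustrative rather than from the paper or GPflow:

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def std_norm_cdf(x):
    """Standard Gaussian CDF, elementwise."""
    return 0.5 * (1.0 + _erf(np.asarray(x, dtype=float) / np.sqrt(2.0)))

def ordinal_log_lik(g, lo, hi, sigma):
    """log p(y | g) for the ordinal bin (lo, hi), cf. Equation (5)."""
    p = std_norm_cdf((hi - g) / sigma) - std_norm_cdf((lo - g) / sigma)
    return np.log(np.maximum(p, 1e-300))     # guard against log(0)

def expected_log_lik(mean, var, lo, hi, sigma, order=20):
    """E_{g ~ N(mean, var)}[log p(y | g)] by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(order)
    g = mean + np.sqrt(2.0 * var) * x        # change of variables for N(mean, var)
    return float(np.sum(w * ordinal_log_lik(g, lo, hi, sigma)) / np.sqrt(np.pi))
```

The one-dimensional integral for each data point is approximated by a 20-point rule, ample for this smooth integrand.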

Optimisation

Now, the ELBO is optimised with respect to three sets of variables: a) the variational parameters m and S; b) the likelihood parameters, i.e. the thresholds and the latent noise; c) the input warping parameters, each warping being defined by a set of strictly positive increments, one of which can be fixed arbitrarily.

Note that since the distances between points are set by the input warping increments and the amplitude of the response is set through the output warping, we can define g using a stationary kernel with unit variance and lengthscale.

To ease resolution in practice, we restrict S to be diagonal and add a set of boundary constraints when solving the ELBO optimisation problem. Each warping increment is bounded between a (small) strictly positive value and a maximum. As we use a unit kernel variance and lengthscale, we can set those maxima to values related to, respectively, the amplitude of the GP and a distance beyond which the covariance between two points is close to zero. Note that during BO, these bounds can be reduced to limit the maximum variation of the warped observations between two consecutive steps, as we detail in Section 4. In our implementation, this problem is solved by stochastic gradient descent, leveraging automatic differentiation tools. The bound constraints are handled using logistic transformations.

3 Bayesian optimisation

3.1 Acquisitions on latent and original spaces

Standard BO algorithms work as follows. An initial set of experiments is generated, typically using a space-filling design (Pronzato and Müller, 2012) over 𝒳, and a GP model is trained on this dataset. Then, an acquisition rule is applied repeatedly, which consists of evaluating f at the input that maximises an acquisition function. Every time a new data point is acquired, the GP posterior distribution is updated to account for it.

The acquisition function is based on the GP distribution and balances between exploration (high GP variance) and exploitation (low GP mean). Typical acquisitions include Expected improvement (EI, Jones et al., 1998) and upper confidence bound (UCB, Srinivas et al., 2010).

In our case, given that g is a stationary GP in the latent space, it is direct to predict the posterior distribution at a new latent value z. Hence, classical acquisition functions apply and it is straightforward to select a latent point to acquire.

However, the mapping is only defined from the observed inputs to their latent images, and it is not possible to find the x that corresponds to a new latent point z without either 1) creating a new ordinal warping, which requires knowing x itself, or 2) generalising the warpings, say by linear interpolation, which contradicts our metric-free principle.

Instead, we leverage the fact that the ordinal input warping implies a one-to-one mapping between hyper-rectangle cells in the original and latent spaces, determined by the ranks of the coordinates of x with respect to the existing observations (see Figure 3).

Hence, by choosing a cell in the latent space we guarantee that the corresponding x lies within given bounds; but being "truly agnostic" with respect to the metrics of the original space, we cannot be more precise about the location of x.

Then, instead of using acquisition functions that return a value for a new point, we need functions that evaluate cells. We show in the following how to adapt two acquisition strategies, LCB and Thompson sampling, to this framework.

3.2 Lower confidence bound

The GP-UCB strategy of Srinivas et al. (2010) uses an optimistic upper confidence bound, aimed at a maximisation problem. Similarly, we use a lower confidence bound as follows for our minimisation problem:

lcb(z) = μ(z) − β_t^{1/2} σ(z),

where β_t is a quantity that generally grows with t. Assume now that z is only known to lie within a cell C. The LCB (somehow twice optimistic) becomes:

lcb(C) = min_{z ∈ C} [μ(z) − β_t^{1/2} σ(z)].   (6)
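A simple way to compute such a per-cell score, assuming callables mu and sd for the posterior mean and standard deviation in the latent space, is to approximate the minimisation over the cell by sampling; the helper and its Monte-Carlo shortcut are ours, not the paper's:

```python
import numpy as np

def cell_lcb(cell_lo, cell_hi, mu, sd, beta, n_samples=64, rng=None):
    """Optimistic LCB of a hyper-rectangle cell [cell_lo, cell_hi]:
    the smallest mu(z) - sqrt(beta) * sd(z) over points sampled inside it."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo = np.asarray(cell_lo, dtype=float)
    hi = np.asarray(cell_hi, dtype=float)
    z = rng.uniform(lo, hi, size=(n_samples, len(lo)))  # points inside the cell
    return float(np.min(mu(z) - np.sqrt(beta) * sd(z)))
```

The cell with the lowest score is the most promising one to sample in next.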

3.3 Thompson sampling (TS)

The principle of TS is to choose actions in proportion to the probability that they are optimal. For GPs, a simple way to do so is to generate a sample from the posterior of g and pick its minimiser as the next point to evaluate. This requires, however, discretising the input space (using a fine Cartesian grid or a low-discrepancy sequence).

The main difficulty of vanilla GP-TS is precisely this discretisation of the input space. The problem is removed here, as the action to take is to choose the best cell. We may choose a cell according to the probability that it contains the minimiser. This can be achieved by repeatedly 1) sampling one random point in each cell, 2) sampling jointly from the GP posterior at those points, and 3) recording which cell is optimal. Then, the cell where a new sample is drawn is chosen with probability proportional to the number of times it was optimal.
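The three steps can be sketched as follows, with cells given as (lower, upper) bound pairs and callables mu and cov returning the joint GP posterior at a set of latent points (all names illustrative):

```python
import numpy as np

def thompson_cell_probs(cells, mu, cov, n_rounds=200, seed=0):
    """Estimate, per cell, the probability that it contains the minimiser
    of a GP posterior sample: draw one point per cell, jointly sample the
    posterior at these points, and record the winning cell."""
    rng = np.random.default_rng(seed)
    wins = np.zeros(len(cells))
    for _ in range(n_rounds):
        z = np.array([rng.uniform(lo, hi) for lo, hi in cells])  # one point per cell
        sample = rng.multivariate_normal(mu(z), cov(z))          # joint posterior draw
        wins[np.argmin(sample)] += 1.0
    return wins / n_rounds
```

The next observation is then drawn inside a cell selected with these probabilities.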

3.4 Domain decomposition

With n observations, the ordinal warping naturally defines a decomposition of the search space into (n+1)^d cells. Our first strategy is to consider the acquisition values over all of those cells; we call such an approach "exhaustive". Although this works without issue in small dimension and in the low data regime, the number of cells grows very quickly with the number of added observations and with the dimension, and computing the acquisition value of each cell can rapidly become impractical.

An alternative is to use a hierarchical partitioning: starting from the initial exhaustive decomposition induced by the first points, new cells are only created by dividing the cell in which the new observation falls. With this strategy, only 2^d − 1 cells are added for every new observation, so the total grows linearly with the number of observations. We refer to this approach as "tree search". Both approaches are illustrated in Figure 2.
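In two dimensions, the tree-search split of the cell containing a new observation can be sketched as follows (helper name ours):

```python
def split_cell(cell, x):
    """Split a 2-D cell ((x1_lo, x1_hi), (x2_lo, x2_hi)) at the new
    observation x into four sub-cells, as in the tree-search strategy."""
    (a_lo, a_hi), (b_lo, b_hi) = cell
    return [((l1, h1), (l2, h2))
            for l1, h1 in ((a_lo, x[0]), (x[0], a_hi))
            for l2, h2 in ((b_lo, x[1]), (x[1], b_hi))]
```

Starting from four cells, one such split replaces a single cell with four, giving the seven-cell partition of Figure 2, versus nine for the exhaustive rebuild.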

Figure 2: Domain splitting when a new observation (blue dot) is added to an initial four-cell partition: tree-search (middle, leading to seven cells), exhaustive (right, nine cells).

3.5 Algorithms

The pseudo-code of the Thompson sampling algorithm is given in Algorithm 1. The LCB algorithm has a simpler but similar structure: steps 9–13 are replaced by computing the LCB criterion of Eq. 6, and the cell is simply chosen as the one that minimises the LCB. The acquisition steps are illustrated in Figure 3. In both cases, the warping is updated every time a new input point is observed, to account for the addition of a new pair (x, y); this is done by introducing a new threshold and a new set of warping increments. The variational parameters m and S are augmented, respectively with one value and with one row and column. Then, the ELBO is trained again, which updates all the parameters listed in Section 2.2.

Figure 3: From left to right: A) Contour lines of the objective and initial values (blue dots); the numbers correspond to the ranks of the observations. B) Contour lines of the LCB in the latent space, along with the warped observations (orange dots); sampled minima for TS are shown as black crosses. C) Probability of containing the minimum, by cell, in the original space. D) LCB by cell in the original space. Newly proposed samples are shown as purple crosses.
1: Choose the initial design size and the iteration budget.
2: Sample the initial points uniformly on 𝒳.
3: Add the extreme points of 𝒳.
4: Evaluate the objective at these points.
5: Create cells using the observations.
6: Initialise warping and variational parameters.
7: Create and train the GP model by optimising the ELBO.
8: for each BO iteration do
9:     Update the cells according to the latent values.
10:     for each TS round do
11:         Generate sampling points (one randomly drawn inside each cell).
12:         Draw one joint sample of the GP posterior at these points.
13:         Record which cell contains the minimiser of the sample.
14:     end for
15:     Choose one cell with probability proportional to the number of times it contained the minimiser.
16:     Generate a random input inside this cell.
17:     Evaluate the objective.
18:     Split the cell into 4 new ones.
19:     Update warping and variational parameters.
20:     Optimise the ELBO.
21: end for
Algorithm 1: Pseudo-code for Thompson sampling with tree search

4 Analysis

Our method is designed to be agnostic to the metric of the original space. Thus, regret in the original space is not well defined within the (very general) formulation of this paper. Obtaining results in the original space is not out of reach, but implies making substantially restrictive (e.g., Lipschitz) assumptions on the original function, which in some sense defeats the purpose of this work. Hence, we focus on the latent space to show the convergence of the method.

Our analysis is inspired by that of GP-UCB in Srinivas et al. (2010). In Srinivas et al. (2010), the observation locations are static, while in our case they vary over time. Our problem thus has an additional difficulty, which requires new developments. Specifically, we use a bound on the amount of variation in the location of observations to establish new bounds on the information gain, which result in regret bounds for an lcb method with particular dynamics of observation points. This may be a valuable contribution for other contexts with dynamic data sets.

The latent values of the observation points vary at each iteration, as the injection of new observations triggers an update of the warping parameters described in Sec. 2. Thus far, we have simplified notation by dropping the superscript corresponding to the iteration; the superscript (n) specifies the number of observations used in determining the warping parameters. Thus, z_i^{(n)} denotes the value of the i-th observation when n observations are used in determining the warping parameters. Notice that z_n^{(n−1)} denotes the location of the n-th observation based on the warping determined by the previous observations, while z_n^{(n)} denotes its location after updating the warping with this n-th observation.

The performance measure is the regret, defined as the cumulative loss in g compared to its optimum value. Let z* ∈ argmin_z g(z). Define

R_T = Σ_{t=1}^{T} [g(z_t) − g(z*)],   (7)

where z_t is the new observation point at iteration t, conditioned on the previous observation points. The regret order determines the rate of convergence to the optimum value of g. Not only does a sublinear regret guarantee convergence to the optimum value; the regret measure also accounts for the intermediate values of the observations and ensures that the overall loss is not too large.

Regret can be bounded in terms of the maximum amount any algorithm could learn about the objective function. Srinivas et al. (2010) elegantly characterized this intuition using an information-theoretic measure referred to as information gain, whose value depends on the observation points and the kernel function. Specifically,

γ_T = max_{A ⊂ 𝒵, |A| = T} ½ log |I + σ^{−2} K_A|,   (8)

where K_A = [k(z, z′)]_{z,z′ ∈ A} and I is the identity matrix. The regret upper bound is established based on the following two lemmas. In Lemma 1, the instantaneous regret at iteration t is upper bounded, up to constants, by the posterior standard deviation at the new observation point. In Lemma 2, the cumulative value of the corresponding variances, defined as

V_T = Σ_{t=1}^{T} σ²_{t−1}(z_t),   (9)

is upper bounded by the upper bound on information gain from T observations, up to constants independent of T. Combining these two lemmas, Theorem 1 gives the upper bound on the regret of the lcb policy.

Our analysis requires that the observation set is compact; in particular, 𝒵 is a bounded subset of ℝ^d. Without loss of generality we assume

𝒵 ⊆ [0, r]^d.   (10)

We further have the following assumption on the smoothness of the kernel (the same as in Srinivas et al. (2010)).

Assumption 1.

For some constants a, b > 0 and for all j = 1, …, d,

P(sup_{z ∈ 𝒵} |∂g/∂z_j| > L) ≤ a e^{−(L/b)²}.   (11)
Lemma 1.

For δ ∈ (0, 1), let β_t be the confidence parameter specified in the proof. We have, for all t ≥ 1,

r_t ≤ 2 β_t^{1/2} σ_{t−1}(z_t) + 1/t²,   (12)

with probability at least 1 − δ.

Proof. See Appendix A.1.

The cumulative variance of the observed points over all iterations indicates the total reduction in uncertainty about the function after T observations, and is a key parameter in characterizing the regret of lcb. The variation of the observation points over iterations, however, makes it difficult to bound V_T. To account for this variation, the following condition is imposed through the warping step:

‖z_i^{(n)} − z_i^{(n−1)}‖ ≤ c/n,   (13)

for i = 1, …, n − 1, where c is some constant independent of n and i. In practice, Eq. 13 is enforced by constraining each component of z^{(n)} to be close to those of z^{(n−1)} when re-optimising the ELBO.

Lemma 2.

The cumulative variance defined in (9) is upper bounded as follows:

V_T ≤ C₁ γ_T + C₂,   (14)

where C₁ and C₂ are constants independent of T (given in Appendix A) and γ_T is the upper bound on information gain as defined in (8).

Proof. See Appendix A.2.

See Srinivas et al. (2010) for details on the value of γ_T for several kernels. For example, they show γ_T = O(d log T) and γ_T = O((log T)^{d+1}) for finite-spectrum and squared exponential kernels, respectively.

The following regret upper bound follows from the results established in Lemmas 1 and 2.

Theorem 1.

The regret of lcb over the compact set specified in (10), under Assumption 1, satisfies

R_T ≤ 2 √(T β_T (C₁ γ_T + C₂)) + C,

where C₁, C₂ are the constants in Lemma 2 and C is specified in the proof.

Proof of Theorem 1.

We can write the cumulative regret as the sum of the instantaneous regrets and use Lemmas 1 and 2 to obtain

R_T = Σ_{t=1}^{T} r_t ≤ Σ_{t=1}^{T} 2 β_t^{1/2} σ_{t−1}(z_t) + Σ_{t=1}^{T} 1/t²   (15)
    ≤ 2 (Σ_{t=1}^{T} β_t)^{1/2} (Σ_{t=1}^{T} σ²_{t−1}(z_t))^{1/2} + π²/6   (16)
    ≤ 2 (T β_T)^{1/2} (Σ_{t=1}^{T} σ²_{t−1}(z_t))^{1/2} + π²/6   (17)
    ≤ 2 (T β_T (C₁ γ_T + C₂))^{1/2} + π²/6.   (18)

Inequality (15) comes from Lemma 1, (16) is a result of the Cauchy–Schwarz inequality, (17) is obtained from the fact that β_t is increasing in t, and (18) is a direct application of Lemma 2. The theorem holds with C = π²/6.

5 Experiments

As a proof of concept, we consider a set of toy problems: a 1D function (depicted in Figure 1); three 2D functions, one with many discontinuities and two from a classical optimisation benchmark (Hansen et al., 2016), namely "bent cigar" and "different power" (depicted in Figure 6); and a classical 4D function, "Hartman" (Dixon, 1978). The 1D function has a critical discontinuity at its optimum; the "bent cigar" and "different power" functions are unimodal but very challenging, due to high conditioning and a very narrow optimal region. The other 2D function is multimodal, contains many discontinuities in the optimal region and has high conditioning. The 4D function serves as a reference, as vanilla BO is known to perform well on it.

We compare our two algorithms, TS and LCB, to vanilla BO based on expected improvement. For both TS and LCB, we use the tree-search partition; LCB is run with a fixed confidence parameter. For all methods, an initial set of five experiments is generated by space-filling design, followed by 20 iterations. Each strategy is run 10 times with different initial conditions. Performance is measured in terms of cumulative regret and reported in Figure 7.

All methods are implemented using GPflow (Matthews et al., 2017), and all GPs use the same Matérn 3/2 kernel. For the ordinal approach, the optimisation problem of Section 2.2 is solved using Adam (Kingma and Ba, 2015).

We can see that on the challenging functions, vanilla BO struggles while both of our algorithms significantly outperform it, in particular during the first steps, despite the difficulty of the problems. Here, LCB performed better than TS. On the classical Hartman function, our approach is only marginally outperformed by standard BO.

Figure 4 shows the state of a single run of our approach with LCB (after 8 acquisition points were added to an initial set of 5 points). We see that the latent model accounts for the discontinuity by setting a large distance between the points just before and just after it. As a result, our algorithm is capable of exploring the discontinuity region, while a vanilla BO algorithm would have rejected it rapidly. One may also notice that a large region (left) is considerably reduced in the latent space. Although this appears beneficial on this run, it also indicates that our algorithm might be less global than vanilla BO. Such an issue could be addressed by mixing our approach with metric-based strategies, for instance by sampling from time to time inside the largest cell in the original space.

Figure 4: Example run of LCB on the 1D function.

Figure 5 shows a single run of our approach with LCB (with 30 acquisition points). One can observe that most observations form clusters around local and global optima, while some regions are largely ignored.

Figure 5: Example run of LCB on the ”many steps” function.

6 Concluding comments

In this work, we proposed a novel BO approach that does not consider the values of the inputs or outputs, but only their respective orderings. Our algorithm is based on a variational GP model and a set of ordinal warpings. We showed how such a model can be used with either LCB or Thompson sampling strategies. We proved an upper bound on the regret of the confidence-bound-based approach in the warped space, and demonstrated the capability of our algorithms on a set of challenging toy problems.

Future work may include the analysis of Thompson sampling and a more comprehensive experimental comparison to the existing approaches over a wider range of ill-conditioned objective functions.

Appendix A.1: Proof of Lemma 1

From Assumption 1, for all , with probability at least ,

(19)

Let . We have that with probability greater than ,

(20)

Assumption 1 indicates that, with high probability, g is Lipschitz continuous on 𝒵. Let 𝒵_t be a discretisation of 𝒵 at iteration t such that, for all z ∈ 𝒵,

(21)

where [z]_t is the closest point to z in 𝒵_t. This discretisation is possible by uniformly spreading points along each coordinate of 𝒵.

Choosing , from (20) and (21), we have with probability at least

(22)

has a normal distribution. Thus

(23)

Replacing and using union bound over all and , we have with probability at least , for all and

(24)

Now we have all the material needed to prove the lemma. By definition of the acquisition rule:

Using a union bound on (22) and (24), we have with probability at least

which shows with probability at least

(25)

Applying (24) to we have

(26)

where (26) is a result of (25).

Appendix A.2: Proof of Lemma 2

The following equation is a direct result of Lemma 5.4 in Srinivas et al. (2010) on :

By definition of V_T in (9), we have

This inequality holds since σ²_{t−1}(z) ≤ 1 for all z and t (unit kernel variance).

Summing both sides over from to we get

(28)

The first term on the right hand side of (28) captures the change in the information gain by the variation of the observation points. The second term is a scaled information gain.

Let K and K′ denote the covariance matrices of the observation points before and after the warping update, respectively. Since the maximum displacement of an observation point from one iteration to the next is bounded by condition (13), we have

(29)

Thus, for the first term on the right-hand side of (28), we have

(30)

where the last inequality comes from condition (13).

(31)

where and .

Appendix A.3: Experiments test functions and regret curves

Figure 6: 2D test functions.
Figure 7: Cumulative regrets on the five test problems.

References

  • Chowdhury and Gopalan (2017) Chowdhury, S. R. and Gopalan, A. (2017). On kernelized multi-armed bandits. In International Conference on Machine Learning, pages 844–853.
  • Chu and Ghahramani (2005) Chu, W. and Ghahramani, Z. (2005). Gaussian processes for ordinal regression. Journal of machine learning research, 6(Jul):1019–1041.
  • Dixon (1978) Dixon, L. C. W. (1978). The global optimization problem. an introduction. Toward global optimization, 2:1–15.
  • Fox and Dunson (2012) Fox, E. and Dunson, D. B. (2012). Multiresolution gaussian processes. In Advances in Neural Information Processing Systems, pages 737–745.
  • González et al. (2017) González, J., Dai, Z., Damianou, A., and Lawrence, N. D. (2017). Preferential bayesian optimization. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1282–1291. JMLR. org.
  • Gramacy and Lee (2008) Gramacy, R. B. and Lee, H. K. H. (2008). Bayesian treed gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103(483):1119–1130.
  • Hansen et al. (2016) Hansen, N., Auger, A., Mersmann, O., Tusar, T., and Brockhoff, D. (2016). Coco: A platform for comparing continuous optimizers in a black-box setting. arXiv preprint arXiv:1603.08785.
  • Hensman et al. (2013) Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. Uncertainty in Artificial Intelligence.
  • Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic Variational Inference. Journal of Machine Learning Research.
  • Javidi and Shekhar (2018) Javidi, T. and Shekhar, S. (2018). Gaussian process bandits with adaptive discretization. Electron. J. Statist., 12(2):3829–3874.
  • Jones et al. (1998) Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13(4):455–492.
  • Kingma and Ba (2015) Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization.
  • Marmin et al. (2018) Marmin, S., Ginsbourger, D., Baccou, J., and Liandrat, J. (2018). Warped gaussian processes and derivative-based sequential designs for functions with heterogeneous variations. SIAM/ASA Journal on Uncertainty Quantification, 6(3):991–1018.
  • Matthews et al. (2017) Matthews, A. G. de G., Van Der Wilk, M., Nickson, T., Fujii, K., Boukouvalas, A., León-Villagrá, P., Ghahramani, Z., and Hensman, J. (2017). GPflow: A Gaussian process library using TensorFlow. The Journal of Machine Learning Research, 18(1):1299–1304.
  • Mockus et al. (1978) Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of bayesian methods for seeking the extremum. Towards global optimization, 2(117-129):2.
  • Pronzato and Müller (2012) Pronzato, L. and Müller, W. G. (2012). Design of computer experiments: space filling and beyond. Statistics and Computing, 22(3):681–701.
  • Shahriari et al. (2015) Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and De Freitas, N. (2015). Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175.
  • Snelson et al. (2004) Snelson, E., Ghahramani, Z., and Rasmussen, C. E. (2004). Warped gaussian processes. In Advances in neural information processing systems, pages 337–344.
  • Snoek et al. (2014) Snoek, J., Swersky, K., Zemel, R., and Adams, R. (2014). Input warping for bayesian optimization of non-stationary functions. In International Conference on Machine Learning, pages 1674–1682.
  • Srinivas et al. (2010) Srinivas, N., Krause, A., Kakade, S., and Seeger, M. (2010). Gaussian process optimization in the bandit setting: no regret and experimental design. In Proceedings of the 27th International Conference on International Conference on Machine Learning, pages 1015–1022. Omnipress.
  • Titsias (2009) Titsias, M. (2009). Variational Learning of Inducing Variables in Sparse Gaussian Processes. Artificial Intelligence and Statistics.
  • Williams and Rasmussen (2006) Williams, C. K. and Rasmussen, C. E. (2006). Gaussian processes for machine learning. MIT press Cambridge, MA.

References

  • Chowdhury and Gopalan (2017) Chowdhury, S. R. and Gopalan, A. (2017). On kernelized multi-armed bandits. In International Conference on Machine Learning, pages 844–853.
  • Chu and Ghahramani (2005) Chu, W. and Ghahramani, Z. (2005). Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6(Jul):1019–1041.
  • Dixon (1978) Dixon, L. C. W. (1978). The global optimization problem: an introduction. Towards global optimization, 2:1–15.
  • Fox and Dunson (2012) Fox, E. and Dunson, D. B. (2012). Multiresolution Gaussian processes. In Advances in Neural Information Processing Systems, pages 737–745.
  • González et al. (2017) González, J., Dai, Z., Damianou, A., and Lawrence, N. D. (2017). Preferential Bayesian optimization. In Proceedings of the 34th International Conference on Machine Learning, pages 1282–1291. JMLR.org.
  • Gramacy and Lee (2008) Gramacy, R. B. and Lee, H. K. H. (2008). Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103(483):1119–1130.
  • Hansen et al. (2016) Hansen, N., Auger, A., Mersmann, O., Tusar, T., and Brockhoff, D. (2016). COCO: A platform for comparing continuous optimizers in a black-box setting. arXiv preprint arXiv:1603.08785.
  • Hensman et al. (2013) Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. In Uncertainty in Artificial Intelligence.
  • Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347.
  • Javidi and Shekhar (2018) Javidi, T. and Shekhar, S. (2018). Gaussian process bandits with adaptive discretization. Electron. J. Statist., 12(2):3829–3874.
  • Jones et al. (1998) Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
  • Kingma and Ba (2015) Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations.
  • Marmin et al. (2018) Marmin, S., Ginsbourger, D., Baccou, J., and Liandrat, J. (2018). Warped Gaussian processes and derivative-based sequential designs for functions with heterogeneous variations. SIAM/ASA Journal on Uncertainty Quantification, 6(3):991–1018.
  • Matthews et al. (2017) Matthews, A. G. d. G., van der Wilk, M., Nickson, T., Fujii, K., Boukouvalas, A., León-Villagrá, P., Ghahramani, Z., and Hensman, J. (2017). GPflow: A Gaussian process library using TensorFlow. The Journal of Machine Learning Research, 18(1):1299–1304.
  • Mockus et al. (1978) Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of Bayesian methods for seeking the extremum. Towards global optimization, 2(117-129):2.
  • Pronzato and Müller (2012) Pronzato, L. and Müller, W. G. (2012). Design of computer experiments: space filling and beyond. Statistics and Computing, 22(3):681–701.
  • Shahriari et al. (2015) Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and de Freitas, N. (2015). Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175.
  • Snelson et al. (2004) Snelson, E., Ghahramani, Z., and Rasmussen, C. E. (2004). Warped Gaussian processes. In Advances in Neural Information Processing Systems, pages 337–344.
  • Snoek et al. (2014) Snoek, J., Swersky, K., Zemel, R., and Adams, R. (2014). Input warping for Bayesian optimization of non-stationary functions. In International Conference on Machine Learning, pages 1674–1682.
  • Srinivas et al. (2010) Srinivas, N., Krause, A., Kakade, S., and Seeger, M. (2010). Gaussian process optimization in the bandit setting: no regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, pages 1015–1022. Omnipress.
  • Titsias (2009) Titsias, M. (2009). Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics.
  • Williams and Rasmussen (2006) Williams, C. K. and Rasmussen, C. E. (2006). Gaussian processes for machine learning. MIT Press, Cambridge, MA.