1 Introduction
We address typical Bayesian optimisation (BO) problems of the form:
where the search domain is usually a bounded hyperrectangle and the scalar-valued objective function is available only through noisy observations.
BO is established as a strong competitor among derivative-free optimisation approaches, in particular for computationally expensive (low data regime) problems. In BO, nonparametric Gaussian processes (GPs) provide flexible and fast-to-evaluate surrogates of the objective function. Sequential design decisions, so-called acquisitions, judiciously balance exploration and exploitation in the search for global optima, leveraging the uncertainty estimates provided by the GP posterior distributions (see
Mockus et al. (1978); Jones et al. (1998) for early works or Shahriari et al. (2015) for a recent review).

One of the weaknesses of vanilla BO lies in the underlying assumption that the objective function is a realisation of a GP: when this assumption is strongly violated, the GP model is weakly predictive and BO becomes inefficient. Two classical examples where BO fails are ill-conditioned problems, where the objective function has strong variations on the domain boundaries but is very flat in its central region (or conversely), and non-Lipschitz objectives, for instance with local discontinuities. High conditioning is typical in “exploratory” optimisation problems, when the parameter space is initially chosen very large. Discontinuities are frequent in computational fluid dynamics problems for instance, where a small change in the parameters results in a change of physics (e.g. laminar to turbulent flow), which creates a discontinuity in the objective.
One remedy to this problem is to add a warping function, either on the output space (Snelson et al., 2004) or on the input space (Snoek et al., 2014; Marmin et al., 2018). However, warping usually applies only to continuous functions and relies on parametric forms, which need to be chosen beforehand and may not adapt to the problem at hand. A popular alternative is to rely on hierarchical partitions of the input space (assuming stationarity only within each part): see for instance Gramacy and Lee (2008); Fox and Dunson (2012). Those approaches, however, are in general efficient only in low dimensions and with relatively large datasets.
In this work, we propose to apply an “ordinal” warping to both input and output data, that is, a transformation that only preserves the ordering of the variables. A classical (latent) GP model is then fitted to the transformed dataset. In the output space, this amounts to performing ordinal regression using a variational formulation (Chu and Ghahramani, 2005). In the input space, we show that this amounts to defining a large optimisation problem, which can be solved using standard descent algorithms.
We then study how this model can be used to perform Bayesian optimisation, with minimal use of the original problem metrics. We show that this can be achieved by combining classical acquisition schemes such as upper confidence bound or Thompson sampling and tree search. Although BO has already been applied to problems with qualitative objectives (González et al., 2017), we believe that our approach is the first that is agnostic to any metric in the input and the output spaces.
There is a small number of works characterising the performance of BO on GPs under optimistic acquisition functions. All these works, however, consider well-behaved GPs where, in particular, the so-called information gain is nicely bounded (see Sec. 4 for more detail). In Srinivas et al. (2010), an upper bound on cumulative regret was shown for GP-UCB, a confidence-bound-based approach, in terms of an upper bound on the information gain. Chowdhury and Gopalan (2017) and Javidi and Shekhar (2018) improved the constants in the regret of confidence-based policies. Inspired by these works, we characterise the regret performance of the proposed confidence-bound-based policy in the latent space.
Our approach is illustrated on several toy problems, showing that it is able to optimise severely ill-conditioned and discontinuous functions.
2 Model
2.1 Definitions and main hypothesis
Ordinal warping for discrete sets
We propose to use as a warping any transformation that preserves the ordering of a finite vector with real values. Without loss of generality, given a discrete set, we can write such a transformation in the form:(1)
where the rank function gives the position of each value in the sorted set and the increments are strictly positive values. It is straightforward that this transformation is a bijection, and choosing unit increments yields the rank transformation.
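A minimal sketch of such an ordinal warping (the function and parameter names are ours; with unit increments it reduces to the rank transformation mentioned above):

```python
import numpy as np

def ordinal_warp(z, deltas=None):
    """Map a finite set of real values to new values that preserve ordering.

    Each point is sent to the cumulative sum of strictly positive
    increments `deltas` up to its rank, so only the ordering of `z`
    survives the transform. With unit increments (the default) this
    reduces to the rank transformation.
    """
    z = np.asarray(z, dtype=float)
    ranks = np.argsort(np.argsort(z))  # rank of each value, 0-based
    if deltas is None:
        deltas = np.ones(len(z))       # unit increments -> rank transform
    deltas = np.asarray(deltas, dtype=float)
    assert np.all(deltas > 0), "increments must be strictly positive"
    levels = np.cumsum(deltas)         # warped value for rank 0, 1, ..., n-1
    return levels[ranks]
```

Because only the ranks enter the computation, any strictly increasing rescaling of the inputs leaves the warped values unchanged, which is the metric-free property exploited throughout the paper.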
Latent GP model
Let us assume that we have a set of observations. We define one ordinal warping for the outputs (with an underlying set) and one warping for each input dimension (each with its own underlying set). For each observation, we denote:
(2)  
(3) 
The overall idea is that both the inputs and the outputs can be mapped, through their respective warpings, to latent values such that:
(4) 
with and a stationary covariance kernel, for instance of the exponential or Matérn classes (Williams and Rasmussen, 2006). The input warping is illustrated in Figure 1.
Intuitively, such an approach allows us to tackle the setting where the user only returns pairwise comparisons between observations, which makes it insensitive to scales. By considering a stationary GP for the latent function, we shift all the modelling difficulty to the warping step. Note that although such warpings may be very difficult to infer over continuous spaces, we only consider here discrete sets as in Equation 1.
2.2 Learning the warpings using variational inference
In the classical GP regression framework, observations are assumed to correspond to evaluations of a latent GP corrupted by Gaussian noise. By doing so, the likelihood function is set as Gaussian, which allows one to apply the classical Bayes’ rule and obtain a posterior distribution on the latent function:
As all quantities are Gaussian, the posterior distribution can be expressed in closed form.
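For illustration, the closed-form conjugate posterior at the training inputs can be written as follows (a sketch with illustrative names; `K` is the kernel matrix of the inputs, `y` the noisy observations, `sigma2` the noise variance):

```python
import numpy as np

def gp_posterior(K, y, sigma2):
    """Closed-form GP posterior at the training inputs under Gaussian noise.

    Returns the posterior mean K (K + sigma2 I)^{-1} y and covariance
    K - K (K + sigma2 I)^{-1} K of the latent values at the inputs.
    """
    n = len(y)
    A = K + sigma2 * np.eye(n)
    L = np.linalg.cholesky(A)          # numerically stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K @ alpha
    V = np.linalg.solve(L, K)          # V^T V = K A^{-1} K
    cov = K - V.T @ V
    return mean, cov
```

This is exactly the conjugate case; the next paragraph explains why the ordinal likelihood breaks this conjugacy and forces an approximation.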
In the non-conjugate case, exact computation is not tractable and one must resort to approximations. Variational Inference (VI), which consists in minimising the Kullback–Leibler divergence between the approximate and the true posterior, has proven to be an effective approach in this context. We show in the following that applying ordinal warping to the outputs amounts to choosing a non-conjugate likelihood. We then express the corresponding classical VI problem formulation, which amounts to optimising a lower bound on the marginal log-likelihood. Finally, we show how we can incorporate the parameters of the input warping into the VI problem and learn the warping parameters along with the VI ones.
Output ordinal warping using ordinal regression likelihood
We follow the model of Chu and Ghahramani (2005), which assigns a bin to each observation via an increasing sequence of thresholds (the first of which is an arbitrary real value). Assuming that the observations are in increasing order, the likelihood function can be expressed as:
(5) 
with the standard Gaussian cumulative distribution function, where the scale term corresponds to a (small) noise on the latent function.

ELBO
We now follow the classical Variational GP (VGP) framework (Titsias, 2009; Hensman et al., 2013). To compute the data likelihood, we only need the marginal posterior distribution of the GP at the (warped) inputs , denoted as . We propose an approximate posterior in which we directly parametrise the distribution of function values at the inputs as a multivariate normal, with mean and covariance , where and are optimisation parameters.
Conditioned on we obtain the approximate posterior GP where the mean and the covariance can be calculated in closed form:
where .
With this approximation in place we can set up our model’s optimisation objective, which is a lower bound on the log marginal likelihood (ELBO, Hoffman et al., 2013), equal to
where and is the ordinal likelihood (5).
In practice, the expectation cannot be evaluated analytically, but as the likelihood factorises over data points this is just a one-dimensional integral, which can easily be computed numerically using Gauss–Hermite quadrature.
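As a sketch, the one-dimensional Gaussian expectation of a per-point log-likelihood can be computed with Gauss–Hermite quadrature as follows (function names are illustrative):

```python
import numpy as np

def expected_log_lik_gh(mean, var, log_lik, n_points=20):
    """E_{f ~ N(mean, var)}[log_lik(f)] via Gauss-Hermite quadrature.

    The substitution f = mean + sqrt(2*var)*t turns the Gaussian
    expectation into the standard Gauss-Hermite form with weight
    exp(-t^2), up to a 1/sqrt(pi) normalisation.
    """
    t, w = np.polynomial.hermite.hermgauss(n_points)
    f = mean + np.sqrt(2.0 * var) * t
    return np.sum(w * log_lik(f)) / np.sqrt(np.pi)
```

With 20 nodes the rule is exact for polynomial integrands up to degree 39, which is ample for the smooth log-likelihood terms involved here.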
Optimisation
Now, the ELBO is optimised with respect to three sets of variables: a) the inducing variables; b) the likelihood parameters; c) the input warping parameters (each warping is defined by a set of increments, one of which can be fixed arbitrarily).
Note that since the distances between points are set by the input increments and the amplitude of the response is set through the output increments, we can define the latent GP using a stationary kernel with unit variance and unit lengthscale.
To ease the resolution of the problem in practice, we restrict the variational covariance to be diagonal and add a set of boundary constraints when solving the ELBO optimisation problem. Each increment is bounded between a (small) strictly positive value and a maximum. As we use a unit kernel variance and lengthscale, we can set those maxima to values related, respectively, to the amplitude of the GP and to a distance at which the covariance between two points is close to zero. Note that during BO, these bounds can be tightened to limit the maximum variation of the warping between two consecutive steps, as we detail in Section 4. In our implementation, this problem is solved by stochastic gradient descent, leveraging automatic differentiation tools. The bound constraints are handled using logistic transformations.
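For illustration, the logistic reparameterisation used to handle the bound constraints can be written as follows (function names are ours; the paper's implementation relies on automatic-differentiation tooling, but the transform itself is just this sigmoid map):

```python
import numpy as np

def to_bounded(theta, lo, hi):
    """Map an unconstrained parameter to the open interval (lo, hi)
    via the logistic sigmoid, so gradient descent can run unconstrained."""
    return lo + (hi - lo) / (1.0 + np.exp(-theta))

def to_unconstrained(x, lo, hi):
    """Inverse map: recover the free parameter from a bounded value."""
    p = (x - lo) / (hi - lo)
    return np.log(p / (1.0 - p))
```

Optimising over the unconstrained parameter automatically keeps each increment strictly inside its bounds, with no projection step needed.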
3 Bayesian optimisation
3.1 Acquisitions on latent and original spaces
Standard BO algorithms work as follows. An initial set of experiments is generated, typically using a space-filling design (Pronzato and Müller, 2012) over the domain, and a GP model is trained on this dataset. Then, an acquisition rule is applied repeatedly, which consists of evaluating the objective at the input that maximises an acquisition function. Every time a new data point is acquired, the GP posterior distribution is updated to account for it.
The acquisition function is based on the GP distribution and balances between exploration (high GP variance) and exploitation (low GP mean). Typical acquisitions include Expected improvement (EI, Jones et al., 1998) and upper confidence bound (UCB, Srinivas et al., 2010).
In our case, given that the latent process is a stationary GP, it is immediate to predict the posterior distribution at a new latent input. Hence, classical acquisition functions apply and it is straightforward to select a point to acquire.
However, the mapping is only defined on the observed points, and it is not possible to find the original input corresponding to a new latent point without either (1) creating a new ordinal warping, which requires the corresponding objective value, or (2) generalising the warpings, say by linear interpolation, which contradicts our metric-free principle. Instead, we leverage the fact that the ordinal input warping implies a one-to-one mapping between hyperrectangle cells, which are determined by the ranks with respect to the existing observations (see Figure 3).
Hence, by choosing a cell we guarantee that the new latent input lies within given bounds; but being “truly agnostic” with respect to the metrics of the original space, we cannot be more precise about its location.
Then, instead of using acquisition functions that return a value for a new point, we need functions that evaluate cells. We show in the following how to adapt two acquisition strategies, LCB and Thompson sampling, to this framework.
3.2 Lower confidence bound
The GP-UCB strategy of Srinivas et al. (2010) uses an optimistic upper confidence bound, as it targets a maximisation problem. Similarly, we use a lower confidence bound as follows for our minimisation problem:
where the exploration parameter is a quantity that generally grows with the number of iterations. Over a cell, the LCB (in a sense twice optimistic) would become:
(6) 
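As an illustration of the cell-selection step, the generic confidence-bound rule can be sketched as follows. The names `means`, `stds` and `beta` are ours, and the `mu - sqrt(beta) * sigma` form is the standard GP lower confidence bound rather than the paper's exact criterion (6):

```python
def lower_confidence_bound(mu, sigma, beta):
    """Generic GP lower confidence bound: an optimistic value for
    minimisation, trading off small posterior mean against high variance."""
    return mu - beta ** 0.5 * sigma

def select_cell_lcb(means, stds, beta):
    """Pick the cell whose representative point has the smallest LCB,
    i.e. the most optimistic cell for a minimisation problem."""
    scores = [lower_confidence_bound(m, s, beta) for m, s in zip(means, stds)]
    return min(range(len(scores)), key=scores.__getitem__)
```

A larger `beta` widens the confidence band and therefore favours exploration of uncertain cells over exploitation of low-mean ones.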
3.3 Thompson sampling (TS)
The principle of TS is to choose actions in proportion to the probability that they are optimal. For GPs, a simple way to do so is to generate a sample from the posterior and pick its minimiser as the next point to evaluate. This requires, however, discretising the input space (using a fine Cartesian grid or a low-discrepancy sequence).

The main difficulty of vanilla TS for GPs is this discretisation of the input space. This problem is removed here, as the action to take is to choose the best cell. We may choose a cell according to the probability that it contains the optimum. This can be achieved by repeatedly (1) sampling one random point in each cell, (2) sampling the latent function jointly at those points, and (3) recording which cell is optimal. Then, the cell where a new sample is drawn is chosen with probability proportional to the number of times it was optimal.
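A minimal sketch of this cell-wise sampling loop, with illustrative names: `sample_fn` stands in for a joint draw from the latent GP posterior (obtained in practice from the VGP model), and `cells` is a list of per-cell bounds:

```python
import numpy as np

def thompson_cell_probs(sample_fn, cells, n_rounds=500, rng=None):
    """Estimate, for each cell, the probability that it contains the
    minimiser of a joint posterior sample, by repeated sampling.

    `sample_fn(points)` must return one joint draw of the latent function
    at `points`; `cells` is a list of (low, high) bound pairs, one per cell.
    """
    rng = np.random.default_rng(rng)
    wins = np.zeros(len(cells))
    for _ in range(n_rounds):
        # (1) one uniform random point inside each cell
        pts = np.array([rng.uniform(lo, hi) for lo, hi in cells])
        # (2) one joint sample of the latent function at those points
        f = sample_fn(pts)
        # (3) record which cell achieved the minimum
        wins[int(np.argmin(f))] += 1
    return wins / n_rounds
```

The next cell to evaluate is then drawn according to the returned probabilities, mirroring the "proportion of times optimal" rule described above.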
3.4 Domain decomposition
With the current points, the ordinal warping naturally defines a decomposition of the search space into cells. Our first strategy is to consider the acquisition values over all of those cells; we call this approach “exhaustive”. Although this works without issue in low dimensions and in the low data regime, the number of cells grows very quickly with the number of added observations and with the dimension, and computing the acquisition value of each cell can rapidly become impractical.
An alternative is to use a hierarchical partitioning: starting from the initial exhaustive decomposition induced by the first points, new cells are only created by dividing the cell in which the new observation falls. With this strategy, a fixed number of cells is added for every new observation. We refer to this approach as “tree search”. Both approaches are illustrated in Figure 2.
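One plausible reading of the cell-division step, sketched for axis-aligned cells: the new observation cuts each axis of its cell, producing 2^d sub-cells. The function name and the exact splitting rule are illustrative, not taken verbatim from the paper:

```python
import itertools

def split_cell(cell, x):
    """Split one axis-aligned cell at point x.

    `cell` is a (lows, highs) pair of coordinate tuples; each coordinate
    of x cuts its axis in two, producing 2^d sub-cells whose union is
    exactly the original cell (tree-search decomposition).
    """
    lows, highs = cell
    d = len(lows)
    children = []
    for choice in itertools.product((0, 1), repeat=d):
        lo = [lows[i] if choice[i] == 0 else x[i] for i in range(d)]
        hi = [x[i] if choice[i] == 0 else highs[i] for i in range(d)]
        children.append((lo, hi))
    return children
```

Only the split cell is refined, so each new observation adds a constant number of cells instead of multiplying the whole partition.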
3.5 Algorithms
The pseudocode of the Thompson sampling algorithm is given in Algorithm 1. The LCB algorithm has a simpler but similar structure: steps 9–13 are replaced by computing the LCB criterion of Eq. 6, and the cell is simply chosen as the one that optimises this criterion. The acquisition steps are illustrated in Figure 3. In both cases, the warping is updated every time a new input point is observed to account for the addition of a new pair, which is done by introducing a new output value and a set of input values. The variational parameters are augmented accordingly, with one value for the mean and one row and column for the covariance. Then, the ELBO is trained again, which updates all the parameters listed in Section 2.2.
4 Analysis
Our method is designed to be agnostic to the metric in the original space. Thus, regret in the original space is not well-defined within the (very general) formulation of this paper. Obtaining results in the original space is not out of reach, but implies making substantially restrictive (e.g., Lipschitz) assumptions on the original function, which in some sense defeats the purpose of this work. Hence, we focus on the latent space to show the convergence of the method.
Our analysis is inspired by that of GP-UCB in Srinivas et al. (2010). There, the observation locations are static, while in our case they vary over time. Our problem thus has an additional difficulty which requires new developments. Specifically, we use a bound on the amount of variation in the location of the observations to establish new bounds on the information gain, which result in regret bounds for an LCB method with particular dynamics of the observation points. This may be a valuable contribution for other contexts with dynamic datasets.
The values of the observation points vary at each iteration, as new observations are injected, as a result of the update of the warping parameters described in Sec. 2. Specifically, thus far we have simplified the notation of the observation points in the latent space by removing the superscripts corresponding to the iterations. The superscript specifies the number of observations used in determining the warping parameters: it denotes the value of the i-th observation when n observations are used in determining the warping parameters. This notation distinguishes the location of the n-th observation based on the warping determined by the previous observations from its location after updating the warping with this n-th observation.
The performance measure is the regret, defined as the cumulative loss compared to the optimum value. Define
(7)
where the argument of the loss is the new observation point at iteration t, conditioned on the previous observation points. The regret order determines the rate of convergence to the optimum. Not only does a sublinear regret guarantee convergence to the optimum value; the regret measure also accounts for the intermediate values of the observations and ensures that the overall loss is not too large.
Regret can be bounded in terms of the maximum amount any algorithm could learn about the objective function. Srinivas et al. (2010) characterised this intuition using an information-theoretic measure referred to as the information gain, whose value depends on the observation points and the kernel function. Specifically,
(8)
where the normalising matrix is the identity. The regret upper bound is established from the following two lemmas. In Lemma 1, the instantaneous regret at each iteration is upper bounded, up to constants, by the variance at the new observation point. In Lemma 2, the cumulative value of such variances, defined as
(9)
is upper bounded by the upper bound on the information gain from the observations, up to a constant independent of the number of iterations. Combining these two lemmas, Theorem 1 gives the upper bound on the regret of the LCB policy.
Our analysis requires that the observation set is compact; in particular, is a bounded subset of . Without loss of generality we assume
(10) 
We further have the following assumption on the smoothness of the kernel (the same as in Srinivas et al. (2010)).
Assumption 1.
For some constants ,
(11) 
Lemma 1.
For some , let . We have, for all ,
(12) 
with probability at least .
Proof. See Appendix A.1.
The cumulative variance of the observed points over all iterations is an indicator of the total reduction in uncertainty about the value of the function after the observations, and is a key parameter in characterising the regret of LCB. The variation of the observation points over iterations, however, makes it difficult to bound. To account for this variation, the following condition is imposed through the warping step:
(13) 
for all iterations, for some constant independent of the iteration indices. In practice, Eq. 13 is enforced by constraining each component of the warped inputs to be close to those of the previous iteration when re-optimising the ELBO.
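For instance, one simple way to enforce a condition of the form (13) after each ELBO update is a componentwise projection; `eps` plays the role of the drift constant and the names are illustrative:

```python
import numpy as np

def constrain_drift(new_vals, old_vals, eps):
    """Project updated warped locations so that each component moves at
    most `eps` away from its value at the previous iteration, which is
    the stability condition required by the analysis."""
    old_vals = np.asarray(old_vals, dtype=float)
    return np.clip(np.asarray(new_vals, dtype=float),
                   old_vals - eps, old_vals + eps)
```

In the implementation described above, the same effect is obtained by tightening the bound constraints of the ELBO optimisation rather than by an explicit projection.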
Lemma 2.
Proof. See Appendix A.2.
See Srinivas et al. (2010) for details on the value of the information-gain bound for several kernels; for example, they derive explicit bounds for the finite-spectrum and squared-exponential kernels, respectively.
Theorem 1.
Proof of Theorem 1.
We can write the cumulative regret as the sum of the instantaneous regrets and use Lemmas 1 and 2 to obtain
(15)  
(16)  
(18) 
Inequality (15) comes from Lemma 1, (16) results from the Cauchy–Schwarz inequality, (17) is obtained from the fact that the information-gain bound is increasing in the number of observations, and (18) is a direct application of Lemma 2. The theorem holds with the resulting constant.
∎
5 Experiments
As a proof of concept, we consider a set of toy problems: a 1D function (depicted in Figure 1); three 2D functions, one with many discontinuities and two from a classical optimisation benchmark (Hansen et al., 2016), namely “bent cigar” and “different power” (depicted in Figure 6); and a classical 4D function, “Hartman” (Dixon, 1978). The 1D function has a critical discontinuity at the optimum; the “bent cigar” and “different power” functions are unimodal but very challenging, due to high conditioning and a very narrow optimum region. The other 2D function is multimodal, contains many discontinuities in the optimal region, and has high conditioning. The 4D function serves as a reference, as vanilla BO is known to perform well on it.
We compare our two algorithms, TS and LCB, to a vanilla BO based on expected improvement. For both TS and LCB, we use the tree-search partition; LCB is run with a fixed exploration parameter. For all methods, an initial set of five experiments is generated by a space-filling design, followed by 20 iterations. Each strategy is run 10 times with different initial conditions. Performance is measured in terms of cumulative regret and reported in Figure 7.
All methods are implemented using GPflow (Matthews et al., 2017), and all GPs use the same Matérn-3/2 kernel. For the ordinal approach, the optimisation problem of Section 2.2 is solved using Adam (Kingma and Ba, 2015).
We can see that on the challenging functions, while vanilla BO struggles to optimise the objective, both of our algorithms significantly outperform it, in particular during the first steps, despite the difficulty of the problem. Here, LCB performed better than TS. On the classical Hartman function, our approach is only marginally outperformed by standard BO.
Figure 4 shows the state of a single run of our approach with LCB (after 8 acquisition points added to an initial set of 5 points). We see that the latent model accounts for the discontinuity by setting a large distance between the points just before and just after it. As a result, our algorithm is capable of exploring the discontinuity region, while a vanilla BO algorithm would have rejected it rapidly. One may also notice that a large region (left) is considerably reduced in the latent space. Although this appears beneficial on this run, it also indicates that our algorithm might be less global than vanilla BO. This issue could be addressed by mixing our approach with metric-based strategies, for instance by sampling from time to time inside the largest cell in the original space.
Figure 5 shows a single run of our approach with LCB (with 30 acquisition points). One can observe that most observations form clusters around local and global optima, while some regions are largely ignored.
6 Concluding comments
In this work, we proposed a novel BO approach that does not consider the values of the inputs or outputs, but only their respective orderings. Our algorithm is based on a variational GP model and a set of ordinal warpings. We showed how such a model can be combined with either LCB or Thompson sampling strategies. We proved an upper bound on the regret of confidence-bound-based approaches in the warped space, and demonstrated the capability of our algorithm on challenging toy problems.
Future work may include the analysis of Thompson sampling and a more comprehensive experimental comparison to the existing approaches over a wider range of ill-conditioned objective functions.
Appendix A.1: Proof of Lemma 1
From Assumption 1, for all , with probability at least ,
(19) 
Let . We have that with probability greater than ,
(20) 
Assumption 1 indicates that for all and . Let be a discretisation of with size (with ) at iteration such that, for all
(21) 
where is the closest point to in . This discretisation is possible by uniformly spreading points along each coordinate of .
Replacing and using a union bound over all and , we have with probability at least , for all and
(24) 
Now we have all the material needed to prove the lemma. By definition of the acquisition rule:
Appendix A.2: Proof of Lemma 2
The following equation is a direct result of Lemma 5.4 in Srinivas et al. (2010) on :
Summing both sides over from to we get
(28)  
The first term on the right hand side of (28) captures the change in the information gain by the variation of the observation points. The second term is a scaled information gain.
Let and denote the covariance matrices of and , respectively. Since the maximum displacement of the observation points from iteration to iteration is bounded by the constant of Eq. 13, we have
(29) 
Appendix A.3: Experiments test functions and regret curves
References

Chowdhury, S. R. and Gopalan, A. (2017). On kernelized multi-armed bandits. In International Conference on Machine Learning, pages 844–853.
Chu, W. and Ghahramani, Z. (2005). Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019–1041.
Dixon, L. C. W. (1978). The global optimization problem: an introduction. Towards Global Optimization, 2:1–15.
Fox, E. and Dunson, D. B. (2012). Multiresolution Gaussian processes. In Advances in Neural Information Processing Systems, pages 737–745.
González, J., Dai, Z., Damianou, A., and Lawrence, N. D. (2017). Preferential Bayesian optimization. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1282–1291. JMLR.org.
Gramacy, R. B. and Lee, H. K. H. (2008). Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103(483):1119–1130.
Hansen, N., Auger, A., Mersmann, O., Tusar, T., and Brockhoff, D. (2016). COCO: A platform for comparing continuous optimizers in a black-box setting. arXiv preprint arXiv:1603.08785.
Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. In Uncertainty in Artificial Intelligence.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research.
Javidi, T. and Shekhar, S. (2018). Gaussian process bandits with adaptive discretization. Electronic Journal of Statistics, 12(2):3829–3874.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization.
Marmin, S., Ginsbourger, D., Baccou, J., and Liandrat, J. (2018). Warped Gaussian processes and derivative-based sequential designs for functions with heterogeneous variations. SIAM/ASA Journal on Uncertainty Quantification, 6(3):991–1018.
Matthews, A. G. d. G., Van Der Wilk, M., Nickson, T., Fujii, K., Boukouvalas, A., León-Villagrá, P., Ghahramani, Z., and Hensman, J. (2017). GPflow: A Gaussian process library using TensorFlow. The Journal of Machine Learning Research, 18(1):1299–1304.
Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2.
Pronzato, L. and Müller, W. G. (2012). Design of computer experiments: space filling and beyond. Statistics and Computing, 22(3):681–701.
Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and De Freitas, N. (2015). Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175.
Snelson, E., Ghahramani, Z., and Rasmussen, C. E. (2004). Warped Gaussian processes. In Advances in Neural Information Processing Systems, pages 337–344.
Snoek, J., Swersky, K., Zemel, R., and Adams, R. (2014). Input warping for Bayesian optimization of non-stationary functions. In International Conference on Machine Learning, pages 1674–1682.
Srinivas, N., Krause, A., Kakade, S., and Seeger, M. (2010). Gaussian process optimization in the bandit setting: no regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, pages 1015–1022. Omnipress.
Titsias, M. (2009). Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics.
Williams, C. K. and Rasmussen, C. E. (2006). Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.