Graduated Optimization of Black-Box Functions

06/04/2019 · by Weijia Shao et al., Berlin Institute of Technology (Technische Universität Berlin)

Motivated by the problem of tuning hyperparameters in machine learning, we present a new approach for gradually and adaptively optimizing an unknown function using estimated gradients. We validate the empirical performance of the proposed idea on both low and high dimensional problems. The experimental results demonstrate the advantages of our approach for tuning high dimensional hyperparameters in machine learning.




1 Introduction

Machine learning applications and the design of complex systems usually involve a large number of free parameters. The evaluation of a single set of parameters requires computationally expensive numerical simulations and cross-validations, while the choice of parameters influences the performance of a system dramatically. In the machine learning community, this problem is usually referred to as hyperparameter optimization (HPO) (Hutter et al., 2011) and has been extensively studied in recent years, since early approaches based on grid search become impractical for high dimensional hyperparameter spaces (Franceschi et al., 2017).

In this paper, we focus on continuous parameters, for which gradient based methods (Maclaurin et al., 2015; Luketina et al., 2016; Pedregosa, 2016; Franceschi et al., 2017; Wu et al., 2017) have attracted attention for their fast convergence. In machine learning applications, the objective of HPO is usually to optimize a validation function evaluated at a stationary point of the training objective, and the gradient of the validation function can be derived from the iterative training procedure (Maclaurin et al., 2015; Luketina et al., 2016). The exact computation of this gradient is the major bottleneck, since it is computationally inefficient and has high space requirements. Pedregosa (2016) proposed the idea of approximating the gradient based on the stationarity condition of the training procedure, and managed to efficiently optimize a large number of hyperparameters.

However, there are still several issues to be addressed. First of all, the gradient approximation proposed by Pedregosa (2016) relies on regularity conditions such as the stationarity condition for a minimizer of the training loss, which obviously does not hold if we apply early stopping. Furthermore, the approximation is based on the assumption that the objective function is smooth, which is often too strong in practice. Finally, the algorithms proposed by Pedregosa (2016), Maclaurin et al. (2015) and Franceschi et al. (2017) require hyperparameters of their own, such as the learning rate, which become hyper-hyperparameters in HPO. In their experiments, those hyperparameters are manually adjusted. In practice, procedures that adapt to them automatically are needed.

In this paper, we formalize HPO as the problem of optimizing the output value of an unknown function. We propose an alternative approach based on the two-point estimation of the gradient (Nesterov and Spokoiny, 2017), graduated optimization (Hazan et al., 2016) and scale-free online learning (Orabona and Pál, 2018). Compared to (Pedregosa, 2016), we do not assume smoothness or any regularity conditions of the objective function. To avoid introducing further hyper-hyperparameters, we apply a simple online gradient descent with an adaptive learning rate. We compare our algorithm against state-of-the-art global optimization algorithms on machine learning problems. The rest of the paper is organized as follows. In Section 2, we introduce the problem setting, describe our idea of estimating the gradient and propose the algorithm. In Section 3, we present the empirical performance of our algorithm. Section 4 concludes our work with some future research directions.

2 Problem Setting and Algorithm

Let f : X → ℝ be a function defined on a compact and convex set X ⊂ ℝ^d. Finding the global minimum of f is challenging in general due to non-convexity, unknown smoothness and possibly noisy evaluations of the function. In the context of machine learning, f returns a score, such as the cross-validation error, for a given configuration of hyperparameters selected from X. We follow the standard procedure from the literature on global optimization (Hutter et al., 2011; Bubeck et al., 2011; Munos, 2011; Malherbe and Vayatis, 2017), which attempts to minimize the function by sequentially exploring the space using a finite budget of evaluations. Formally, we wish to find a sequence x_1, …, x_T, where each point x_t depends on the previous evaluations (x_1, f(x_1)), …, (x_{t−1}, f(x_{t−1})), such that the last explored point returns the lowest possible value.
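As a concrete illustration, the budgeted sequential exploration protocol can be sketched as follows (all names are hypothetical; pure random search stands in for the point-selection rule, which in general may depend on the evaluation history):

```python
import random

def minimize_with_budget(f, next_point, budget):
    """Sequentially explore the space with a finite budget of evaluations,
    tracking the best configuration seen so far."""
    best_x, best_val = None, float("inf")
    history = []
    for _ in range(budget):
        x = next_point(history)      # the next point may depend on past evaluations
        val = f(x)
        history.append((x, val))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Example: pure random search on a 1-d quadratic over [-1, 1]
random.seed(0)
x_best, v_best = minimize_with_budget(
    lambda x: (x - 0.3) ** 2,
    lambda hist: random.uniform(-1.0, 1.0),
    budget=200,
)
```

Any of the global optimization methods compared later fits this interface; they differ only in how `next_point` uses the history.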

Global optimization methods usually do not, or cannot, leverage gradient information, which has proven useful for tuning hyperparameters in machine learning (Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2017). However, deriving the gradients in those works is expensive, requires strong assumptions on f, and is not applicable to black-box problems. In contrast, our idea works in more general settings. Assuming f is L-Lipschitz, its Gaussian approximation (Nesterov and Spokoiny, 2017) is defined as

    f_σ(x) = E_u[f(x + σu)],

where u is a standard Gaussian random vector and σ > 0 controls the amount of smoothing. f_σ is (L√d/σ)-smooth (Nesterov and Spokoiny, 2017) with bounded bias |f_σ(x) − f(x)| ≤ σL√d (Hazan et al., 2016). The gradient of f_σ can be estimated using the two-point feedback (Nesterov and Spokoiny, 2017)

    g_σ(x) = ((f(x + σu) − f(x)) / σ) u.

Arguably, one can estimate the gradient with only one expensive function call (Hazan et al., 2016)

    g_σ(x) = (1/σ) f(x + σu) u,

or use other two-point estimators with lower variance, such as

    g_σ(x) = ((f(x + σu) − f(x − σu)) / (2σ)) u

(Shamir, 2017). Our choice is more practical, since the evaluation of f(x_t) at each step also lets us trace the best configuration of the parameters evaluated so far.
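A minimal sketch of the two-point estimator above (function names are hypothetical); averaging many independent estimates at a fixed point recovers the gradient of a quadratic test function:

```python
import random

def two_point_gradient(f, x, sigma):
    """Estimate the gradient of the Gaussian smoothing f_sigma at x via the
    two-point feedback ((f(x + sigma*u) - f(x)) / sigma) * u."""
    u = [random.gauss(0.0, 1.0) for _ in x]
    fx = f(x)  # this evaluation also lets us track the best point seen so far
    x_pert = [xi + sigma * ui for xi, ui in zip(x, u)]
    diff = (f(x_pert) - fx) / sigma
    return [diff * ui for ui in u], fx

# For f(x) = ||x||^2 the true gradient at x is 2x; the estimator is unbiased
# for the smoothed gradient, so the empirical mean approaches (1.0, -0.5).
random.seed(1)
x = [0.5, -0.25]
grads = [two_point_gradient(lambda z: sum(zi * zi for zi in z), x, 1e-4)[0]
         for _ in range(20000)]
avg = [sum(g[i] for g in grads) / len(grads) for i in range(2)]
```

The variance of a single estimate is substantial (it grows with the dimension), which is why the algorithm below relies on an adaptive, scale-free step size rather than a hand-tuned learning rate.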

1: input: budget T, number of epochs M
2: initialise x_1 ∈ X, σ_1 > 0 and epoch lengths T_1, …, T_M such that T_1 + ⋯ + T_M = T
3: for m = 1 to M do
4:     for t = 1 to T_m do
5:         Sample u_t from N(0, I_d)
6:         g_t ← ((f(x_t + σ_m u_t) − f(x_t)) / σ_m) u_t
7:         for i = 1 to d do    ▷ scale-free online gradient descent
8:             η_i ← 1 / sqrt(Σ_{s ≤ t} g_{s,i}²)
9:             x_{t+1,i} ← x_{t,i} − η_i g_{t,i}    ▷ simply do nothing if the learning rate is undefined
10:         end for
11:         x_{t+1} ← Π_X(x_{t+1})    ▷ project it to the decision space
12:     end for
13:     σ_{m+1} ← σ_m / 2
14: end for
15: return the best point evaluated
Algorithm 1 GradOpt

Algorithm 1 describes our idea. We divide the budget into epochs. In each epoch, we use online gradient descent (Orabona and Pál, 2018) with an adaptive learning rate to optimise the smoothed function f_{σ_m}. If the smoothed functions are locally strongly convex and the global optima of the smoothed functions of successive epochs are close, convergence to a point close to the global optimum can be ensured (Hazan et al., 2016) for a large enough budget. Otherwise the algorithm converges to a stationary point of f_σ as we gradually decrease σ (Nesterov and Spokoiny, 2017). Compared to the standard gradient descent method used in (Pedregosa, 2016), our algorithm does not assume smoothness of f, and does not derive the learning rate from the smoothness constant, which is unknown in most cases. Unlike the methods employed in (Maclaurin et al., 2015; Franceschi et al., 2017), we do not use a momentum term in the gradient descent, which avoids additional hyperparameters.
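The following sketch mirrors the structure of Algorithm 1 under stated assumptions (equal epoch lengths, halving σ between epochs, per-coordinate scale-free steps with the box diameter as the scale, and clipping as the projection onto a box-shaped decision space); all names are hypothetical, not the authors' implementation:

```python
import math
import random

def grad_opt(f, lo, hi, dim, epochs=4, iters_per_epoch=150, sigma0=0.5):
    """Graduated optimization with two-point estimated gradients: each epoch
    runs online gradient descent with an adaptive (scale-free) learning rate
    on the sigma-smoothed objective, then halves sigma."""
    x = [(lo + hi) / 2.0] * dim
    best_x, best_val = list(x), f(x)
    sq_sums = [0.0] * dim            # running sums of squared gradient coordinates
    sigma = sigma0
    diam = hi - lo
    for _ in range(epochs):
        for _ in range(iters_per_epoch):
            u = [random.gauss(0.0, 1.0) for _ in range(dim)]
            fx = f(x)                # also used to trace the best point so far
            if fx < best_val:
                best_x, best_val = list(x), fx
            x_pert = [xi + sigma * ui for xi, ui in zip(x, u)]
            g_scale = (f(x_pert) - fx) / sigma
            for i in range(dim):     # scale-free online gradient descent
                gi = g_scale * u[i]
                sq_sums[i] += gi * gi
                if sq_sums[i] > 0.0:  # do nothing while the learning rate is undefined
                    x[i] -= diam / math.sqrt(sq_sums[i]) * gi
                x[i] = min(hi, max(lo, x[i]))  # project back onto the decision space
        sigma /= 2.0                 # sharpen the smoothing between epochs
    return best_x, best_val

# Example: minimize a shifted quadratic over the box [-1, 1]^2.
random.seed(2)
bx, bv = grad_opt(lambda z: sum((zi - 0.2) ** 2 for zi in z), -1.0, 1.0, 2)
```

Note that no learning rate is supplied: the step size is derived entirely from the observed gradient magnitudes, which is the point of using a scale-free update here.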

3 Evaluation

In this section, we compare the empirical performance of GradOpt with the following global optimisation algorithms:

  • PRS. The Pure Random Search method samples parameters uniformly at random from the search space.

  • HOO. A tree-based global optimisation method for Hölder-continuous functions (Bubeck et al., 2011).

  • AdaLipo. A sequential strategy for optimising an unknown Lipschitz-continuous function while adaptively estimating its Lipschitz constant (Malherbe and Vayatis, 2017).

  • AdaLipoTR. A practical method combining AdaLipo (Malherbe and Vayatis, 2017) for global search with a trust-region method for local refinement. (The implementation of both AdaLipo and AdaLipoTR is taken from the dlib library.)

We adopt the experimental setting of (Malherbe et al., 2016; Malherbe and Vayatis, 2017) and consider the problems of tuning both low dimensional and high dimensional hyperparameters in machine learning. For the low dimensional case, we tune the ℓ2-regularizer λ and the bandwidth γ of a Gaussian kernel ridge regression. The objective is to maximize the 10-fold cross-validation score. More specifically, we split the dataset {(x_i, y_i) : i = 1, …, n} into 10 folds indexed by I_1, …, I_10 and consider the following objective function

    f(λ, γ) = −(1/10) Σ_{k=1}^{10} (1/|I_k|) Σ_{i ∈ I_k} (y_i − g_k(x_i))²    (1)

subject to

    g_k = argmin_{g ∈ H_γ} Σ_{i ∉ I_k} (y_i − g(x_i))² + λ ‖g‖²_{H_γ},

where H_γ denotes the Gaussian RKHS of bandwidth γ equipped with the norm ‖·‖_{H_γ}. The goal is to search for the optimal λ and γ from a given compact set of candidate values.
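A sketch of objective (1) on synthetic data, using the standard closed-form kernel ridge solution per fold (helper names and the data are hypothetical; the negative mean cross-validation error stands in for the CV score):

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def cv_objective(lam, bandwidth, X, y, n_folds=10):
    """Negative mean 10-fold cross-validated MSE of Gaussian kernel ridge
    regression; higher is better, matching the maximization convention."""
    folds = np.array_split(np.arange(len(X)), n_folds)
    errs = []
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        K = gaussian_kernel(X[tr], X[tr], bandwidth)
        # closed-form ridge solution: (K + lam*I) alpha = y
        alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])
        pred = gaussian_kernel(X[te], X[tr], bandwidth) @ alpha
        errs.append(((pred - y[te]) ** 2).mean())
    return -float(np.mean(errs))

# A well-chosen (lam, bandwidth) should score higher than a badly chosen one.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(100)
good = cv_objective(1e-3, 0.5, X, y)
bad = cv_objective(1e3, 0.5, X, y)
```

In the experiments this objective is what the black-box optimizers see: a single evaluation costs ten kernel ridge fits.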

To compare the performance for high dimensional hyperparameters, we consider the task of data cleaning for kernel ridge regression, for which we assign a weight w_i ∈ [0, 1] to each data sample. Then we tune the hyperparameters and weights, i.e. (λ, γ, w_1, …, w_n), to maximize (1) subject to

    g_k = argmin_{g ∈ H_γ} Σ_{i ∉ I_k} w_i (y_i − g(x_i))² + λ ‖g‖²_{H_γ}.

The decision space is [0, 1] for each weight w_i, while λ and γ range over the same sets as in the low dimensional case. Note that, for the convenience of applying HOO and AdaLipo, all problems are implemented as maximization problems. To apply GradOpt we simply minimise the negative of the objective function.
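The weighted fit underlying this data-cleaning variant can be sketched as follows: setting the gradient of the weighted objective to zero gives the linear system (W K + λI)α = W y, so a weight of zero removes a sample from the fit entirely (names and data are hypothetical):

```python
import numpy as np

def gauss_kernel(A, B, bw):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def weighted_krr_predict(Xte, Xtr, ytr, w, lam, bw):
    """Weighted kernel ridge: minimize sum_i w_i (g(x_i) - y_i)^2 + lam ||g||^2.
    Solving (W K + lam I) alpha = W y; w_i = 0 drops sample i from the fit."""
    K = gauss_kernel(Xtr, Xtr, bw)
    W = np.diag(w)
    alpha = np.linalg.solve(W @ K + lam * np.eye(len(Xtr)), w * ytr)
    return gauss_kernel(Xte, Xtr, bw) @ alpha

# Down-weighting deliberately corrupted samples improves held-out error.
rng = np.random.default_rng(1)
Xtr = rng.uniform(-1, 1, (60, 1))
ytr = np.sin(3 * Xtr[:, 0])
ytr[:5] += 5.0                        # corrupt five labels
Xte = np.linspace(-1, 1, 50)[:, None]
yte = np.sin(3 * Xte[:, 0])
w_all = np.ones(60)
w_clean = w_all.copy()
w_clean[:5] = 0.0                     # "clean" the corrupted samples
mse_all = float(((weighted_krr_predict(Xte, Xtr, ytr, w_all, 1e-3, 0.4) - yte) ** 2).mean())
mse_clean = float(((weighted_krr_predict(Xte, Xtr, ytr, w_clean, 1e-3, 0.4) - yte) ** 2).mean())
```

With n samples this turns the search space into n + 2 dimensions, which is what makes the task a genuinely high dimensional HPO problem.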

For each of the problems, we perform repeated runs of the algorithms with a fixed budget of function evaluations. Then, for each target value in {90%, 95%, 99%}, we record the number of function evaluations required to reach the target fraction of the best score found by any of the algorithms.

[Table: for each target value (90%, 95%, 99%), the number of function evaluations needed by PRS, HOO, AdaLipo, AdaLipoTR and GradOpt on the Housing, Yacht, Slump, BreastCancer and AutoMPG datasets; the numeric entries were lost in extraction.]

Table 1: Results of the numerical experiments on hyperparameter tuning for Gaussian kernel ridge regression

[Table: for each target value (90%, 95%, 99%), the number of function evaluations needed by PRS, HOO, AdaLipo, AdaLipoTR and GradOpt on the HousingHD, YachtHD, SlumpHD, BreastCancerHD and AutoMPGHD datasets; the numeric entries were lost in extraction.]

Table 2: Results of the numerical experiments on high dimensional hyperparameter tuning for Gaussian kernel ridge regression (additionally tuning the training weights)

Tables 1 and 2 present the experimental results; the source code for the experiments is publicly available. Despite the potentially suboptimal stationary points, GradOpt returns a point close to the optimal solutions found by the other global optimisation algorithms in all experiments. For the low dimensional problems, the combined approach AdaLipoTR outperforms the other algorithms. However, GradOpt takes only a few more steps than AdaLipoTR and outperforms the algorithms relying purely on global search on all datasets except Yacht. The experimental results for the high dimensional tasks demonstrate the advantage of GradOpt, which obtains most of the best scores for all target values. The other global optimization algorithms do not scale well to the high dimensional problems, which is also suggested by their theoretical analysis (Bubeck et al., 2011; Malherbe and Vayatis, 2017).

4 Discussion and Future Work

We presented an alternative approach for optimizing an unknown, potentially non-convex and non-smooth function, based on estimated gradients, an adaptive learning rate and graduated optimization. As suggested by the theoretical analyses of previous work, our approach converges to a stationary point in general and to a global optimum if certain conditions are fulfilled. The experimental results show that our approach indeed finds solutions comparable to those of global methods. For tuning high dimensional hyperparameters, it outperforms the state-of-the-art global optimization algorithms in most of the experiments.

We consider this work a first step in applying graduated optimization to the search for optima of unknown functions. It can be extended and improved in several ways. Firstly, the convergence of our approach is suggested by previous work and shown empirically, yet its actual theoretical performance is unknown. The most important future direction would be to perform a theoretical analysis in an appropriate framework. Furthermore, we assign an equal budget of evaluations to each epoch in this work, which may not be the best option. A strategy for allocating the budget with a theoretical guarantee would be needed in practice. Finally, the experimental results show that the combined method outperforms the rest on the low dimensional problems. To apply our approach to tuning hyperparameters in machine learning, we could likewise combine it with global optimization algorithms. However, this must be thoroughly evaluated on diverse datasets with different models.

This work is supported in part by the German Federal Ministry of Education and Research (BMBF) under the grant number 01IS16046. We would like to thank Dr. Brijnesh Jain for his valuable feedback.


References

  • Bubeck et al. (2011) Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12(May):1655–1695, 2011.
  • Franceschi et al. (2017) Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In Proceedings of the 34th International Conference on Machine Learning, pages 1165–1173. JMLR.org, 2017.
  • Hazan et al. (2016) Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic non-convex problems. In International conference on machine learning, pages 1833–1841, 2016.
  • Hutter et al. (2011) Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 507–523. Springer, 2011.
  • Jamil and Yang (2013) Momin Jamil and Xin-She Yang. A literature survey of benchmark functions for global optimisation problems. Int. J. Mathematical Modelling and Numerical Optimisation, 4(2), 2013.
  • Luketina et al. (2016) Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In International Conference on Machine Learning, pages 2952–2960, 2016.
  • Maclaurin et al. (2015) Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
  • Malherbe and Vayatis (2017) Cédric Malherbe and Nicolas Vayatis. Global optimization of Lipschitz functions. In Proceedings of the 34th International Conference on Machine Learning, pages 2314–2323. JMLR.org, 2017.
  • Malherbe et al. (2016) Cedric Malherbe, Emile Contal, and Nicolas Vayatis. A ranking approach to global optimization. In International Conference on Machine Learning, pages 1539–1547, 2016.
  • Munos (2011) Rémi Munos. Optimistic optimization of a deterministic function without the knowledge of its smoothness. In Advances in neural information processing systems, pages 783–791, 2011.
  • Nesterov and Spokoiny (2017) Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527–566, 2017.
  • Orabona and Pál (2018) Francesco Orabona and Dávid Pál. Scale-free online learning. Theoretical Computer Science, 716:50–69, 2018.
  • Pedregosa (2016) Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
  • Shamir (2017) Ohad Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. Journal of Machine Learning Research, 18(52):1–11, 2017.
  • Wu et al. (2017) Jian Wu, Matthias Poloczek, Andrew G Wilson, and Peter Frazier. Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, pages 5267–5278, 2017.