Bayesian optimization has been successfully applied to many global optimization problems (Jones et al., 1998; Martinez-Cantin et al., 2007; Hutter et al., 2011; Snoek et al., 2012). Typically, it makes few assumptions about the objective function, treating it as a black box. When prior knowledge is available, however, it may be possible to improve the efficiency of the search. In particular, function monotonicity has been successfully exploited to improve statistical modeling for the analysis of computer experiments (e.g., Golchi et al., 2015). The methods proposed here employ such monotonicity information for problems motivated by machine learning (ML), where the performance of an ML model is a complex function of its hyperparameters, which have to be tuned. We propose a sequential method that adapts the Bayesian optimization framework to an objective function that can be decomposed into a sum of functions with monotonicity constraints, and exploits that structure. We analyze the method's applicability to ML hyperparameter problems and provide positive experimental results.
Our algorithm incorporates monotonicity information in the Gaussian process (GP) model underlying Bayesian optimization. Bayesian optimization sequentially adds new objective function evaluations based on a probabilistic emulation of the objective trained using all current evaluations. At each iteration the emulator guides the choice of the next evaluation in a way that balances global and local search. A common choice for the statistical emulator is GP regression. In our work, instead of modeling the objective function directly by a GP, we decompose the function into components emulated by approximately monotonic GPs.
The rest of this paper is organized as follows. In section 2 we provide a brief overview of Bayesian optimization and GPs with monotonicity constraints, and discuss related work on shape-constrained hyperparameter optimization. Section 3 illustrates the algorithm's basic ideas of objective decomposition and monotonicity with a simple example. In section 4 we argue that these ideas apply to ML hyperparameter tuning, and compare the proposed algorithm with a standard Bayesian optimization approach. Finally, in section 5 we make some concluding remarks and discuss future work.
2 Related Work
2.1 Bayesian Optimization
Bayesian optimization is a sequential model-based algorithm. At each iteration it builds a probabilistic model of the objective function based on previous evaluations, and chooses the location of the next function evaluation based on the model. Unlike local search strategies, it tends to use more (or, for black-box problems, all) available information to determine the next function evaluation. This makes Bayesian optimization particularly useful when only a few objective function evaluations are practical and no properties (e.g., convexity) of the objective function are known.
Algorithm 1 describes a typical procedure for Bayesian optimization. The algorithm takes as input the region of interest $\mathcal{X}$, the objective function $f$, a set of initial evaluation points from $\mathcal{X}$, a model-fitting procedure $\mathcal{M}$, and an acquisition function $\alpha$, and outputs a list of objective function evaluations. First it initializes the objective function evaluation set at the initial points. At each iterative step it fits a model via $\mathcal{M}$ to the current evaluations, finds the next evaluation point by optimizing $\alpha$ under the fitted model, evaluates $f$ there, and appends the result to the evaluation set. Common choices of models and acquisition functions are surveyed by Brochu et al. (2010). In this paper we focus on GP regression for fitting probabilistic models, and adopt the expected improvement acquisition function.
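The loop just described can be sketched in a few dozen lines. The following is a minimal illustration, not the paper's implementation: the hand-rolled GP with fixed kernel hyperparameters, the grid-based acquisition maximization, and the toy objective are all simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def sq_exp_kernel(a, b, scale=0.3, var=1.0):
    """Isotropic squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at x_test."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_test, x_train)
    Kss = sq_exp_kernel(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 1e-12)

def expected_improvement(mean, var, best):
    """EI for maximization, given posterior moments and the incumbent."""
    std = np.sqrt(var)
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

def bayes_opt(f, grid, x0, n_iter=10):
    """Sequentially evaluate f, refit the GP, and maximize EI over the grid."""
    x, y = list(x0), [f(v) for v in x0]
    for _ in range(n_iter):
        mean, var = gp_posterior(np.array(x), np.array(y), grid)
        ei = expected_improvement(mean, var, max(y))
        x_next = grid[np.argmax(ei)]
        x.append(x_next)
        y.append(f(x_next))
    return max(y)

grid = np.linspace(0.0, 1.0, 101)
best = bayes_opt(lambda v: -(v - 0.3) ** 2, grid, x0=[0.0, 1.0], n_iter=8)
```

On this concave toy objective, a handful of EI-guided evaluations suffice to approach the maximizer at 0.3.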
2.2 Gaussian Processes with Monotonicity Constraints
As discussed in section 2.1, GP regression builds a probabilistic model of a function based on its point evaluations. It is a robust model that can be applied to a large family of functions, including non-smooth, discontinuous, or discrete functions. In particular, it assumes the target function is sampled from a GP, fits the parameters that describe the GP, and performs inference via the conditional (posterior) distribution. For more details on GP regression see Rasmussen & Williams (2006).
When prior knowledge of the target function is available, the performance of the model can be improved. In this paper, we study functions that can be decomposed as a sum of monotonic functions, and model the decomposed functions separately. There are various approaches to GP modeling subject to (approximate) constraints (Riihimäki & Vehtari, 2010; Wang & Berger, 2016; Lin & Dunson, 2014). We employ Riihimäki & Vehtari (2010)'s implementation because it is readily accessible. Alternative methods will be discussed in section 5.
Riihimäki & Vehtari (2010) enforce monotonicity by introducing virtual derivative observations. They introduce a set of virtual points and assume that positive (or negative) derivatives are observed at these points. For definiteness, take a target function $f$ that increases with respect to the $d$-th input coordinate. Conditional on the true derivative at a virtual point $x^v$ in the $d$-th direction, the prior probability of the event that the observed (noisy) derivative sign $m$ is positive equals
$$p\left(m = 1 \,\middle|\, \frac{\partial f}{\partial x_d}(x^v)\right) = \Phi\!\left(\frac{1}{\nu}\,\frac{\partial f}{\partial x_d}(x^v)\right),$$
where $\Phi$ is the standard normal cumulative distribution function and $\nu$ is a constant scalar controlling how strictly the constraint is enforced. For the experiments of this paper, $\nu$ is set to a small fixed value. For reasonably smooth GPs, it is sufficient to add positive (or negative) derivative observations at a modest number of virtual points to incorporate the monotonicity information.
With this prior probability, and using the fact that the derivative of a GP is also a GP, a conditional GP can be approximated by expectation propagation. The parameters can then be learned by maximum a posteriori estimation, and inference is performed as conditional density estimation.
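The probit likelihood above is simple to write down directly. In this sketch the default value of $\nu$ is an illustrative assumption, not the value used in the paper's experiments:

```python
from scipy.stats import norm

def monotonicity_likelihood(latent_derivative, nu=1e-6):
    """P(observed derivative sign is positive | latent derivative value),
    a probit link as in Riihimäki & Vehtari (2010). A small nu makes the
    constraint behave almost like a hard step function."""
    return norm.cdf(latent_derivative / nu)
```

With small $\nu$, a clearly positive latent derivative gives probability near 1, a clearly negative one near 0, and a zero derivative exactly 1/2.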
2.3 Bayesian Optimization with Shape Constraints
Shape constraints on the objective function have also been studied in the context of Bayesian optimization (Jauch & Peña, 2016). Instead of decomposing the objective function into a sum of monotonic functions, Jauch & Peña (2016) impose constraints, including monotonicity, convexity, and quasiconvexity, directly on the objective function. We believe these constraints are of limited use for the following reasons. For monotonic objective functions, the region of interest can be reduced, and monotonicity provides no further information within the reduced region. For convex objective functions, more efficient specialized optimization algorithms exist. For quasiconvex functions, the paper constrains the objective by enforcing different monotonicities on each side of critical points; however, it does not explain how to find the critical points, and given the critical points the optimization problem reduces to a much easier one. For all three constraints, the class of admissible objective functions is very limited, and to the best of our knowledge such properties do not hold for ML hyperparameter tuning objectives. On the experimental side, their results demonstrate that the predicted mean and variance of the objective function improve over those of the standard method, but results on the attained objective function values are not reported.
3 Problem Formulation and Proposed Algorithm
In this section we define the problem of interest and describe our algorithm rigorously. An example of applying our algorithm to an artificially designed problem is provided.
3.1 Problem Definition and Algorithm Description
Our focus is an objective function $f$ that can be written as a sum of component functions,
$$f(x) = \sum_{i=1}^{k} f_i(x),$$
with monotonicity constraints of the form
$$\frac{\partial f_i}{\partial x_j}(x) \ge 0 \quad \text{or} \quad \frac{\partial f_i}{\partial x_j}(x) \le 0$$
for some pairs $(i, j)$ and all feasible $x$. Whether each constrained component is monotonically non-decreasing or non-increasing with respect to the corresponding coordinate must be known. Our algorithm needs to observe each component function $f_i$ at chosen values in the domain. We assume such observations are contaminated by Gaussian noise.
Our algorithm works in the same way as the Bayesian optimization framework described in section 2.1, with an adaptation of the modeling step. Instead of modeling $f$ directly, we model each $f_i$ separately using independent monotonicity-constrained GP regressions. The predictive mean of $f$ at any trial point is then the sum of the predictive means of the $f_i$, and because of the assumed independence of the component GPs, the predictive variance of the sum is similarly the sum of the component variances. Once the predictive mean and variance of $f$ have been computed in this way, the expected improvement of a new evaluation at any point can be computed in the standard way.
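The combination step can be sketched as follows. The posterior moments of the two components on a three-point grid are made-up numbers purely for illustration:

```python
import numpy as np
from scipy.stats import norm

def combine_components(means, variances):
    """Posterior moments of f = sum_i f_i from independent component GPs:
    means add, and (by independence) variances add too."""
    return np.sum(means, axis=0), np.sum(variances, axis=0)

def expected_improvement(mean, var, best):
    """EI for maximization, computed from the combined posterior moments."""
    std = np.sqrt(np.maximum(var, 1e-12))
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

# Hypothetical posterior moments of two components on a three-point grid.
mean, var = combine_components(
    means=[np.array([0.1, 0.5, 0.2]), np.array([0.3, 0.1, 0.4])],
    variances=[np.array([0.01, 0.04, 0.02]), np.array([0.02, 0.01, 0.03])],
)
ei = expected_improvement(mean, var, best=0.55)
next_point = int(np.argmax(ei))  # index of the next evaluation on the grid
```

The acquisition then selects whichever grid point maximizes EI under the combined model.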
For all the experiments presented, each component GP is assumed to have zero mean, constant variance, and an isotropic squared-exponential correlation function, together with a monotonicity constraint where known. The GP model also assumes observations corrupted by Gaussian noise. The search starts from four initial evaluations, since there are three parameters to be estimated for each GP regression: the variance of the correlated part of the GP, the scale parameter of the squared-exponential correlation function, and the variance of the independent noise.
3.2 Illustrative Example
A simple example will demonstrate the proposed algorithm and show that it can be more efficient than a standard Bayesian optimization approach. To examine the roles of function decomposition and monotonicity, we also compare with a modification of our algorithm without monotonicity modeling. Thus, the methods compared are referred to as (1) standard, (2) decomposition and monotonicity (the proposed algorithm), and (3) decomposition without monotonicity.
The illustrative example is to maximize an objective built as a sum of basis functions with fixed centers. Adding a constant, which does not change the problem, we consider an equivalent objective function that can be shown to be the sum of two monotonic functions, one non-decreasing and one non-increasing.
The objective function and its decomposition are shown in Figure 1.
Figure 5 shows the result of applying the three algorithms 10 times, starting from four random initial objective function evaluations and adding a further eight evaluations. For this problem, we employ a uniform grid of 10 virtual points. We see that modeling the decomposed objective functions separately improves performance; however, there is little extra benefit from the monotonicity adaptation here.
4 Application to Machine Learning Hyperparameter Tuning
A large class of ML algorithms can be formalized as having three components: (1) a hypothesis set, (2) a loss function (usually closely related to a pre-defined performance measure), and (3) an optimization of the loss function over the hypothesis set with respect to the training set. The ideal utility of an ML algorithm is usually defined as the generalization error, which is the expected performance measure of the chosen hypothesis over the underlying distribution. Hyperparameters in ML often describe the hypothesis set and the loss function.
The generalization error in a real problem is usually inaccessible. Instead, the validation error is often used as an approximation for the purpose of hyperparameter tuning. Hence the goal of hyperparameter tuning in ML is to optimize the hyperparameters with respect to validation error. Rather than optimizing validation error directly, we propose to optimize the training error plus the difference between validation error and training error. We claim that these component functions can reasonably be assumed monotonic with respect to ML model complexity, and hence with respect to the hyperparameters.
For many ML algorithms, the training error (i.e., the performance measure of the chosen hypothesis on the training set) is monotonically decreasing as the hypothesis set grows. In many cases the training error is also monotonic with respect to the regularization parameters, which are part of the loss function. Hence monotonicity of training error with respect to hyperparameters is widespread.
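A quick empirical check of this claim uses nested hypothesis sets: least-squares polynomial fits of increasing degree. The synthetic data and the polynomial families here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(40)

def train_mse(degree):
    """Least-squares polynomial fit; a larger degree means a strictly
    larger hypothesis set, so the training residual cannot increase."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

errors = [train_mse(d) for d in range(1, 10)]
```

Because each hypothesis set contains the previous one, `errors` is non-increasing in the degree, in line with the monotonicity argument above.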
Analysis of the monotonicity of the difference given by validation error minus training error is much harder. To the best of our knowledge, there is no general result; even monotonicity in expectation need not hold. However, the idea that generalizability (the inverse of the difference between generalization and training error) decreases with effective model complexity is widely accepted in the ML community. That view leads to a heuristic argument that assuming monotonicity of validation error minus training error is reasonable.
Consider an ML algorithm that learns a mapping from $\mathcal{X}$ to $\mathcal{Y}$. Let the training set $D$ and a second dataset $D'$ consist of independent and identically distributed (i.i.d.) samples from a distribution $P$. Denote the learned mapping as $h_D$. Fix a performance measure $\ell$. The training error and generalization error are defined as
$$E_{\mathrm{train}} = \frac{1}{|D|} \sum_{(x, y) \in D} \ell\big(h_D(x), y\big), \qquad E_{\mathrm{gen}} = \mathbb{E}_{(x, y) \sim P}\,\ell\big(h_D(x), y\big).$$
The generalization error can be rewritten as the limiting empirical mean over an independent i.i.d. sample:
$$E_{\mathrm{gen}} = \lim_{|D'| \to \infty} \frac{1}{|D'|} \sum_{(x, y) \in D'} \ell\big(h_D(x), y\big).$$
Notice that the training error and generalization error then take the same form: both are means of $\ell(h_D(x), y)$, evaluated on the datasets $D$ and $D'$, respectively. Since the two datasets are i.i.d. samples from the same distribution and $h_D$ is optimized for dataset $D$, the training error should be more sensitive to the model complexity than the generalization error. Therefore monotonicity of the difference between generalization error and training error is plausible.
In manual ML hyperparameter tuning, experts implicitly use the same approach. They tune the hyperparameters using monotonicity assumptions and the relationships among the target error, the training error, and the difference between validation and training error (Goodfellow et al., 2016, pp. 424-425).
4.1 Elastic Net Regularized Linear Regression
Consider the elastic net regularized linear regression algorithm. Let $X$ and $y$ be the training inputs and targets. Elastic net regularized linear regression seeks a linear predictor with sparse coefficients, where the nonzero coefficients are also penalized in magnitude. It optimizes over all linear functionals a combination of goodness of fit, the squared 2-norm of the coefficients, and the 1-norm of the coefficients. Specifically, it finds $\beta$ to minimize the loss
$$L(\beta) = \frac{1}{2n} \|y - X\beta\|_2^2 + \lambda_2 \|\beta\|_2^2 + \lambda_1 \|\beta\|_1,$$
where $\lambda_1, \lambda_2 \ge 0$ are the regularization hyperparameters and $n$ is the number of training examples. We adopt the common performance measure of mean square error, scaled to be consistent with the first term of the loss.
Note that the loss function consists of two parts: the first term is the mean square error on the training set, which measures goodness of fit, while the remaining terms form the regularization. In the regularization, the 1-norm forces sparsity of $\beta$, and the 2-norm constrains the magnitude of $\beta$. The regularization term can be seen as representing prior knowledge of the underlying function and can prevent overfitting in many cases. For more details on elastic net regression, see Zou & Hastie (2005).
Given the validation inputs and targets, the goal of hyperparameter tuning for this problem is to minimize the validation error with respect to $\lambda_1$ and $\lambda_2$.
In the following experiment, all elements of the training and validation inputs are sampled independently from the standard normal distribution, with 200 examples in each set and 100 features per example. We define the underlying linear transformation by a vector with 50 zero elements; the remaining 50 nonzero elements are drawn from a normal distribution with mean zero and standard deviation 0.22. The training and validation targets follow independent normal distributions with means given by the corresponding linear transformations of the inputs and standard deviation 1. The parameters for generating the data are chosen to make the optimization problem nontrivial. The hyperparameter ranges of interest are fixed intervals for $\lambda_1$ and $\lambda_2$. Panel (c) of the figure shows the validation error as a function of $\lambda_1$ and $\lambda_2$. Since we describe Bayesian optimization for maximization and validation error is to be minimized, the negative error is shown.
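The data-generating process and the decomposed objective just described can be sketched as follows. Note that sklearn's `(alpha, l1_ratio)` parameterization of the elastic net stands in for the paper's $(\lambda_1, \lambda_2)$, and the seed and solver settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
# As described: 200 examples per set, 100 standard-normal features.
X_tr = rng.standard_normal((200, 100))
X_va = rng.standard_normal((200, 100))
# Underlying coefficients: 50 zeros, 50 draws from N(0, 0.22^2).
beta = np.concatenate([np.zeros(50), rng.normal(0.0, 0.22, 50)])
y_tr = X_tr @ beta + rng.standard_normal(200)  # noise sd 1
y_va = X_va @ beta + rng.standard_normal(200)

def decomposed_objective(alpha, l1_ratio):
    """Return (training error, validation error minus training error)."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=5000)
    model.fit(X_tr, y_tr)
    err_tr = np.mean((model.predict(X_tr) - y_tr) ** 2)
    err_va = np.mean((model.predict(X_va) - y_va) ** 2)
    return err_tr, err_va - err_tr

# The objective modeled by our algorithm is the sum of the two components.
train_err, gap = decomposed_objective(alpha=0.1, l1_ratio=0.5)
val_err = train_err + gap
```

Evaluating `decomposed_objective` at more and less regularized settings also exhibits the training-error monotonicity discussed above: heavier regularization gives a larger training error.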
As we described in the general case, to optimize the validation error, we decompose it as the sum of the training error and the difference between validation error and training error. From panels (a) and (b) we see that the two decomposed functions are (at least roughly) monotonic, which satisfies our assumptions.
Again, we compare three algorithms: standard, decomposition and monotonicity, and decomposition without monotonicity. Each is applied 10 times, starting from different initial function evaluations at four random points. A grid of virtual points is used in monotonicity modeling. From the experiment we observe that the standard and non-monotonic algorithms start by evaluating the functions at the corners, whereas the monotonicity modeling algorithm does not always do so. This is reasonable since the monotonicity modeling algorithm has more prior knowledge of the objective function.
Figure 14 shows the results from the three algorithms for the 10 trials. Since the objective function values are hard to distinguish near the optimum, we measure the performance of an algorithm as its rank among 100 evaluations at random points. Panels (a), (b), and (c) show the best ranking at each iteration for the three algorithms. We see that the performance of the algorithm employing decomposition and monotonicity statistically dominates the other two. For many trials of the standard and decomposition-only algorithms, 12 objective function evaluations do not lead to significant improvement. This indicates that these algorithms require more evaluations to learn an accurate model of the objective function. In contrast, the algorithm exploiting decomposition and monotonicity makes significant improvement in most of the trials. Panel (d) plots the mean square of the rankings in panels (a)-(c) over the 10 trials at each iteration. The mean square of a set of real-valued samples equals the sum of the squared mean and the variance of the samples. Therefore a smaller mean square value implies a smaller mean on average and/or less variation in rankings, which is what we desire for optimization.
4.2 Random Forest on Adult Dataset
In this subsection, we apply and compare the three algorithms when tuning random forest hyperparameters for the Adult dataset. The Adult dataset contains 32561 training examples. Each instance has 14 features describing a person and one binary variable, indicating a salary greater than 50K, which serves as the classification target (Lichman, 2013). Random forest classification has been successfully applied to this dataset (Fan et al., 2005).
Random forest classification is an ensemble method that constructs a multitude of decision trees at training time. To classify a new instance, it outputs the class that is the mode of the classes predicted by the individual decision trees (majority vote). To balance the goodness of fit and generalizability of each decision tree, we control the model complexity by tuning three hyperparameters: maximum number of split nodes, minimum leaf size, and number of features to use at each split. In this experiment, we adopt the TreeBagger implementation of random forests in the MATLAB Statistics and Machine Learning Toolbox. We generate random forests with 10 decision trees and set the other tuning parameters at their default values. The training set and validation set comprise fixed fractions of the dataset. Classification error is used as the hypothesis performance measure. The hyperparameter optimization problem is to find the best hyperparameter setting such that the validation error
is minimized, where the classifier is the random forest generated with the given maximum number of split nodes, minimum leaf size, and number of features to use at each split. The integer hyperparameters here and in the next experiment are rounded to the nearest integers before training.
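The tuning objective can be sketched as below. The sklearn `RandomForestClassifier` and its parameter names stand in for the paper's MATLAB TreeBagger setup, and the small synthetic classification task is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((600, 14))        # 14 features, as in Adult
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical binary target
X_tr, y_tr = X[:400], y[:400]
X_va, y_va = X[400:], y[400:]

def rf_validation_error(max_nodes, min_leaf, n_feats):
    """Validation classification error; continuous hyperparameter proposals
    are rounded to the nearest integers before building the forest."""
    clf = RandomForestClassifier(
        n_estimators=10,                      # 10 trees, as in the paper
        max_leaf_nodes=int(round(max_nodes)),
        min_samples_leaf=int(round(min_leaf)),
        max_features=int(round(n_feats)),
        random_state=0,
    )
    clf.fit(X_tr, y_tr)
    return float(np.mean(clf.predict(X_va) != y_va))

err = rf_validation_error(31.7, 2.2, 5.6)  # evaluated at (32, 2, 6)
```

Bayesian optimization then proposes continuous values for the three hyperparameters and observes the (noisy) validation error at the rounded setting.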
We apply the three algorithms to this problem. Virtual points in our algorithm are placed on a grid. Figure 19 shows the experimental results of running the three algorithms 10 times, each with four initial objective function evaluations. The performance distributions at termination are not clearly distinguishable. However, looking at both the validation error and the mean square measure, we see that our algorithm improves faster and always dominates the other methods.
4.3 Convolutional Neural Network on CIFAR-10 Dataset
Deep learning has enjoyed wide success in many areas over the past decade. One of the challenges in applying deep learning algorithms is selecting the relatively large number of hyperparameters. Some of them define the effective model complexity, and it is reasonable to assume monotonicity, as discussed in the general case. In this work, we tune three hyperparameters of a convolutional neural network (CNN): two regularization coefficients and an integer controlling the network size, on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). A systematic study of CNN image classification and its application to this dataset can be found in Krizhevsky et al. (2012).
We construct CNNs with one input layer, two convolutional layers, a fully connected layer, and a softmax output layer. Each CNN is trained by stochastic gradient descent on the regularized cross entropy, with a constant learning rate, decay factor, and momentum, and the number of epochs equal to 30. Since training on the whole dataset takes substantial time, for demonstration purposes we randomly select 2000 examples as a training set. We use cross entropy as the performance measure on the validation set.
Figure 24 shows the results from applying the three algorithms to the CNN problem. The optimization settings are identical to those for the random forest problem. We see that the performance of the algorithm imposing monotonicity statistically dominates the other two. All trials of the standard algorithm fail to achieve an objective function value below 1.7. Decomposing the objective function improves performance, and additionally encoding monotonicity information gives the highest rate of objective values below 1.7. We also observe that the algorithm exploiting decomposition and monotonicity in modeling makes improvements earlier than the other two algorithms.
5 Discussion and Future Work
This work is inspired by manual strategies for hyperparameter tuning of ML algorithms. Such a strategy adjusts the hyperparameters toward more model complexity when the training error is greater than the target error, or reduces complexity when the training error is reasonably small but the validation error is not satisfactory. One can do this because of prior knowledge that training error and generalizability are both decreasing with respect to model complexity. The strategy implicitly decomposes the validation error into training error plus (validation error minus training error); generalizability is the inverse of this difference. Our algorithm modifies the modeling part of Bayesian optimization to exploit this prior knowledge and the decomposition. The hope is that the decomposition of validation error simplifies the tuning problem by creating two functions that are easier to model, i.e., less nonlinear. Moreover, approximate monotonicity provides further useful information for modeling and optimization. The experiments indicate that for ML hyperparameter tuning, monotonicity information provides additional improvement.
Two approaches exist for monotonic GP regression. Riihimäki & Vehtari (2010) and Wang & Berger (2016) enforce monotonicity by inserting virtual observations of derivative processes, and Lin & Dunson (2014) project GPs onto a constrained space. An important question is where to insert the virtual points. The trivial method that we employed, using a grid, results in a number of virtual points that grows exponentially with the dimension of the objective function's domain. Training a GP requires $O(n^3)$ computation with a large constant, where $n$ is the number of observations (including virtual points). With limited computational power, methods that place virtual points on request so as to fill the space more sparsely are preferable, for example, Latin hypercube sampling (McKay et al., 1979). A domain-specific adaptive method was proposed by Wang & Berger (2016). It sequentially determines one virtual point at a time, at the location where the derivative process has the lowest (or highest) value. However, this method requires repeating the training and optimization of the learned process a number of times that is linear in the requested number of virtual points. Thus, it is more computationally expensive than non-adaptive methods.
Lin & Dunson (2014)'s method first learns the posterior process without monotonicity information. It then samples functions from the posterior process and projects them onto the monotonicity-constrained space, where the projection is defined to minimize an integral of the squared difference. It is not clear how samples from the posterior process are represented, but we believe a (possibly approximate) closed-form projection can be derived with a compact sample representation. If a closed-form solution is available, the monotonic GP regression required by our algorithm should share the same order of computation as standard GP regression.
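On a discrete grid with uniform weights, the L2 projection of a sample path onto the non-decreasing functions is exactly isotonic regression, which gives a quick sketch of the projection step. The wiggly sample path here is a hypothetical stand-in for a posterior draw, not output from an actual GP fit.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

grid = np.linspace(0.0, 1.0, 50)
sample_path = grid + 0.3 * np.sin(12 * grid)  # wiggly, not monotone

# L2 projection onto non-decreasing functions on the grid: pool-adjacent-
# violators, as implemented by isotonic regression.
projected = IsotonicRegression().fit_transform(grid, sample_path)
```

The projection flattens the decreasing stretches into constant segments while leaving the monotone stretches essentially unchanged, and (with uniform weights) preserves the overall mean of the path.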
For ML hyperparameter tuning problems, since the monotonicity of validation error minus training error is not guaranteed even in expectation, one may wonder whether imposing a monotonicity constraint is appropriate. Here we argue that, for an objective function that decomposes into two functions, one of which is known to be monotonic, there is an extra reason to model the other function as monotonic besides the behavior of that function itself. Denote the decomposed functions by $g$ and $h$, with $g$ known to be monotonic. Consider the simple case where the domain of the objective function is one-dimensional, and assume without loss of generality that $g$ is monotonically increasing. Then on any interval where $h$ is also increasing, the objective $g + h$ is increasing, so no such interval can contain an interior maximizer. Hence a region where $h$ is not monotonically decreasing is not of interest. Since we do not know in advance where $h$ is increasing, enforcing $h$ to be monotonically decreasing may hurt prediction accuracy in the regions where $h$ is in fact increasing, but those regions do not matter for the optimization. This argument generalizes easily to higher dimensions.
Although this work is motivated by and focused on machine learning hyperparameter problems, our algorithm is designed for any optimization problem with an objective function that can be decomposed as a sum of functions with monotonicity constraints. An example is optimizing a product's price for maximum gain. A simple but widely used model for this problem is that gain equals $(p - c)\,d(p)$, with $d$ for demand, $p$ for price, and $c$ for cost. Cost is assumed to be constant, and demand is a monotonically decreasing function of price. Hence optimizing the gain is equivalent to optimizing the sum of $p\,d(p)$ and $-c\,d(p)$, the second of which is monotonic with respect to $p$. Evaluating the demand can be expensive, since given a price it may be modeled using complicated game-theoretic models. A more complex example was given by Goettler & Gordon (2011), who proved three monotonicity properties of the decomposed expected gain given the current product price and the amount of investment for CPU companies.
In conclusion, we propose a Bayesian optimization algorithm for objective functions that can be decomposed as a sum of functions with monotonicity constraints. We demonstrate that many machine learning hyperparameter tuning problems fall into this family. We provide experimental results for an artificially designed problem, regularized linear regression, random forest classification, and a convolutional neural network for image classification. They show our algorithm outperforms the standard Bayesian optimization method based on direct modeling of the validation error.
- Brochu et al. (2010) Brochu, Eric, Cora, Vlad M, and de Freitas, Nando. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
- Fan et al. (2005) Fan, Wei, Greengrass, Ed, McCloskey, Joe, Yu, Philip S, and Drammey, K. Effective estimation of posterior probabilities: Explaining the accuracy of randomized decision tree approaches. In Data Mining, Fifth IEEE International Conference on, 8 pp. IEEE, 2005.
- Goettler & Gordon (2011) Goettler, Ronald L and Gordon, Brett R. Does AMD spur Intel to innovate more? Journal of Political Economy, 119(6):1141–1200, 2011.
- Golchi et al. (2015) Golchi, Shirin, Bingham, Derek R, Chipman, Hugh, and Campbell, David A. Monotone emulation of computer experiments. SIAM/ASA Journal on Uncertainty Quantification, 3(1):370–392, 2015.
- Goodfellow et al. (2016) Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. MIT press, 2016.
- Hutter et al. (2011) Hutter, Frank, Hoos, Holger H, and Leyton-Brown, Kevin. Sequential model-based optimization for general algorithm configuration. LION, 5:507–523, 2011.
- Jauch & Peña (2016) Jauch, Michael and Peña, Víctor. Bayesian optimization with shape constraints. arXiv preprint arXiv:1612.08915, 2016.
- Jones et al. (1998) Jones, Donald R, Schonlau, Matthias, and Welch, William J. Efficient global optimization of expensive black-box functions. Journal of Global optimization, 13(4):455–492, 1998.
- Krizhevsky & Hinton (2009) Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. 2009.
- Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- Lichman (2013) Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
- Lin & Dunson (2014) Lin, Lizhen and Dunson, David B. Bayesian monotone regression using gaussian process projection. Biometrika, 101(2):303–317, 2014.
- Martinez-Cantin et al. (2007) Martinez-Cantin, Ruben, de Freitas, Nando, Doucet, Arnaud, and Castellanos, José A. Active policy learning for robot planning and exploration under uncertainty. In Robotics: Science and Systems, volume 3, pp. 334–341, 2007.
- MATLAB Statistics and Machine Learning Toolbox. MATLAB, R2017b.
- McKay et al. (1979) McKay, M. D., Beckman, R. J., and Conover, W. J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21(2):239–245, 1979.
- Rasmussen & Williams (2006) Rasmussen, Carl Edward and Williams, Christopher KI. Gaussian processes for machine learning. MIT press, 2006.
- Riihimäki & Vehtari (2010) Riihimäki, Jaakko and Vehtari, Aki. Gaussian processes with monotonicity information. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 645–652, 2010.
- Snoek et al. (2012) Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959, 2012.
- Wang & Berger (2016) Wang, Xiaojing and Berger, James O. Estimating shape constrained functions using gaussian processes. SIAM/ASA Journal on Uncertainty Quantification, 4(1):1–25, 2016.
- Zou & Hastie (2005) Zou, Hui and Hastie, Trevor. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.