Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization

05/13/2019 ∙ by Aaron Klein, et al. ∙ University of Freiburg

Due to the high computational demands, executing a rigorous comparison between hyperparameter optimization (HPO) methods is often cumbersome. The goal of this paper is to facilitate a better empirical evaluation of HPO methods by providing benchmarks that are cheap to evaluate but still represent realistic use cases. We believe these benchmarks provide an easy and efficient way to conduct reproducible experiments for neural hyperparameter search. Our benchmarks consist of a large grid of configurations of a feed forward neural network on four different regression datasets, including architectural hyperparameters and hyperparameters concerning the training pipeline. Based on this data, we first performed an in-depth analysis to gain a better understanding of the properties of the optimization problem, as well as of the importance of different types of hyperparameters. Second, we exhaustively compared various state-of-the-art methods from the hyperparameter optimization literature on these benchmarks in terms of performance and robustness.


1 Introduction

Despite the tremendous success achieved by deep neural networks in the last few years (Krizhevsky et al., 2012; Sutskever et al., 2014), using them in practice remains challenging due to their sensitivity to many hyperparameters and architectural choices. Even experts often only find the right setting to train the network successfully by trial-and-error. There has been a recent line of work in hyperparameter optimization (HPO) (Snoek et al., 2012; Hutter et al., 2011; Bergstra et al., 2011; Li et al., 2017; Klein et al., 2017a; Falkner et al., 2018; see Feurer and Hutter (2018) for a review) and neural architecture search (NAS) (Baker et al., 2017; Zoph and Le, 2017; Real et al., 2017; Elsken et al., 2019; Liu et al., 2019) that tries to automate this process by casting it as an optimization problem. However, since each function evaluation consists of training and evaluating a deep neural network, running these methods can take several hours or even days.

We believe that this hinders progress in the field, since thorough evaluation is key to developing new methods and, due to their internal randomness, requires many independent runs of every method to obtain robust statistical results. Recent work (Eggensperger et al., 2015) proposed to use surrogate benchmarks, which replace the original benchmark by a regression model trained on offline-generated data. During optimization, instead of training and validating the actual hyperparameter configuration, the regression model is queried and its prediction is returned to the optimizer. Orthogonally to this work, we performed an exhaustive search for a large neural architecture search problem and compiled all architecture and performance pairs into a neural architecture search benchmark (Ying et al., 2019).

For the current work, we collected a large grid of hyperparameter configurations of feed forward neural networks for regression (see Section 2). Based on the gathered data, we give an in-depth analysis of the properties of the optimization problem (see Section 3), as well as of the importance of hyperparameters and architectural choices (see Section 4). Finally, we benchmark a variety of well-known HPO methods from the literature, such as Bayesian optimization, evolutionary algorithms, reinforcement learning, a bandit-based method and random search (Section 5), leading to new insights into how the different methods compare. The dataset, as well as the code to carry out these experiments, is publicly available at https://github.com/automl/nas_benchmarks.

2 Setup

We use 4 popular UCI (Lichman, 2013) datasets for regression: protein structure (Rana, 2013), slice localization (Graf et al., 2011), naval propulsion (Coraddu et al., 2014) and parkinsons telemonitoring (Tsanas et al., 2010). We call them HPO-Bench-Protein, HPO-Bench-Slice, HPO-Bench-Naval and HPO-Bench-Parkinson, respectively. For each dataset we used 60% of the data for training, 20% for validation and 20% for testing (see Table 1 for an overview) and removed features that were constant over the entire dataset. Afterwards, all feature and target values were normalized by subtracting the mean and dividing by the variance of the training data. These datasets do not require deep neural network architectures, which means we can train on CPUs rather than GPUs, and hence we can afford to run many configurations.

Dataset training datapoints validation datapoints test datapoints features
HPO-Bench-Protein 27 438 9 146 9 146 9
HPO-Bench-Slice 32 100 10 700 10 700 385
HPO-Bench-Naval 7 160 2 388 2 388 15
HPO-Bench-Parkinson 3 525 1 175 1 175 20
Table 1: Dataset splits
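As a concrete illustration of this preprocessing, the following sketch (assuming NumPy arrays for the three splits; the function name and layout are ours, not from the released code) drops constant features and normalizes by the training statistics:

```python
import numpy as np

def preprocess(train, valid, test):
    """Drop features that are constant over the entire dataset, then
    normalize by subtracting the mean and dividing by the variance of
    the training split, as described in Section 2."""
    full = np.concatenate([train, valid, test], axis=0)
    keep = full.std(axis=0) > 0                      # constant-feature filter
    train, valid, test = train[:, keep], valid[:, keep], test[:, keep]
    mean, var = train.mean(axis=0), train.var(axis=0)
    return [(split - mean) / var for split in (train, valid, test)]
```

The same statistics are reused for the validation and test splits so that no information leaks from them into the preprocessing.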

As the base architecture, we used a two-layer feed forward neural network followed by a linear output layer. The configuration space (shown in Table 2) only includes a modest number of 4 architectural choices (number of units and activation function for each of the two layers) and 5 hyperparameters (dropout rate per layer, batch size, initial learning rate and learning rate schedule), in order to allow for an exhaustive evaluation of all 62 208 configurations that result from discretizing the hyperparameters as in Table 2. We encode numerical hyperparameters as ordinals and all other hyperparameters as categoricals. Each network was trained with Adam (Kingma and Ba, 2015) for 100 epochs, optimizing the mean squared error. We repeated the training of each configuration 4 independent times with different seeds for the random number generator and recorded, for each run, the training / validation / test error, the training time and the number of trainable parameters. We provide full learning curves (i.e. validation and training error for each epoch) as an additional fidelity, so that multi-fidelity algorithms can be benchmarked with the number of epochs as the budget.

Hyperparameters Choices
Initial LR
Batch Size
LR Schedule
Activation/Layer 1
Activation/Layer 2
Layer 1 Size
Layer 2 Size
Dropout/Layer 1
Dropout/Layer 2
Table 2: Configuration space of the fully connected neural network
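To illustrate how such a table of precomputed runs can stand in for actual training, here is a minimal sketch; the in-memory layout (a dict from configuration tuples to four recorded runs, each with a 100-epoch learning curve and a runtime) and all names are illustrative assumptions, not the released file format:

```python
import random

# Hypothetical in-memory view of the tabular benchmark: each discrete
# configuration maps to the 4 recorded repetitions.
table = {
    ("relu", "relu", 512, 512, 0.0, 0.3, 8, 0.0005, "cosine"): [
        {"valid_mse": [0.9 - 0.006 * e for e in range(100)], "runtime": 42.0}
        for _ in range(4)
    ],
}

def query(config, epochs=100, rng=random):
    """Return the validation error at the requested epoch budget for one
    randomly drawn repetition (mimicking the noise of a real training run),
    together with a pro-rated training time."""
    run = rng.choice(table[config])
    return run["valid_mse"][epochs - 1], run["runtime"] * epochs / 100
```

Because every lookup is a dictionary access instead of a GPU training run, an optimizer can be evaluated on the full benchmark in seconds.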

3 Dataset Statistics

We now analyze the properties of these datasets. First, for each dataset we computed the empirical cumulative distribution function (ECDF) of the test, validation and training error after 100 epochs and of the total training time. For each metric, we averaged over the 4 repetitions. Additionally, we computed the ECDF of the number of trainable parameters of each neural network architecture. To avoid clutter, we show here only the results for HPO-Bench-Protein, which we found to be consistent with the other datasets, and present all results in Appendix A of the supplemental material.
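The ECDF used throughout this section can be computed in a few lines of NumPy (a generic sketch, not the authors' plotting code):

```python
import numpy as np

def ecdf(values):
    """Empirical cumulative distribution function: for each sorted value,
    the fraction of observations less than or equal to it."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, x.size + 1) / x.size
    return x, y
```

Plotting `y` against `x` (e.g. as a step function) yields the curves shown in Figure 1.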

One can see in Figure 1 that the mean squared error (MSE) for training, validation and test is spread over an order of magnitude or more. On the one hand, only a small subset of configurations achieves a final MSE lower than 0.3; on the other hand, many outliers exist that achieve errors orders of magnitude above the average. Furthermore, due to the varying number of parameters, the training time also differs dramatically across configurations.

Figure 1: The empirical cumulative distribution (ECDF) of the average train/valid/test error after 100 epochs of training (upper left), the number of parameters (upper right), the training runtime (lower left) and the noise for different numbers of epochs (lower right), computed on HPO-Bench-Protein. See Appendix A for the ECDF plots of all datasets.

Figure 1 (bottom right) shows the empirical cumulative distribution of the noise, defined as the standard deviation between the 4 repetitions, for different numbers of epochs. We can see that the noise is heteroscedastic; that is, different configurations come with different noise levels. As expected, the noise decreases with an increasing number of epochs.

For many multi-fidelity hyperparameter optimization methods, such as Hyperband (Li et al., 2017) or BOHB (Falkner et al., 2018), it is essential that the ranking of configurations on smaller budgets is preserved on higher budgets. In Figure 2, we visualize the Spearman rank correlation between the performance of all hyperparameter configurations at different numbers of epochs and at the highest budget of 100 epochs. Since every hyperparameter optimization method mainly needs to focus on the top performing configurations, we also show the correlation when only top fractions of all configurations are considered. As expected, the correlation to the highest budget increases with increasing budgets. If only top-performing configurations are considered, the correlation decreases, since their final performances are closer to each other.
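This analysis can be sketched as follows; the helper names are ours, and ties (which occur in real data) would require average ranks rather than the plain double-argsort used here:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks
    (assumes distinct values; ties would need average ranks)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def budget_correlation(errors_low, errors_high, top_fraction=1.0):
    """Rank correlation between a lower budget and the full budget,
    optionally restricted to the configurations that perform best at
    the full (100 epoch) budget."""
    errors_low, errors_high = map(np.asarray, (errors_low, errors_high))
    k = max(2, int(top_fraction * len(errors_high)))
    top = np.argsort(errors_high)[:k]       # best configs at full budget
    return spearman(errors_low[top], errors_high[top])
```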

Figure 2: The Spearman rank correlation between different numbers of epochs and the highest budget of 100 epochs for HPO-Bench-Protein, when we consider all configurations or only top fractions of all configurations based on their average test error. Results for other datasets are presented in Appendix A.

4 Hyperparameter Importance

We now analyze how the different hyperparameters affect the final performance, first globally with the help of the functional ANOVA (Sobol, 1993; Hutter et al., 2014) and then from a more local point of view. Finally, we show how the top performing hyperparameter configurations correlate across the different datasets. As in the previous section, we show here only the results for HPO-Bench-Protein and present the results for all other datasets in Appendix B.

4.1 Functional ANOVA

To analyze the importance of hyperparameters, i.e. to assess how the final error changes when a single hyperparameter is varied at a time, we used the fANOVA tool by Hutter et al. (2014). It quantifies the importance of a hyperparameter by marginalizing the error obtained when setting it to a specific value over all possible values of all other hyperparameters. The importance of a hyperparameter is then the fraction of the variation in error that is explained by it. In its default setting, the tool fits a random forest model on the observed function values in order to compute the marginal predictions. However, since we already evaluated the full configuration space, we do not need a model at all and can compute the required integrals directly.
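Because the grid is exhaustive, the main-effect (single-hyperparameter) importance reduces to simple marginal means and variances over the grid axes. A model-free sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def main_effect_importance(grid_errors):
    """grid_errors: array with one axis per hyperparameter, holding the
    (average) error of every configuration in the exhaustive grid.
    Returns, per hyperparameter, the fraction of total variance explained
    by its marginal (main) effect."""
    total_var = grid_errors.var()
    importances = []
    for axis in range(grid_errors.ndim):
        other = tuple(a for a in range(grid_errors.ndim) if a != axis)
        marginal = grid_errors.mean(axis=other)   # average over all others
        importances.append(marginal.var() / total_var)
    return importances
```

Pairwise importances work analogously by marginalizing over all but two axes and subtracting the two main effects.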

As can be seen in Figure 3 (upper right), on average across the entire configuration space, the initial learning rate obtained the highest importance value. However, the importance of individual hyperparameters is very small due to a few outliers with very high errors, which only occur for a few combinations of hyperparameter values. We also computed the importance values of pairs of hyperparameters (see Figure 3, lower right, for the ten most important pairs). These generally small values for single and pairwise hyperparameters indicate that the benchmarks exhibit higher-order interaction effects. Unfortunately, computing interaction effects beyond second order is computationally infeasible.

A better estimate of hyperparameter importance in a region of the configuration space with reasonable performance can be obtained by running the fANOVA only on the best performing configurations. Figure 3 (left and middle) shows the results of this procedure when restricting the analysis to the top-performing percentiles of all configurations. This shows that in this more interesting part of the configuration space, other hyperparameters also become important.

Figure 3: Top row: importance of the different hyperparameters based on the fANOVA for (left, middle) only the top-performing fractions of configurations and (right) all configurations. Bottom row: most important hyperparameter pairs for the same subsets of configurations.

4.2 Local Neighbourhood

While the fANOVA takes the whole configuration space into account, we now focus on a more local view around the best configuration (the incumbent) to see how robust it is against small perturbations. Table 3 shows the change in performance if we flip a single hyperparameter of the incumbent while keeping all other hyperparameters fixed. Additionally, the rightmost column shows the relative change between the error of the incumbent and the newly observed error.

Interestingly, the largest drops in performance occur when changing the activation function of the first or the second layer from relu to tanh. This is despite the fact that tanh is a much more common activation function for regression than relu. In contrast, increasing the batch size has only a marginal effect on the performance.
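A sketch of this local-neighbourhood analysis (the function names and the dictionary encoding of configurations are illustrative assumptions):

```python
def one_flip_neighbours(incumbent, choices):
    """Enumerate all configurations that differ from the incumbent in
    exactly one hyperparameter. `choices` maps each hyperparameter name
    to its discrete value set."""
    for name, values in choices.items():
        for v in values:
            if v != incumbent[name]:
                yield name, {**incumbent, name: v}

def relative_change(err_incumbent, err_neighbour):
    """Relative performance change with respect to the incumbent's error."""
    return (err_neighbour - err_incumbent) / err_incumbent
```

Evaluating every neighbour against the tabular data and sorting by the relative change yields tables like Table 3.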

Hyperparameter Change Test Error Relative Change
Batch Size 0.2163 0.0042
Initial LR 0.2169 0.0072
Layer 2 Size 0.2203 0.0231
Layer 1 Size 0.2216 0.0288
Dropout/Layer 2 0.2257 0.0478
LR Schedule 0.2269 0.0534
Dropout/Layer 2 0.2280 0.0587
Dropout/Layer 1 0.2307 0.0711
Activation/Layer 2 0.2875 0.3351
Activation/Layer 1 0.3012 0.3987
Table 3: Performance change if single hyperparameters of the incumbent (average test error 0.2153) are flipped.

4.3 Ranking across Datasets

Figure 4: Correlation of the ranks across all four datasets for (left, middle) top fractions of hyperparameter configurations and (right) all configurations.

We now analyze hyperparameter configurations across the four different datasets. In Table 4 we can see that the best configuration in terms of average test error changes only slightly across datasets. For some hyperparameters, such as the learning rate, a single value (in this case 0.0005) can be used for all datasets, whereas other hyperparameters, for example the activation functions, need to be set differently.

To see how the performance of all hyperparameter configurations correlates across datasets, we computed for every configuration on every dataset its rank in terms of final average test performance. Figure 4 shows the Spearman rank correlation between the different datasets if we consider only small top fractions of configurations (left and middle) or all configurations (right). The correlation decreases if we only consider the best-performing configurations, which implies that it does not suffice to reuse a good configuration from a different dataset to achieve top performance on a new dataset. Nevertheless, the correlation over all configurations is high, which indicates that multi-task methods could be able to exploit previously collected data.

Hyperparameters HPO-Bench-Protein HPO-Bench-Slice HPO-Bench-Naval HPO-Bench-Parkinson
Initial LR 0.0005 0.0005 0.0005 0.0005
Batch Size 8 32 8 8
LR Schedule cosine cosine cosine cosine
Activation/Layer 1 relu relu tanh tanh
Activation/Layer 2 relu tanh relu relu
Layer 1 Size 512 512 128 128
Layer 2 Size 512 512 512 512
Dropout/Layer 1 0.0 0.0 0.0 0.0
Dropout/Layer 2 0.3 0.0 0.0 0.0
Table 4: Best configurations in terms of average test error for each dataset

5 Comparison

In this section we use the generated benchmarks to evaluate different HPO methods. To mimic the randomness that comes with evaluating a configuration, each function evaluation randomly samples one of the four stored performance values. To obtain a realistic estimate of the wall-clock time required by each optimizer, we accumulate the stored runtime of every configuration the optimizer evaluates. We do not take the optimizer's own overhead into account, since it is negligible compared to the training time of the neural network. After each function evaluation we determine the incumbent as the configuration with the lowest observed validation error and compute the regret between the incumbent and the globally best configuration in terms of test error. We performed 500 independent runs of each method and report the median as well as the 25th and 75th quantiles.
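This evaluation protocol can be sketched as follows; the benchmark layout and all names are illustrative assumptions rather than the released API:

```python
import random

def simulate_run(benchmark, sample_config, n_evals, best_test_error, seed=0):
    """Replay an optimizer on the tabular benchmark: each evaluation draws
    one of the stored repetitions at random, wall-clock time is the sum of
    stored runtimes, and the incumbent is the configuration with the lowest
    observed validation error so far. `benchmark[config]` is assumed to be
    a list of dicts with 'valid', 'test' and 'runtime' entries."""
    rng = random.Random(seed)
    trajectory, elapsed = [], 0.0
    best_valid, incumbent_test = float("inf"), None
    for _ in range(n_evals):
        config = sample_config(rng)          # the optimizer's next proposal
        run = rng.choice(benchmark[config])  # one of the stored repetitions
        elapsed += run["runtime"]
        if run["valid"] < best_valid:
            best_valid, incumbent_test = run["valid"], run["test"]
        trajectory.append((elapsed, incumbent_test - best_test_error))
    return trajectory  # (wall-clock time, test regret of the incumbent)
```

Plugging in `sample_config` callbacks for the different optimizers yields the regret-over-time curves compared below.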

5.1 Performance over Time

We compared the following HPO methods from the literature (see Figure 5): random search (RS) (Bergstra and Bengio, 2012), SMAC (Hutter et al., 2011) (we used SMAC3 from https://github.com/automl/SMAC3), the Tree Parzen Estimator (TPE) (Bergstra et al., 2011) (we used Hyperopt from https://github.com/hyperopt/hyperopt), Bohamiann (Springenberg et al., 2016) (we used the implementation from Klein et al. (2017b)), Regularized Evolution (RE) (Real et al., 2019), Hyperband (HB) (Li et al., 2017) and BOHB (Falkner et al., 2018) (for both HB and BOHB we used the implementation from https://github.com/automl/HpBandSter). Inspired by the recent success of reinforcement learning for neural architecture search (Zoph and Le, 2017), we also include a similar reinforcement learning strategy (RL), which, however, does not use an LSTM controller but instead uses REINFORCE (Williams, 1992) to optimize the probability of each categorical variable directly (Ying et al., 2019). Each method that operates on the full budget of 100 epochs was allowed to perform 500 function evaluations. For BOHB and HB we set the minimum budget to 3 epochs, the maximum budget to 100 epochs, η to 3 and the number of successive halving iterations to 125 (which leads to roughly the same amount of function evaluation time as the other methods). More details about the meta-parameters of the different optimizers are described in Appendix C.

Figure 5 (left) shows the performance over time for all methods. Results for the other datasets can be found in Appendix C. We make the following observations:

  • As expected, Bayesian optimization methods, i.e. SMAC, TPE and Bohamiann, performed as well as RS in the beginning but became superior once they obtained a meaningful model. Interestingly, while all Bayesian optimization methods start improving at roughly the same time, they converge to different optima, which we attribute to their different internal models.

  • The same holds for BOHB, which in the beginning is as good as HB but starts outperforming it as soon as it obtains a meaningful model. Note that, compared to TPE, BOHB uses a multivariate instead of a univariate KDE, which allows it to model interactions between hyperparameters. We attribute TPE’s superior performance over BOHB on these benchmarks to its very aggressive optimization of the acquisition function. BOHB’s performance could probably be improved by optimizing its own meta-parameters, since its default values were determined on continuous benchmarks (Falkner et al., 2018) (where it outperformed TPE).

  • HB achieved reasonable performance relatively quickly but eventually only slightly improved over simple RS.

  • RE needed more time than the Bayesian optimization methods to outperform RS; however, it achieved the best final performance since, unlike Bayesian optimization methods, it does not suffer from any model mismatch.

  • RL requires even more time to improve upon RS than RE or Bayesian optimization and seems to be too sample-inefficient for these tasks.

Figure 5: Left: comparison of various HPO methods on the HPO-Bench-Protein dataset. For each method, we plot the median and the 25th and 75th quantiles (shaded area) of the test regret of the incumbent (determined based on the validation performance) across 500 independent runs. Right: empirical cumulative distribution of the final performance over all runs of each method at the end of the time budget.

5.2 Robustness

Besides achieving good performance, we argue that robustness plays an important role in practice for HPO methods. Figure 5 (right) shows the empirical cumulative distribution of the test regret of the final incumbent at the end of the time budget for HPO-Bench-Protein, across all 500 runs of each method.

While RE achieves a lower mean test regret than TPE, it seems to be less robust with respect to its internal randomness. Interestingly, while all methods have a non-zero probability of reaching a low final test regret within the time budget, only Bohamiann, RE and TPE achieve this regret in a large fraction of the runs. Also, none of the methods converges consistently to the same final regret.

6 Conclusions

We presented new tabular benchmarks for neural architecture and hyperparameter search that are cheap to evaluate but still recover the original optimization problem, enabling us to rigorously compare various methods from the literature. Based on the data we generated for these benchmarks, we had a closer look at the difficulty of the optimization problem and the importance of different hyperparameters.

In future work, we will generate more of these benchmarks for other architectures and datasets. Ultimately, we hope that such benchmarks will help the community to easily reproduce experiments and to evaluate newly developed methods without spending enormous compute resources.

References

  • Baker et al. (2017) Baker, B., Gupta, O., Naik, N., and Raskar, R. (2017). Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations (ICLR’17).
  • Bergstra et al. (2011) Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Proceedings of the 24th International Conference on Advances in Neural Information Processing Systems (NIPS’11).
  • Bergstra and Bengio (2012) Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research.
  • Coraddu et al. (2014) Coraddu, A., Oneto, L., Ghio, A., Savio, S., Anguita, D., and Figari, M. (2014). Machine learning approaches for improving condition based maintenance of naval propulsion plants. Journal of Engineering for the Maritime Environment.
  • Eggensperger et al. (2015) Eggensperger, K., Hutter, F., Hoos, H., and Leyton-Brown, K. (2015). Efficient benchmarking of hyperparameter optimizers via surrogates. In Proceedings of the 29th National Conference on Artificial Intelligence (AAAI’15).
  • Elsken et al. (2019) Elsken, T., Metzen, J. H., and Hutter, F. (2019). Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations (ICLR’19).
  • Falkner et al. (2018) Falkner, S., Klein, A., and Hutter, F. (2018). BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018).
  • Feurer and Hutter (2018) Feurer, M. and Hutter, F. (2018). Hyperparameter optimization. In Automatic Machine Learning: Methods, Systems, Challenges. Springer.
  • Graf et al. (2011) Graf, F., Kriegel, H. P., Schubert, M., Pölsterl, S., and Cavallaro, A. (2011). 2d image registration in ct images using radial image descriptors. In Medical Image Computing and Computer-Assisted Intervention (MICCAI’11).
  • Hutter et al. (2011) Hutter, F., Hoos, H., and Leyton-Brown, K. (2011). Sequential model-based optimization for general algorithm configuration. In Proceedings of the Fifth International Conference on Learning and Intelligent Optimization (LION’11).
  • Hutter et al. (2014) Hutter, F., Hoos, H., and Leyton-Brown, K. (2014). An efficient approach for assessing hyperparameter importance. In Proceedings of the 31th International Conference on Machine Learning (ICML’14).
  • Kingma and Ba (2015) Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR’15).
  • Klein et al. (2017a) Klein, A., Falkner, S., Bartels, S., Hennig, P., and Hutter, F. (2017a). Fast Bayesian optimization of machine learning hyperparameters on large datasets. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS’17).
  • Klein et al. (2017b) Klein, A., Falkner, S., Mansur, N., and Hutter, F. (2017b). Robo: A flexible and robust bayesian optimization framework in python. In NIPS Workshop on Bayesian Optimization (BayesOpt’17).
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems (NIPS’12).
  • Li et al. (2017) Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. (2017). Hyperband: Bandit-based configuration evaluation for hyperparameter optimization. In International Conference on Learning Representations (ICLR’17).
  • Lichman (2013) Lichman, M. (2013). UCI machine learning repository.
  • Liu et al. (2019) Liu, H., Simonyan, K., and Yang, Y. (2019). DARTS: Differentiable architecture search. In International Conference on Learning Representations (ICLR’19).
  • Rana (2013) Rana, P. S. (2013). Physicochemical properties of protein tertiary structure data set.
  • Real et al. (2019) Real, E., Aggarwal, A., Huang, Y., and Le, Q. V. (2019). Regularized evolution for image classifier architecture search. In Proceedings of the Conference on Artificial Intelligence (AAAI’19).
  • Real et al. (2017) Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y. L., Tan, J., Le, Q. V., and Kurakin, A. (2017). Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning (ICML’17).
  • Snoek et al. (2012) Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems (NIPS’12).
  • Sobol (1993) Sobol, I. M. (1993). Sensitivity estimates for nonlinear mathematical models. Mathematical Modeling and Computational Experiment.
  • Springenberg et al. (2016) Springenberg, J. T., Klein, A., Falkner, S., and Hutter, F. (2016). Bayesian optimization with robust bayesian neural networks. In Proceedings of the 29th International Conference on Advances in Neural Information Processing Systems (NIPS’16).
  • Sutskever et al. (2014) Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Advances in Neural Information Processing Systems (NIPS’14).
  • Tsanas et al. (2010) Tsanas, A., Little, M. A., McSharry, P. E., and Ramig, L. O. (2010). Accurate telemonitoring of parkinson’s disease progression by noninvasive speech tests. IEEE Transactions on Biomedical Engineering.
  • Williams (1992) Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning.
  • Ying et al. (2019) Ying, C., Klein, A., Real, E., Christiansen, E., Murphy, K., and Hutter, F. (2019). NAS-Bench-101: Towards reproducible neural architecture search. arXiv:1902.09635 [cs.LG].
  • Zoph and Le (2017) Zoph, B. and Le, Q. V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR’17).

Appendix A Dataset Statistics

We now show the empirical cumulative distribution (ECDF) for all four datasets of: the mean squared error for training, validation and test (Figure 6), the number of parameters (Figure 7), the measured wall-clock time for training (Figure 8) and the noise, defined as the standard deviation between the individual trials of each configuration (Figure 9). Note that we computed the ECDF of the mean squared error and the runtime based on the average over the four trials.

Figure 10 shows the Spearman rank correlation between the performance of a hyperparameter configuration after training for the final budget of 100 epochs and its performance after training for the number of epochs shown on the x-axis. We also show the correlation when only top-performing configurations are taken into account.

Figure 6: The empirical cumulative distribution (ECDF) of the average train/valid/test error after 100 epochs of training, computed on HPO-Bench-Protein (left), HPO-Bench-Slice (left middle), HPO-Bench-Naval (right middle) and HPO-Bench-Parkinson (right).
Figure 7: The empirical cumulative distribution (ECDF) of the number of parameters, computed on HPO-Bench-Protein (left), HPO-Bench-Slice (left middle), HPO-Bench-Naval (right middle) and HPO-Bench-Parkinson (right).
Figure 8: The empirical cumulative distribution (ECDF) of the training runtime, computed on HPO-Bench-Protein (left), HPO-Bench-Slice (left middle), HPO-Bench-Naval (right middle) and HPO-Bench-Parkinson (right).
Figure 9: The empirical cumulative distribution (ECDF) of the noise across the 4 repeated training processes for each configuration, computed on HPO-Bench-Protein (left), HPO-Bench-Slice (left middle), HPO-Bench-Naval (right middle) and HPO-Bench-Parkinson (right).
Figure 10: The Spearman rank correlation between different numbers of epochs for HPO-Bench-Protein (left), HPO-Bench-Slice (left middle), HPO-Bench-Naval (right middle) and HPO-Bench-Parkinson (right), when we consider all configurations or only top fractions of all configurations based on their test error.

Appendix B Hyperparameter Importance

Figures 11, 12, 13 and 14 show the importance values based on the fANOVA tool for top fractions of configurations and for all configurations, as well as the most important pairwise plots, for HPO-Bench-Naval, HPO-Bench-Parkinson, HPO-Bench-Protein and HPO-Bench-Slice, respectively. Tables 5, 6, 7 and 8 show the local neighbourhood results for the same datasets.

Figure 11: HPO-Bench-Naval. Top row: importance of the different hyperparameters based on the fANOVA for (left, middle) only the top-performing fractions of configurations and (right) all configurations. Bottom row: most important hyperparameter pairs for the same subsets.
Figure 12: HPO-Bench-Parkinson. Top row: importance of the different hyperparameters based on the fANOVA for (left, middle) only the top-performing fractions of configurations and (right) all configurations. Bottom row: most important hyperparameter pairs for the same subsets.
Figure 13: HPO-Bench-Protein. Top row: importance of the different hyperparameters based on the fANOVA for (left, middle) only the top-performing fractions of configurations and (right) all configurations. Bottom row: most important hyperparameter pairs for the same subsets.
Figure 14: HPO-Bench-Slice. Top row: importance of the different hyperparameters based on the fANOVA for (left, middle) only the top-performing fractions of configurations and (right) all configurations. Bottom row: most important hyperparameter pairs for the same subsets.
Hyperparameter Change Test Error Relative Change
Layer 1 Size 0.0000 0.1331
Initial LR 0.0000 0.1751
Layer 1 Size 0.0000 0.2196
Layer 2 Size 0.0000 0.4929
Batch Size 0.0000 0.5048
Activation/Layer 1 0.0000 0.6933
Activation/Layer 2 0.0002 4.9685
Dropout/Layer 2 0.0004 11.1872
Dropout/Layer 1 0.0010 34.3490
LR Schedule 0.0063 217.0092
Table 5: HPO-Bench-Naval: Performance change if single hyperparameters of the incumbent (average test error 0.000029) are flipped.
Hyperparameter Change Test Error Relative Change
Layer 1 Size 0.0051 0.2142
Layer 2 Size 0.0054 0.2740
Batch Size 0.0059 0.3962
Dropout/Layer 1 0.0081 0.9012
Batch Size 0.0085 1.0068
Activation/Layer 1 0.0106 1.5100
Initial LR 0.0111 1.6268
Activation/Layer 2 0.0178 3.1980
Initial LR 0.0189 3.4530
Dropout/Layer 2 0.0216 4.0912
LR Schedule 0.1407 32.1805
Table 6: HPO-Bench-Parkinson: Performance change if single hyperparameters of the incumbent (average test error 0.004239) are flipped.
Hyperparameter Changed | Test Error | Relative Change
Batch Size | 0.2163 | 0.0042
Initial LR | 0.2169 | 0.0072
Layer 2 Size | 0.2203 | 0.0231
Layer 1 Size | 0.2216 | 0.0288
Dropout/Layer 2 | 0.2257 | 0.0478
LR Schedule | 0.2269 | 0.0534
Dropout/Layer 2 | 0.2280 | 0.0587
Dropout/Layer 1 | 0.2307 | 0.0711
Activation/Layer 2 | 0.2875 | 0.3351
Activation/Layer 1 | 0.3012 | 0.3987
Table 7: HPO-Bench-Protein: performance change when single hyperparameters of the incumbent (average test error 0.2153) are flipped to a neighboring value.
Hyperparameter Changed | Test Error | Relative Change
Layer 1 Size | 0.0002 | 0.0831
Batch Size | 0.0002 | 0.2014
Dropout/Layer 1 | 0.0002 | 0.2514
Layer 2 Size | 0.0002 | 0.2535
Batch Size | 0.0002 | 0.3087
Activation/Layer 1 | 0.0002 | 0.3383
Activation/Layer 2 | 0.0002 | 0.6326
Initial LR | 0.0003 | 0.7668
Dropout/Layer 2 | 0.0006 | 3.3757
LR Schedule | 0.0007 | 4.0016
Table 8: HPO-Bench-Slice: performance change when single hyperparameters of the incumbent (average test error 0.000144) are flipped to a neighboring value.
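The relative-change column in Tables 5-8 is simply the flipped configuration's test error expressed relative to the incumbent's. A minimal sketch of the computation (the function name is our own; recomputing from the rounded errors shown in the tables only approximates the printed entries, which use the unrounded values):

```python
def relative_change(incumbent_error, flipped_error):
    """Relative increase in test error when one hyperparameter of the
    incumbent configuration is flipped to a neighboring value."""
    return (flipped_error - incumbent_error) / incumbent_error

# E.g. HPO-Bench-Protein: incumbent error 0.2153, batch size flipped -> 0.2163
print(round(relative_change(0.2153, 0.2163), 4))  # -> 0.0046 from the rounded errors
```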

Appendix C: Comparison on HPOBench

We now discuss in more detail how we set the meta-parameters of the individual optimizers for our comparison. Code to reproduce the experiments is available at https://github.com/automl/nas_benchmarks.

Random Search (RS): We sample hyperparameter configurations uniformly at random from the set of all possible configurations.
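Concretely, one random-search proposal over the discrete benchmark grid can be sketched as follows (the value sets mirror the configuration space described in the main text; the dictionary keys are our own shorthand):

```python
import random

# Illustrative discrete grid following the benchmark's configuration space.
CONFIG_SPACE = {
    "n_units_1": [16, 32, 64, 128, 256, 512],
    "n_units_2": [16, 32, 64, 128, 256, 512],
    "dropout_1": [0.0, 0.3, 0.6],
    "dropout_2": [0.0, 0.3, 0.6],
    "activation_1": ["tanh", "relu"],
    "activation_2": ["tanh", "relu"],
    "init_lr": [5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1],
    "lr_schedule": ["cosine", "const"],
    "batch_size": [8, 16, 32, 64],
}

def sample_random_configuration(space, rng=random):
    """Random search: draw each hyperparameter independently and
    uniformly at random from its finite set of values."""
    return {name: rng.choice(values) for name, values in space.items()}
```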

Hyperband: We set η = 3, which means that in each successive-halving step only a third of the configurations is promoted to the next step. The minimum budget is set to 4 epochs and the maximum budget to 100 epochs of training.
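Under these settings, a single successive-halving bracket proceeds as sketched below (the starting number of configurations, 27 here, is only illustrative; Hyperband derives the actual bracket sizes from its own formula):

```python
def successive_halving_schedule(n_configs=27, min_budget=4, max_budget=100, eta=3):
    """One successive-halving bracket: start many configurations on the
    minimum budget and repeatedly promote the best 1/eta of them to an
    eta-times larger budget until the maximum budget is reached.
    Returns a list of (number of configurations, budget in epochs)."""
    schedule, n, budget = [], n_configs, min_budget
    while budget < max_budget:
        schedule.append((n, budget))
        n = max(1, n // eta)
        budget = min(max_budget, budget * eta)
    schedule.append((n, budget))
    return schedule

print(successive_halving_schedule())  # -> [(27, 4), (9, 12), (3, 36), (1, 100)]
```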

BOHB: As for Hyperband, we set η = 3 and keep the same minimum and maximum budgets. The bandwidth of the KDE is bounded from below to prevent the probability mass from collapsing to a single value. The bandwidth factor is set to 3 and the number of samples used to optimize the acquisition function to 64; together with the fraction of purely random configurations, these are kept at BOHB's default values.

TPE: We used all predefined meta-parameter values of the Hyperopt package, since its Python interface does not allow changing them.

SMAC: We set the maximum number of function evaluations allowed per configuration to 4. The number of trees of the random forest was set to 10, and the fraction of random configurations was kept at its default; these are also the default values of the SMAC3 package.

Regularized Evolution (RE): To mutate architectures, we first sample a hyperparameter uniformly at random and then sample a new value from the set of all possible values except the current one. RE has two main meta-parameters, the population size and the tournament size, which we kept fixed across all experiments.
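The mutation operator described above can be sketched as follows (function and variable names are our own; `config_space` maps each hyperparameter name to its finite list of values):

```python
import random

def mutate(config, config_space, rng=random):
    """RE mutation: pick one hyperparameter uniformly at random and
    resample its value uniformly from all possible values except the
    current one, leaving every other hyperparameter unchanged."""
    child = dict(config)
    name = rng.choice(sorted(config_space))
    alternatives = [v for v in config_space[name] if v != config[name]]
    child[name] = rng.choice(alternatives)
    return child
```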

Reinforcement Learning (RL): Starting from a uniform distribution over the values of each hyperparameter, we used REINFORCE to optimize the probability values directly (see also Ying et al. (2019)). The learning rate for REINFORCE and the momentum of the exponential moving average that serves as baseline for the reward were chosen by grid search.
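A minimal sketch of one REINFORCE step on the logits of a single categorical hyperparameter (a simplification of the actual setup; function names and the learning rate are illustrative assumptions):

```python
import math
import random

def softmax(logits):
    """Convert logits into a categorical probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_value(logits, rng=random):
    """Sample the index of a hyperparameter value from the current distribution."""
    return rng.choices(range(len(logits)), weights=softmax(logits))[0]

def reinforce_update(logits, choice, reward, baseline, lr=0.1):
    """Push the log-probability of the sampled value up (or down) in
    proportion to the advantage, i.e. reward minus the moving-average
    baseline; the gradient of log p(choice) w.r.t. logit i is
    [i == choice] - p_i."""
    probs = softmax(logits)
    advantage = reward - baseline
    return [
        logit + lr * advantage * ((1.0 if i == choice else 0.0) - prob)
        for i, (logit, prob) in enumerate(zip(logits, probs))
    ]
```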

Bohamiann: We used a three-layer fully connected neural network with 50 units and tanh activation functions in each layer. The step length of the adaptive SGHMC sampler (Springenberg et al., 2016) and the batch size were kept fixed across all experiments. Starting from a chain length of 20,000 steps, the number of burn-in steps was increased linearly with the number of observed function values, by a factor of 10. To optimize the acquisition function, we used a simple local search that, starting from a random configuration, evaluates the one-step neighborhood and jumps to the neighbor with the highest acquisition value until it either reaches the maximum number of steps or converges to a local optimum.
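The local search used to optimize the acquisition function can be sketched as follows (names are our own; a real implementation would cache acquisition values rather than re-evaluate them):

```python
import random

def neighborhood(config, config_space):
    """Yield all configurations differing from `config` in exactly one value."""
    for name, values in config_space.items():
        for value in values:
            if value != config[name]:
                neighbor = dict(config)
                neighbor[name] = value
                yield neighbor

def local_search(acquisition, config_space, max_steps=100, rng=random):
    """Greedy one-step local search: start from a random configuration and
    jump to the best neighbor until no neighbor improves the acquisition
    value or the step limit is reached."""
    current = {name: rng.choice(values) for name, values in config_space.items()}
    for _ in range(max_steps):
        best = max(neighborhood(current, config_space), key=acquisition)
        if acquisition(best) <= acquisition(current):
            break  # converged to a local optimum
        current = best
    return current
```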

Figure 15 shows the comparison and the robustness of all considered hyperparameter optimization methods on the four tabular benchmarks. We performed 500 independent runs for each method and report the mean and the standard error of the mean across all runs. For a detailed analysis of the results, see the main text.

Figure 15: Left column: comparison of various HPO methods on all datasets. For each method, we plot the median and the 25th and 75th percentiles of the test regret of the incumbent (determined based on the validation performance) across 500 independent runs. Right column: the empirical cumulative distribution of the final regret over all runs after a fixed wall-clock budget for each of HPO-Bench-Protein, HPO-Bench-Slice, HPO-Bench-Naval and HPO-Bench-Parkinson.