Sherpa: Robust Hyperparameter Optimization for Machine Learning

05/08/2020 ∙ by Lars Hertel, et al. ∙ University of California, Irvine ∙ University of Hawaii

Sherpa is a hyperparameter optimization library for machine learning models. It is specifically designed for problems with computationally expensive, iterative function evaluations, such as the hyperparameter tuning of deep neural networks. With Sherpa, scientists can quickly optimize hyperparameters using a variety of powerful and interchangeable algorithms. Sherpa can be run on either a single machine or in parallel on a cluster. Finally, an interactive dashboard enables users to view the progress of models as they are trained, cancel trials, and explore which hyperparameter combinations are working best. Sherpa empowers machine learning practitioners by automating the more tedious aspects of model tuning. Its source code and documentation are available at https://github.com/sherpa-ai/sherpa.


1 Motivation and significance

Hyperparameters are the tuning parameters of machine learning models. Hyperparameter optimization refers to the process of choosing optimal hyperparameters for a machine learning model, and it is crucial for obtaining optimal performance from the model. Since hyperparameters cannot be learned directly from the training data, their optimization is often a process of trial and error conducted manually by the researcher. There are two problems with this approach. Firstly, it is time-consuming and can take days or even weeks of the researcher’s attention. Secondly, it depends on the researcher’s ability to interpret results and choose good hyperparameter settings. These limitations create a strong need to automate the process. Sherpa is a software package that addresses this need.

Existing hyperparameter optimization software can be divided into Bayesian optimization software, bandit and evolutionary algorithm software, framework-specific software, and all-round software. Software implementing Bayesian optimization started with SMAC (Hutter et al., 2011), Spearmint (Snoek et al., 2012), and HyperOpt (Bergstra et al., 2013). More recent software in this regime includes GPyOpt (authors, 2016), RoBO (Klein et al., 2017), DragonFly (Kandasamy et al., 2019), Cornell-MOE (Wu and Frazier, 2016; Wu et al., 2017), and mlrMBO (Bischl et al., 2017). These packages provide high-quality, stand-alone Bayesian optimization implementations, often with unique twists. However, most of them do not provide infrastructure for parallel training.

As an alternative to Bayesian optimization, multi-armed bandits and evolutionary algorithms have recently become popular. HpBandSter implements Hyperband (Li et al., 2017) and BOHB (Falkner et al., 2018), Pbt implements Population Based Training (Jaderberg et al., 2017), PyCMA implements CMA-ES (Igel et al., 2006), and TPot (Olson et al., 2016b, a) provides hyperparameter search via genetic programming.

A number of framework-specific libraries have also been proposed. Auto-Weka (Kotthoff et al., 2017) and Auto-Sklearn (Feurer et al., 2015) focus on WEKA (Holmes et al., 1994) and Scikit-learn (Pedregosa et al., 2011), respectively. Furthermore, a number of packages have been proposed for the machine learning framework Keras (Chollet and others, 2015). Hyperas, Auto-Keras (Jin et al., 2019), Talos, Kopt, and HORD each provide hyperparameter optimization specifically for Keras. These libraries make it easy to get started due to their tight integration with the machine learning framework. However, researchers will inevitably run into limitations when a different machine learning framework is needed.

Lastly, a number of implementations aim at being framework agnostic and also support multiple optimization algorithms. Table 1 shows a detailed comparison of these "all-round" packages to Sherpa. Note that we excluded Google Vizier (Golovin et al., 2017) and similar frameworks from other cloud computing providers since these are not free to use.

Software | Distributed | Visualizations | Bayesian Optimization | Evolutionary | Bandit/Early-stopping
Sherpa | Yes | Yes | Yes | Yes | Yes
Advisor | Yes | No | Yes | Yes | Yes
Chocolate | Yes | No | Yes | Yes | No
Test-Tube (Falcon, 2017) | Yes | No | No | No | No
Ray-Tune (Liaw et al., 2018) | Yes | No | No | Yes | Yes
Optuna (Akiba et al., 2019) | Yes | Yes | Yes | No | Yes
BTB (Gustafson, 2018) | No | No | Yes | No | Yes
Table 1: Feature comparison of hyperparameter optimization frameworks. Bayesian optimization, evolutionary, and bandit/early-stopping refer to support for hyperparameter optimization algorithms based on these methods.

Sherpa is already being used in a wide variety of applications such as machine learning methods (Sadowski and Baldi, 2018), solid state physics (Cao et al., 2019), particle physics (Baldi et al., 2019), medical image analysis (Ritter et al., 2019), and cyber security (Langford et al., 2019). Since the number of machine learning applications is growing rapidly, we can expect a growing need for hyperparameter optimization software such as Sherpa.

2 Software Description

2.1 Hyperparameter Optimization

We begin by laying out the components of a hyperparameter optimization. Consider the training of a machine learning model. A user has a model that is being trained with data. Before training there are hyperparameters that need to be set. At the end of the training we obtain an objective value.

This workflow can be illustrated via the training of a neural network. The model is a neural network. The data are images that the neural network is trained on. The hyperparameter setting is the number of hidden layers of the neural network. The objective is the prediction accuracy on a hold-out dataset obtained at the end of training.

For automated hyperparameter optimization we also need hyperparameter ranges, a results table, and a hyperparameter optimization algorithm. The hyperparameter ranges define what values each hyperparameter is allowed to take. The results store hyperparameter settings and their associated objective value. Finally, the algorithm takes results and ranges and produces a new suggestion for a hyperparameter setting. We refer to this suggestion as a trial.

For the neural network example the hyperparameter range might be 1, 2, 3, or 4 hidden layers. We might have previous results that 1 corresponds to 80% accuracy and 3 to 90% accuracy. The algorithm might then produce a new trial with 4 hidden layers. After training the neural network with 4 hidden layers we find it achieves 88% accuracy and add this to the results. Then the next trial is suggested.
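The workflow above can be sketched in plain Python. This is an illustration of the general suggest-evaluate-record loop, not Sherpa's API; the names (suggest, train_and_evaluate) are hypothetical, and the accuracies are the made-up values from the example.

```python
import random

random.seed(0)

# Hyperparameter range: allowed numbers of hidden layers.
hidden_layer_range = [1, 2, 3, 4]

# Results table: maps a hyperparameter setting to its objective value.
# Previous results: 1 layer -> 80% accuracy, 3 layers -> 90% accuracy.
results = {1: 0.80, 3: 0.90}

def suggest(results, value_range):
    """A trivial 'algorithm': propose an untried setting at random."""
    untried = [v for v in value_range if v not in results]
    return random.choice(untried) if untried else None

def train_and_evaluate(num_layers):
    """Stand-in for model training; returns a made-up accuracy."""
    return {1: 0.80, 2: 0.85, 3: 0.90, 4: 0.88}[num_layers]

# One optimization step: suggest a trial, evaluate it, record the result.
trial = suggest(results, hidden_layer_range)
results[trial] = train_and_evaluate(trial)

# The best setting so far can be read off the results table.
best = max(results, key=results.get)
```

Repeating this step until the range is exhausted (or a trial budget runs out) is exactly the loop that an automated hyperparameter optimizer runs on the user's behalf.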

2.2 Components

We now describe how Sherpa implements the components described in Section 2.1. Sherpa implements hyperparameter ranges as sherpa.Parameter objects. The algorithm is implemented as a sherpa.algorithms.Algorithm object. A list of hyperparameter ranges and an algorithm are combined to create a sherpa.Study (Figure 1). The study stores the results. Trials are implemented as sherpa.Trial objects.

Figure 1: Diagram showing Sherpa’s Study class.

Sherpa implements two user interfaces. We will refer to the two interfaces as API mode and parallel mode.

2.3 API Mode

In API mode the user interacts with the Study object. Given a study s:

  1. A new trial of name t is obtained by calling s.get_suggestion() or by iterating over the study (e.g. for t in s).

  2. Next, t.parameters is used to initialize and train a machine learning model, and s.add_observation(t, objective=o) is called to add objective o for trial t. Invalid observations are automatically excluded from the results.

  3. Finally, s.finalize(t) informs Sherpa that the model training is finished.

Interacting with the Study class is easy and requires minimal setup. The limitation of API mode is that it cannot evaluate trials in parallel.

2.4 Parallel Mode

In parallel mode, multiple trials are evaluated simultaneously. The user provides two scripts: a server script and a machine learning (ML) script. The server script defines the hyperparameter ranges, the algorithm, the job scheduler, and the command to execute the machine learning script. The optimization starts by calling sherpa.optimize.
In the machine learning script the user trains the machine learning model given some hyperparameters and adds the resulting objective value to Sherpa. Using a sherpa.Client called c a trial t is obtained by calling c.get_trial(). To add observations c.send_metrics(trial=t, objective=o) is used.
Internally, sherpa.optimize runs a loop that uses the Study class. Figure 2 illustrates the parallel-mode architecture.

  1. The loop submits new trials if resources are available by submitting a job to the scheduler. Furthermore, the new trials are added to a database. From there they can be retrieved by the client.

  2. The loop updates results by querying the database for new results.

  3. Finally, the loop checks whether jobs have finished. This means resources are free again. In addition, the corresponding trials can be finalized.

If the user’s machine learning script does not submit an objective value, for example because it crashed, Sherpa continues with the next trial.
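The three steps of the internal loop can be illustrated in plain Python. This is a conceptual sketch, not Sherpa's implementation: a thread pool stands in for the job scheduler, a dictionary stands in for the MongoDB database, and the objective function is a hypothetical stand-in for the user's machine learning script.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

random.seed(1)

MAX_CONCURRENT = 2  # analogous to max_concurrent in sherpa.optimize

def evaluate(learning_rate):
    """Stand-in for the user's ML script: returns an objective value."""
    return -(learning_rate - 0.003) ** 2

# The dict plays the role of the database; the executor plays the scheduler.
results = {}
trials = [random.uniform(1e-4, 1e-2) for _ in range(6)]

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as scheduler:
    # Step 1: submit new trials while resources are available.
    futures = {scheduler.submit(evaluate, lr): lr for lr in trials}
    # Steps 2 and 3: collect results as jobs finish, freeing resources.
    for future in as_completed(futures):
        results[futures[future]] = future.result()

best_lr = max(results, key=results.get)
```

At most two evaluations run at a time, mirroring how the loop only submits a trial when a resource is free.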

Figure 2: Architecture diagram for parallel hyperparameter optimization in Sherpa. The user only interacts with Sherpa via the solid red arrows, everything else happens internally.

3 Software Functionalities

3.1 Available Hyperparameter Types

Sherpa supports four hyperparameter types:

  • sherpa.Continuous

  • sherpa.Discrete

  • sherpa.Choice

  • sherpa.Ordinal.

These correspond to a range of floats, a range of integers, an unordered categorical variable, and an ordered categorical variable, respectively. Each parameter has name and range arguments. The range expects a list defining the lower and upper bound for continuous and discrete variables. For choice and ordinal variables the range expects the list of categories.
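The semantics of the four types can be illustrated by sampling from each range in plain Python. This is an illustration of what each type represents, not Sherpa's internals; the sampler names are hypothetical.

```python
import random

random.seed(0)

def sample_continuous(low, high):
    """sherpa.Continuous: a range of floats, e.g. a learning rate."""
    return random.uniform(low, high)

def sample_discrete(low, high):
    """sherpa.Discrete: a range of integers, e.g. hidden units."""
    return random.randint(low, high)

def sample_categorical(categories):
    """sherpa.Choice / sherpa.Ordinal: categories given as a list.
    (Ordinal categories additionally carry an order.)"""
    return random.choice(categories)

lr = sample_continuous(1e-4, 1e-2)
units = sample_discrete(32, 128)
act = sample_categorical(['relu', 'tanh', 'sigmoid'])
```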

3.2 Diversity of Algorithms

Sherpa aims to help researchers at various stages in their model development. For this reason, it provides a choice of hyperparameter tuning algorithms. The following optimization algorithms are currently supported.

  • sherpa.algorithms.RandomSearch:
    Random Search (Bergstra and Bengio, 2012) samples hyperparameter settings uniformly from the specified ranges. It is a robust algorithm because it explores the space uniformly. Furthermore, using the dashboard the user can draw their own inferences from the results.

  • sherpa.algorithms.GridSearch:
    Grid Search follows a grid over the hyperparameter space and evaluates all combinations. It is useful to systematically explore one or two hyperparameters. It is not recommended for more than two hyperparameters.

  • sherpa.algorithms.bayesian_optimization.GPyOpt:
    Bayesian optimization is a model-based search. For each trial it picks the most promising hyperparameter setting based on prior results. Sherpa’s implementation wraps the package GPyOpt (authors, 2016).

  • sherpa.algorithms.successive_halving.SuccessiveHalving:
    Asynchronous Successive Halving (ASHA) (Li et al., 2018) is a hyperparameter optimization algorithm based on multi-armed bandits. It allows the efficient exploration of a large hyperparameter space. This is accomplished by the early stopping of unpromising trials.

  • sherpa.algorithms.PopulationBasedTraining:
    Population-based Training (PBT) (Jaderberg et al., 2017) is an evolutionary algorithm. The algorithm jointly optimizes a population of models and their hyperparameters. This is achieved by adjusting hyperparameters during training. It is particularly suited for neural network training hyperparameters such as learning rate, weight decay, or batch size.

  • sherpa.algorithms.LocalSearch:
    Local Search is a heuristic algorithm. It starts with a seed hyperparameter setting and randomly perturbs one hyperparameter at a time. If a new setting improves on the seed, it becomes the new seed. This algorithm is particularly useful if the user already has a well-performing hyperparameter setting.

All implemented algorithms allow parallel evaluation and can be used with all available parameter types. An empirical comparison of the algorithms can be found in the documentation (https://parameter-sherpa.readthedocs.io/en/latest/algorithms/algorithms.html).
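As an example of how one of these algorithms proceeds, Local Search can be sketched in a few lines of plain Python. This is a conceptual sketch with a synthetic objective and hypothetical perturbation rules, not Sherpa's implementation.

```python
import random

random.seed(0)

def objective(params):
    """Synthetic objective, best at num_units=64 and learning_rate near 0.003."""
    return (-abs(params['num_units'] - 64)
            - 1000 * abs(params['learning_rate'] - 0.003))

def perturb(params):
    """Randomly perturb exactly one hyperparameter."""
    new = dict(params)
    if random.random() < 0.5:
        step = random.choice([-16, 16])
        new['num_units'] = max(32, min(128, new['num_units'] + step))
    else:
        new['learning_rate'] *= random.choice([0.5, 2.0])
    return new

# User-provided seed setting (the well-performing starting point).
seed = {'num_units': 32, 'learning_rate': 0.001}
best_score = objective(seed)

for _ in range(100):
    candidate = perturb(seed)
    score = objective(candidate)
    if score > best_score:  # improvement: candidate becomes the new seed
        seed, best_score = candidate, score
```

Because only improving settings replace the seed, the search never moves to a worse configuration, which is why it works well when starting from an already good setting.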

3.3 Accounting for Random Variation

Sherpa can account for variation via the Repeat algorithm. The objective value of a model may vary between training runs, for example due to random initialization or stochastic training. The Repeat algorithm runs each hyperparameter setting multiple times so that this variation can be taken into account when analyzing results.
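The idea behind Repeat can be sketched in plain Python. This illustrates the concept only; it is not Sherpa's Repeat implementation, and the noisy objective is a hypothetical stand-in for a training run with random initialization.

```python
import random
from statistics import mean, stdev

random.seed(0)

def noisy_objective(learning_rate):
    """Stand-in for a training run whose accuracy varies between runs."""
    return 0.9 - 100 * (learning_rate - 0.003) ** 2 + random.gauss(0, 0.01)

def repeat(setting, num_times=5):
    """Evaluate the same hyperparameter setting several times and
    summarize the runs, so variation is visible in the results."""
    scores = [noisy_objective(setting) for _ in range(num_times)]
    return mean(scores), stdev(scores)

avg, spread = repeat(0.003, num_times=5)
```

Comparing settings by their mean and spread, rather than a single run, avoids mistaking a lucky run for a genuinely better hyperparameter setting.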

3.4 Visualization Dashboard

Sherpa provides an interactive web-based dashboard. It allows the user to monitor progress of the hyperparameter optimization in real time. Figure 3 shows a screenshot of the dashboard.

At the top of the dashboard is a parallel coordinates plot (Inselberg and Dimsdale, 1987; Hauser et al., 2002). It allows exploration of relationships between hyperparameter settings and objective values (Figure 3 top). Each vertical axis corresponds to a hyperparameter or the objective. The axes can be brushed to select subsets of trials. The plot is implemented using the D3.js parallel-coordinates library by Chang (2019). At the bottom right is a line chart showing objective values against training iteration (Figure 3 bottom right). This chart allows the user to monitor the training progress of each trial and to analyze whether a trial’s training converged. At the bottom left is a table of all completed trials (Figure 3 bottom left). Hovering over trials in the table highlights the corresponding lines in the plots. Finally, the dashboard has a stopping button (Figure 3 top right corner) that allows the user to cancel the training of unpromising trials.

The dashboard runs automatically during a hyperparameter optimization. It can be accessed in a web-browser via a link provided by Sherpa. The dashboard is useful to quickly evaluate questions such as:

  • Are the selected hyperparameter ranges appropriate?

  • Is training unstable for some hyperparameter settings?

  • Does a particular hyperparameter have little impact on the performance of the machine learning algorithm?

  • Are the best observed hyperparameter settings consistent?

Based on these observations the user can refine the hyperparameter ranges or choose a different algorithm, if appropriate.

Figure 3: The dashboard provides a parallel coordinates plot (top) and a table of finished trials (bottom left). Trials in progress are shown via a progress line chart (bottom right). The figure is best viewed in the PDF version by zooming in.

3.5 Scaling up with a Cluster

In parallel mode, Sherpa evaluates multiple trials concurrently. A job scheduler is responsible for running the user’s machine learning script. The following job schedulers are implemented.

  • The LocalScheduler evaluates parallel trials on the same computation node. This scheduler is useful for running on multiple local CPU cores or GPUs. It has a simple resource handler for GPU allocation (see Figure 5 for an example).

  • The SGEScheduler uses Sun Grid Engine (SGE) (Gentzsch, 2001). Submission arguments and an environment profile can be specified via arguments to the scheduler.

  • The SLURMScheduler is based on SLURM (Yoo et al., 2003). Its interface is similar to the SGEScheduler.

Concurrency between workers is handled via MongoDB, a NoSQL database program. Parallel mode therefore requires MongoDB to be installed on the system.

4 Illustrative Examples

4.1 Handwritten Digits Classification with a Neural Network

The following is an example of a Sherpa hyperparameter optimization. It uses the MNIST handwritten digits dataset (Deng, 2012). A Keras neural network is used to classify the digits. The neural network has one hidden layer and a softmax output. The hyperparameters are the learning rate of the Adam optimizer (Kingma and Ba, 2014), the number of hidden units, and the hidden layer activation function. The search is first conducted using Sherpa’s API mode. After that we show the same example using Sherpa’s parallel mode.

4.1.1 API Mode

Figure 4 shows the hyperparameter optimization in Sherpa’s API mode. The script starts with imports and loading of the MNIST dataset. Next, the hyperparameters learning_rate, num_units, and activation are defined. These refer to the Adam learning rate, number of hidden layer units, and hidden layer activation function, respectively. As optimization algorithm the GPyOpt algorithm is chosen. Hyperparameter ranges and algorithm are combined via the Study. The lower_is_better flag is set to False because we maximize the classification accuracy. After that a for-loop iterates over the study. The for-loop yields a trial at each iteration. A Keras model is instantiated using the hyperparameter settings. The Keras model is iteratively trained and evaluated via an inner for-loop. We add an observation for each iteration and use finalize after the training is finished. Note that we pass the loss as context to add_observation. The context accepts a dictionary with any additional metrics that the user wants to record. Code to replicate this example is available as a Jupyter notebook (https://github.com/sherpa-ai/sherpa/blob/master/examples/keras_mnist_mlp.ipynb) and on Google Colab (https://colab.research.google.com/drive/1I19R1GfKPjlgNdHlxJwNC4PitvySsdon). A video tutorial is also available on YouTube (https://youtu.be/-exnF3uv0Ws). Tutorials using the Successive Halving and Population Based Training algorithms are also available (https://github.com/sherpa-ai/sherpa/blob/master/examples/keras_mnist_mlp_successive_halving.ipynb, https://github.com/sherpa-ai/sherpa/blob/master/examples/keras_mnist_mlp_population_based_training.ipynb).

import sherpa
import sherpa.algorithms.bayesian_optimization as bayesian_optimization
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.datasets import mnist
from keras.optimizers import Adam
epochs = 15
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train/255.0, x_test/255.0
# Sherpa setup
parameters = [sherpa.Continuous('learning_rate', [1e-4, 1e-2]),
              sherpa.Discrete('num_units', [32, 128]),
              sherpa.Choice('activation',
                            ['relu', 'tanh', 'sigmoid'])]
algorithm = bayesian_optimization.GPyOpt(max_num_trials=50)
study = sherpa.Study(parameters=parameters,
                     algorithm=algorithm,
                     lower_is_better=False)
for trial in study:
    lr = trial.parameters['learning_rate']
    num_units = trial.parameters['num_units']
    act = trial.parameters['activation']
    # Create model
    model = Sequential([Flatten(input_shape=(28, 28)),
                        Dense(num_units, activation=act),
                        Dense(10, activation='softmax')])
    optimizer = Adam(lr=lr)
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=['accuracy'])
    # Train model
    for i in range(epochs):
        model.fit(x_train, y_train)
        loss, accuracy = model.evaluate(x_test, y_test)
        study.add_observation(trial=trial, iteration=i,
                              objective=accuracy,
                              context={'loss': loss})
    study.finalize(trial=trial)
Figure 4: An example showing how to tune the hyperparameters of a neural network on the MNIST dataset using Sherpa in API mode.

4.1.2 Parallel Mode

We now show the same hyperparameter optimization using Sherpa’s parallel mode. Figure 5 (top) shows the server script. First, the hyperparameters and search algorithm are defined. This time we also define a LocalScheduler instance. Hyperparameters, algorithm, and scheduler are passed to the sherpa.optimize function. We also pass the command 'python trial.py'. The command indicates how to execute the user’s machine learning script. Furthermore, the argument max_concurrent=2 indicates that two evaluations will be running at a time. Figure 5 (bottom) shows the machine learning script. First, we set environment variables for GPU configuration. Next we create a Client. To obtain hyperparameters we call the client’s get_trial method. Furthermore, during training we call the client’s send_metrics method. This replaces add_observation in parallel mode. Also, in parallel mode no finalize call is needed.

import sherpa
import sherpa.algorithms.bayesian_optimization as bayesian_optimization
from sherpa.schedulers import LocalScheduler
params = [sherpa.Continuous('learning_rate', [1e-4, 1e-2]),
          sherpa.Discrete('num_units', [32, 128]),
          sherpa.Choice('activation',
                        ['relu', 'tanh', 'sigmoid'])]
alg = bayesian_optimization.GPyOpt(max_num_trials=50)
sched = LocalScheduler(resources=[0,1])
sherpa.optimize(parameters=params, algorithm=alg,
                scheduler=sched, lower_is_better=False,
                command='python trial.py', max_concurrent=2)
import sherpa
import os
GPU_ID = os.environ['SHERPA_RESOURCE']
os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.datasets import mnist
from keras.optimizers import Adam
epochs = 15
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train/255.0, x_test/255.0
# Sherpa client
client = sherpa.Client()
trial = client.get_trial()
lr = trial.parameters['learning_rate']
num_units = trial.parameters['num_units']
act = trial.parameters['activation']
# Create model
model = Sequential([Flatten(input_shape=(28, 28)),
                    Dense(num_units, activation=act),
                    Dense(10, activation='softmax')])
optimizer = Adam(lr=lr)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])
# Train model
for i in range(epochs):
    model.fit(x_train, y_train)
    loss, accuracy = model.evaluate(x_test, y_test)
    client.send_metrics(trial=trial, iteration=i,
                        objective=accuracy,
                        context={'loss': loss})
Figure 5: A code listing showing how to use Sherpa in parallel mode to tune the hyperparameters of a neural network trained on the handwritten digits dataset MNIST. The top code listing shows the server-script. The bottom listing shows the trial-script.

4.2 Deep learning for Cloud Resolving Models

4.2.1 Introduction

The following illustrates an example of a Sherpa hyperparameter optimization in the field of climate modeling, specifically cloud resolving models (CRM). We apply Sherpa to optimize the deep neural network (DNN) of Rasp et al. (2018).

The input to the model is a 94-dimensional vector. Features include temperature, humidity, meridional wind, surface pressure, incoming solar radiation, sensible heat flux, and latent heat flux. The output of the DNN is a 65-dimensional vector. It is composed of the sum of the CRM and radiative heating rates, the CRM moistening rate, the net radiative fluxes at the top of the atmosphere and surface of the earth, and the observed precipitation.

4.2.2 General Hyperparameter Optimization

Initially, a random search was conducted on the following hyperparameters: batch normalization (Ioffe and Szegedy, 2015), dropout (Srivastava et al., 2014; Baldi and Sadowski, 2013), Leaky ReLU coefficient (Agostinelli et al., 2014), learning rate, nodes per hidden layer, and number of hidden layers. The parameter ranges were chosen to encompass the parameters specified in Rasp et al. (2018). From the dashboard (Figure 6) we identify that the best performing configurations have low dropout, leaky ReLU coefficients mostly around 0.3 or larger, and learning rates mostly near 0.002. The majority of good models have 8 layers and batch normalization. However, the number of units does not seem to have a large impact. The hyperparameter ranges and best configuration are provided in Tables 2 and 3 in the appendix.

4.2.3 Optimization of the Learning Rate Schedule

An additional search was conducted to fine-tune the DNN training hyperparameters. Specifically, the initial learning rate and the learning rate decay were optimized. The range of initial learning rate values was of the best value from Section 4.2.2. The range of learning rate decay factors was to . The learning rate gets multiplied by this factor after every epoch to produce a new learning rate. In comparison, the model in Rasp et al. (2018) uses a decay factor of approximately . The remaining hyperparameters were set to the best configuration from Section 4.2.2. A total of 50 trials were evaluated via random search. The best learning rate was found to be . The best decay value was found as . The overall optimal hyperparameter setting is shown in Table 3 of the supplementary materials.

4.2.4 Results

We compare the model found by Sherpa to the model from Rasp et al. (2018) via plots (Figure 7). The plots show the coefficient of determination at different pressures and latitudes. We find that the Sherpa model consistently outperforms the comparison model. In particular, it performs well even at latitudes for which the prior model fails. Figure 6(f) shows that the Sherpa model’s loss reduces further after the Rasp et al. (2018) model has converged. This is the result of the learning rate fine-tuning from Section 4.2.3.

5 Impact

Machine learning is used to an ever greater extent in the scientific community. Nearly every machine learning application can benefit from hyperparameter optimization. The issue is that researchers often do not have a practical tool at hand. Therefore, they usually resort to manually tuning parameters. Sherpa aims to be this tool. Its goal is to require minimal learning from the user to get started. It also aims to support the user as their needs for parallel evaluation or exotic optimization algorithms grow. As shown by the references in Section 1, Sherpa is already being used by researchers to achieve improvements in a variety of domains. In addition, the software has been downloaded more than 6000 times from the PyPI Python package index (https://pepy.tech/project/parameter-sherpa). It also has over 160 stars on the software hosting website GitHub. A GitHub star means that another user has added the software to a personal list for later reference.

6 Conclusions

Sherpa is a flexible open-source software for robust hyperparameter optimization of machine learning models. It provides the user with several interchangeable hyperparameter optimization algorithms, each of which may be useful at different stages of model development. Its interactive dashboard allows the user to monitor and analyze the results of multiple hyperparameter optimization runs in real-time. It also allows the user to see patterns in the performance of hyperparameters to judge the robustness of individual settings. Sherpa can be used on a laptop or in a distributed fashion on a cluster. In summary, rather than a black-box that spits out one hyperparameter setting, Sherpa provides the tools that a researcher needs when doing hyperparameter exploration and optimization for the development of machine learning models.

7 Conflict of Interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Acknowledgements

We would like to thank Amin Tavakoli, Christine Lee, Gregor Urban, and Siwei Chen for helping test the software and providing useful feedback, and Yuzo Kanomata for computing support. This material is based upon work supported by the National Science Foundation under grant number 1633631. We also wish to acknowledge a hardware grant from NVIDIA.

References

  • F. Agostinelli, M. Hoffman, P. Sadowski, and P. Baldi (2014) Learning activation functions to improve deep neural networks. arXiv preprint arXiv:1412.6830. Cited by: Table 2, §4.2.2.
  • T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama (2019) Optuna: a next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2623–2631. Cited by: Table 1.
  • The GPyOpt authors (2016) GPyOpt: a Bayesian optimization framework in Python. Note: http://github.com/SheffieldML/GPyOpt Cited by: §1, item 3.
  • P. Baldi, J. Bian, L. Hertel, and L. Li (2019) Improved energy reconstruction in NOvA with regression convolutional neural networks. Physical Review D 99 (1), pp. 012011. Cited by: §1.
  • P. Baldi and P. J. Sadowski (2013) Understanding dropout. In Advances in neural information processing systems, pp. 2814–2822. Cited by: Table 2, §4.2.2.
  • J. Bergstra and Y. Bengio (2012) Random search for hyper-parameter optimization. Journal of Machine Learning Research 13 (Feb), pp. 281–305. Cited by: item 1.
  • J. Bergstra, D. Yamins, and D. D. Cox (2013) Hyperopt: a python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, pp. 13–20. Cited by: §1.
  • B. Bischl, J. Richter, J. Bossek, D. Horn, J. Thomas, and M. Lang (2017) mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions. External Links: Link, 1703.03373 Cited by: §1.
  • Z. Cao, Y. Dan, Z. Xiong, C. Niu, X. Li, S. Qian, and J. Hu (2019) Convolutional neural networks for crystal material property prediction using hybrid orbital-field matrix and magpie descriptors. Crystals 9 (4), pp. 191. Cited by: §1.
  • K. Chang (2019) Parallel coordinates. Note: https://github.com/syntagmatic/parallel-coordinates Cited by: §3.4.
  • F. Chollet et al. (2015) Keras. Note: https://keras.io Cited by: §1.
  • L. Deng (2012) The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine 29 (6), pp. 141–142. Cited by: §4.1.
  • W.A. Falcon (2017) Test tube. GitHub. Note: https://github.com/williamfalcon/test-tube Cited by: Table 1.
  • S. Falkner, A. Klein, and F. Hutter (2018) BOHB: robust and efficient hyperparameter optimization at scale. arXiv preprint arXiv:1807.01774. Cited by: §1.
  • M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter (2015) Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems, pp. 2962–2970. Cited by: §1.
  • W. Gentzsch (2001) Sun grid engine: towards creating a compute power grid. In Cluster Computing and the Grid, 2001. Proceedings. First IEEE/ACM International Symposium on, pp. 35–36. Cited by: item 2.
  • D. Golovin, B. Solnik, S. Moitra, G. Kochanski, J. Karro, and D. Sculley (2017) Google vizier: a service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1487–1495. Cited by: §1.
  • L. Gustafson (2018) Bayesian tuning and bandits: an extensible, open source library for automl. M. Eng Thesis, Massachusetts Institute of Technology, Cambridge, MA. External Links: Link Cited by: Table 1.
  • H. Hauser, F. Ledermann, and H. Doleisch (2002) Angular brushing of extended parallel coordinates. In Information Visualization, 2002. INFOVIS 2002. IEEE Symposium on, pp. 127–130. Cited by: §3.4.
  • G. Holmes, A. Donkin, and I. H. Witten (1994) Weka: a machine learning workbench. Cited by: §1.
  • F. Hutter, H. H. Hoos, and K. Leyton-Brown (2011) Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pp. 507–523. Cited by: §1.
  • C. Igel, T. Suttorp, and N. Hansen (2006) A computational efficient covariance matrix update and a (1+1)-cma for evolution strategies. In Proceedings of the 8th annual conference on Genetic and evolutionary computation, pp. 453–460. Cited by: §1.
  • A. Inselberg and B. Dimsdale (1987) Parallel coordinates for visualizing multi-dimensional geometry. In Computer Graphics 1987, pp. 25–44. Cited by: §3.4.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: Table 2, §4.2.2.
  • M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, et al. (2017) Population based training of neural networks. arXiv preprint arXiv:1711.09846. Cited by: §1, item 5.
  • H. Jin, Q. Song, and X. Hu (2019) Auto-keras: an efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1946–1956. Cited by: §1.
  • K. Kandasamy, K. R. Vysyaraju, W. Neiswanger, B. Paria, C. R. Collins, J. Schneider, B. Poczos, and E. P. Xing (2019) Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly. arXiv preprint arXiv:1903.06694. Cited by: §1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
  • A. Klein, S. Falkner, N. Mansur, and F. Hutter (2017) RoBO: a flexible and robust bayesian optimization framework in python. In NIPS 2017 Bayesian Optimization Workshop, Cited by: §1.
  • L. Kotthoff, C. Thornton, H. H. Hoos, F. Hutter, and K. Leyton-Brown (2017) Auto-weka 2.0: automatic model selection and hyperparameter optimization in weka. The Journal of Machine Learning Research 18 (1), pp. 826–830. Cited by: §1.
  • Z. Langford, L. Eisenbeiser, and M. Vondal (2019) Robust signal classification using siamese networks. In Proceedings of the ACM Workshop on Wireless Security and Machine Learning, pp. 1–5. Cited by: §1.
  • L. Li, K. Jamieson, A. Rostamizadeh, E. Gonina, M. Hardt, B. Recht, and A. Talwalkar (2018) Massively parallel hyperparameter tuning. arXiv preprint arXiv:1810.05934. Cited by: item 4.
  • L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar (2017) Hyperband: a novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research 18 (1), pp. 6765–6816. Cited by: §1.
  • R. Liaw, E. Liang, R. Nishihara, P. Moritz, J. E. Gonzalez, and I. Stoica (2018) Tune: a research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118. Cited by: Table 1.
  • R. S. Olson, N. Bartley, R. J. Urbanowicz, and J. H. Moore (2016a) Evaluation of a tree-based pipeline optimization tool for automating data science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO ’16, New York, NY, USA, pp. 485–492. External Links: ISBN 978-1-4503-4206-3, Link, Document Cited by: §1.
  • R. S. Olson, R. J. Urbanowicz, P. C. Andrews, N. A. Lavender, L. C. Kidd, and J. H. Moore (2016b) Applications of evolutionary computation: 19th european conference, evoapplications 2016, porto, portugal, march 30 – april 1, 2016, proceedings, part i. G. Squillero and P. Burelli (Eds.), pp. 123–137. External Links: ISBN 978-3-319-31204-0, Document, Link Cited by: §1.
  • F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. (2011) Scikit-learn: machine learning in python. Journal of machine learning research 12 (Oct), pp. 2825–2830. Cited by: §1.
  • S. Rasp, M. S. Pritchard, and P. Gentine (2018) Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences 115 (39), pp. 9684–9689. Cited by: Figure 7, Appendix A, §4.2.1, §4.2.2, §4.2.3, §4.2.4.
  • C. Ritter, T. Wollmann, P. Bernhard, M. Gunkel, D. M. Braun, J. Lee, J. Meiners, R. Simon, G. Sauter, H. Erfle, et al. (2019) Hyperparameter optimization for image analysis: application to prostate tissue images and live cell data of virus-infected cells. International journal of computer assisted radiology and surgery, pp. 1–11. Cited by: §1.
  • P. Sadowski and P. Baldi (2018) Neural network regression with beta, dirichlet, and dirichlet-multinomial outputs. Cited by: §1.
  • J. Snoek, H. Larochelle, and R. P. Adams (2012) Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959. Cited by: §1.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958. Cited by: Table 2, §4.2.2.
  • J. Wu and P. Frazier (2016) The parallel knowledge gradient method for batch bayesian optimization. In Advances in Neural Information Processing Systems, pp. 3126–3134. Cited by: §1.
  • J. Wu, M. Poloczek, A. G. Wilson, and P. I. Frazier (2017) Bayesian optimization with gradients. In Advances in Neural Information Processing Systems, pp. 5267–5278. Cited by: §1.
  • A. B. Yoo, M. A. Jette, and M. Grondona (2003) Slurm: simple linux utility for resource management. In Workshop on Job Scheduling Strategies for Parallel Processing, pp. 44–60. Cited by: item 3.

Appendix A Deep learning for Cloud Resolving Models

Initially, a random search was conducted over the hyperparameters listed in Table 2.

Name | Options | Parameter Type
Batch Normalization (Ioffe and Szegedy, 2015) | [yes, no] | Choice
Dropout (Srivastava et al., 2014; Baldi and Sadowski, 2013) | [0, 0.25] | Continuous
Leaky ReLU coefficient (Agostinelli et al., 2014) | [0, 0.4] | Continuous
Learning Rate | [0.0001, 0.01] | Continuous (log)
Nodes per Layer | [200, 300] | Discrete
Number of Layers | [8, 10] | Discrete
Table 2: DNN Hyperparameter Search Space.
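For illustration, sampling one trial from the Table 2 search space can be sketched in a few lines of library-agnostic Python (the dictionary keys are our own naming, not Sherpa's API):

```python
import random

def sample_trial(rng=random):
    """Draw one configuration uniformly from the Table 2 search space."""
    num_layers = rng.randint(8, 10)  # Discrete: 8-10 layers
    return {
        "batch_norm": rng.choice([True, False]),    # Choice: [yes, no]
        "dropout": rng.uniform(0.0, 0.25),          # Continuous
        "leaky_relu_alpha": rng.uniform(0.0, 0.4),  # Continuous
        "lr": 10 ** rng.uniform(-4, -2),            # Continuous, log scale
        "nodes_per_layer": [rng.randint(200, 300) for _ in range(num_layers)],
        "num_layers": num_layers,
    }

trial = sample_trial()
```

A random search simply repeats this draw for each trial; in Sherpa the same space would be declared via its parameter objects and a random search algorithm instead of sampled by hand.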

A screenshot of the Sherpa dashboard at the end of the hyperparameter optimization is shown in Figure 6 (best viewed as a PDF, zoomed in). On the dashboard, layer_x refers to the number of nodes in layer x. From Figure 6 one can see that the best-performing configurations have low dropout, leaky ReLU coefficients mostly around 0.3 or larger, and learning rates mostly near 0.002. The majority of good models have 8 layers and use batch normalization. The number of units per layer, however, does not seem to have a large impact.

Figure 6: Screenshot of the dashboard at the end of the initial random search. The 8 best trials were selected by brushing the Objective axis in the parallel coordinates plot.

Following the secondary search for an optimal learning rate schedule (Section 4.2.3), the hyperparameters in Table 3 were found to be overall optimal. The optimized learning rate and decay schedule found by Sherpa are of considerable importance. The loss curves in Figure 6(f) show that the learning rate schedule used in Rasp et al. (2018) forces the learning rate to decay rapidly, causing the loss to plateau early. The schedule discovered by Sherpa, on the other hand, allows the DNN to keep learning, further reducing the loss.

Batch Normalization | No
Dropout | 0.0
Leaky ReLU coefficient | 0.3957
Learning Rate | 0.001301
Learning Rate Decay | 0.843784
Nodes per Layer | [299, 269, 248, 293, 251, 281, 258, 277, 209, 270]
Number of layers | 10
Table 3: Best hyperparameter configuration found by Sherpa.
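Assuming the decay factor in Table 3 is applied multiplicatively once per epoch (a common convention; the exact update rule used in the experiments is not spelled out here), the resulting schedule can be sketched as:

```python
def lr_schedule(epoch, lr0=0.001301, decay=0.843784):
    """Exponential per-epoch decay: lr_t = lr0 * decay**epoch (assumed form)."""
    return lr0 * decay ** epoch

# With these values the rate decays smoothly rather than collapsing:
# after 10 epochs it is still roughly 18% of its initial value.
```

This gentler decay is consistent with the behavior described above: the network keeps receiving useful gradient steps instead of plateauing early.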

Figure 7 displays results of the optimized model as they pertain to climate modeling metrics. These plots show the coefficient of determination (R²) at corresponding pressures and latitudes; larger values indicate that the DNN is able to explain more of the variance in the corresponding variable. Of particular importance are regions where the Sherpa model performs well while the previously published model fails (e.g. latitudes between -25 and 25 in Figure 6(c)). At all pressures and latitudes the Sherpa model outperforms the previously published model, thereby achieving a new state of the art for this dataset.
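For reference, the coefficient of determination plotted in Figure 7 can be computed from predictions and targets as follows (a minimal sketch of the standard definition; the paper's exact evaluation code may differ):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A value of 1 means the model explains all variance in the target variable, 0 means it does no better than predicting the mean, and negative values mean it does worse.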

Figure 7: Case study results for an optimized deep neural network applied to cloud resolving models. Figures 6(a) and 6(b) show the coefficient of determination vs. pressure for convective heating rate and convective moistening rate, respectively. Figures 6(c), 6(d), and 6(e) show values against latitude, and 6(f) shows loss trajectories. All figures compare the optimized Sherpa model against the model developed by Rasp et al. (2018).