Multi-level Training and Bayesian Optimization for Economical Hyperparameter Optimization

07/20/2020
by Yang Yang, et al.

Hyperparameters play a critical role in the performance of many machine learning methods. Determining their best settings, a task known as hyperparameter optimization (HPO), is difficult because of the large number of hyperparameters and the excessive training time required to evaluate each configuration. In this paper, we develop an effective approach to reducing the total training time required for HPO. In the initialization, a nested Latin hypercube design is used to select hyperparameter configurations for two types of training: heavy training and light training. We propose a truncated additive Gaussian process model that calibrates the approximate performance measurements generated by light training against the accurate performance measurements generated by heavy training. Based on this model, a sequential model-based algorithm is developed to generate a performance profile of the configuration space and to find optimal configurations. The proposed approach demonstrates competitive performance when applied to synthetic examples, support vector machines, fully connected networks, and convolutional neural networks.
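The abstract does not specify the truncated additive Gaussian process beyond this summary, but the underlying two-fidelity idea can be sketched: many cheap "light" trainings give biased measurements, a nested handful of expensive "heavy" trainings give accurate ones, and a GP on the discrepancy calibrates the former against the latter. The following is a minimal illustration only, assuming a scikit-learn GP in place of the authors' truncated additive model; the toy objective, `light_score`, `heavy_score`, and the nested-subset design stand-in are hypothetical placeholders, not the paper's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical toy objective: validation error over one hyperparameter
# x in [0, 1]. "Light" training is cheap but systematically biased;
# "heavy" training is expensive but accurate.
def heavy_score(x):
    return np.sin(3 * x) + 0.1 * x          # accurate measurement

def light_score(x):
    return np.sin(3 * x) + 0.4 * (x - 0.5)  # cheap, biased measurement

# Stand-in for the nested Latin hypercube design: many light
# evaluations, with the heavy points a nested subset of them.
x_light = rng.uniform(0, 1, size=(40, 1))
x_heavy = x_light[:8]

y_light = light_score(x_light).ravel()
y_heavy = heavy_score(x_heavy).ravel()

# Additive calibration: model the discrepancy delta(x) = heavy - light
# with a GP fit at the nested heavy points.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
delta_gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
delta_gp.fit(x_heavy, y_heavy - y_light[:8])

# Light training is cheap, so evaluating it on a dense grid stands in
# for running many light trainings; the GP correction calibrates them.
x_grid = np.linspace(0, 1, 200).reshape(-1, 1)
calibrated = light_score(x_grid).ravel() + delta_gp.predict(x_grid)

# Suggest the next configuration for heavy training: the calibrated minimizer.
x_next = x_grid[np.argmin(calibrated)]
print(f"suggested configuration: x = {x_next[0]:.3f}")
```

This sketch only conveys the calibration intuition; the paper's method additionally models both fidelities jointly with a truncated additive GP and wraps the suggestion step in a sequential model-based loop.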


