Using Sequential Statistical Tests to Improve the Performance of Random Search in Hyperparameter Tuning

12/23/2021
by Philip Buczak et al.

Hyperparameter tuning is one of the most time-consuming parts of machine learning: the performance of a large number of different hyperparameter settings has to be evaluated to find the best one. Although modern optimization algorithms exist that minimize the number of evaluations needed, the evaluation of a single setting is still expensive: using a resampling technique, the machine learning method has to be fitted a fixed number K of times on different training data sets. The mean value of these K fits is then used as an estimator for the performance of the setting. Many hyperparameter settings could be discarded after fewer than K resampling iterations because they are already clearly inferior to high-performing settings. In practice, however, the resampling is often performed until the very end, wasting a lot of computational effort. We propose to use a sequential testing procedure to minimize the number of resampling iterations needed to detect inferior parameter settings. To do so, we first analyze the distribution of resampling errors and find that a log-normal distribution is a promising model. We then build a sequential testing procedure assuming this distribution and employ it within a random search algorithm. We compare a standard random search with our enhanced sequential random search in several realistic data situations. The sequential random search finds comparably good hyperparameter settings, yet the computational time needed to find them is roughly halved.
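The early-discarding idea can be sketched in code. The following is a simplified illustration, not the authors' exact procedure: after each resampling fold, a candidate's log-transformed errors (motivated by the log-normality observation above) are summarized into a lower confidence bound on the mean log-error; once that bound exceeds the incumbent best mean, the candidate is discarded before all K folds run. All names (`should_stop`, `sequential_random_search`, `run_fold`), the confidence level, and the toy error model are illustrative assumptions.

```python
import math
import random

Z = 2.326  # one-sided ~99% standard-normal quantile; the level is an illustrative choice


def should_stop(logs, best_mean):
    """Sequential check: discard the candidate if the lower confidence bound
    of its mean log-error already exceeds the incumbent's mean log-error."""
    k = len(logs)
    if k < 3:  # require a few folds before testing
        return False
    m = sum(logs) / k
    var = sum((x - m) ** 2 for x in logs) / (k - 1)
    return m - Z * math.sqrt(var / k) > best_mean


def sequential_random_search(sample_setting, run_fold, n_settings=20, k_max=10, seed=0):
    """Random search that aborts resampling early for clearly inferior settings."""
    rng = random.Random(seed)
    best_mean, best_setting, total_folds = float("inf"), None, 0
    for _ in range(n_settings):
        s = sample_setting(rng)
        logs = []
        for _fold in range(k_max):
            logs.append(math.log(run_fold(s, rng)))  # one resampling iteration
            total_folds += 1
            if should_stop(logs, best_mean):
                break  # clearly inferior: skip the remaining folds
        else:
            m = sum(logs) / len(logs)  # survived all K folds: full estimate
            if m < best_mean:
                best_mean, best_setting = m, s
    return best_setting, math.exp(best_mean), total_folds


def run_fold(s, rng):
    """Toy stand-in for one resampling fit: a setting-dependent true error
    perturbed by log-normal fold noise (an assumed error model)."""
    true_err = 0.1 + (s - 0.7) ** 2
    return true_err * math.exp(rng.gauss(0.0, 0.2))


best, err, folds = sequential_random_search(lambda r: r.random(), run_fold)
print(f"best setting ~ {best:.2f}, folds used: {folds} of {20 * 10}")
```

With `n_settings=20` and `k_max=10`, a full resampling would cost 200 folds; the sequential check typically spends far fewer, because most candidates are discarded after a handful of folds once a good incumbent exists.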


research · 07/19/2018 · Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks
Most learning algorithms require the practitioner to manually set the va...

research · 06/06/2017 · Random Search for Hyperparameters using Determinantal Point Processes
We propose the use of k-determinantal point processes in hyperparameter ...

research · 08/17/2022 · Random Search Hyper-Parameter Tuning: Expected Improvement Estimation and the Corresponding Lower Bound
Hyperparameter tuning is a common technique for improving the performanc...

research · 11/06/2019 · Auptimizer – an Extensible, Open-Source Framework for Hyperparameter Tuning
Tuning machine learning models at scale, especially finding the right hy...

research · 06/30/2021 · A Critical Analysis of Recursive Model Indexes
The recursive model index (RMI) has recently been introduced as a machin...

research · 07/29/2020 · Quantity vs. Quality: On Hyperparameter Optimization for Deep Reinforcement Learning
Reinforcement learning algorithms can show strong variation in performan...

research · 01/08/2020 · HyperSched: Dynamic Resource Reallocation for Model Development on a Deadline
Prior research in resource scheduling for machine learning training work...
