
Weighted Sampling for Combined Model Selection and Hyperparameter Tuning

by Dimitrios Sarigiannis et al.

The combined algorithm selection and hyperparameter tuning (CASH) problem is characterized by large hierarchical hyperparameter spaces. Model-free hyperparameter tuning methods can explore such large spaces efficiently, since they are highly parallelizable across multiple machines. When no prior knowledge or meta-data exists to boost their performance, these methods commonly sample configurations at random from a uniform distribution. In this work, we propose a novel sampling distribution as an alternative to uniform sampling and prove theoretically that it has a better chance of finding the best configuration in a worst-case setting. To compare competing methods rigorously in an experimental setting, one must perform statistical hypothesis testing. We show that there is little to no agreement in the automated machine learning literature regarding which tests should be used, contrast this disparity with the methods recommended by the broader statistics literature, and identify the most suitable approach. We then select three popular model-free solutions to CASH and evaluate their performance, under both uniform sampling and the proposed sampling scheme, across 67 datasets from the OpenML platform. We investigate the trade-off between exploration and exploitation across the three algorithms, and verify empirically that the proposed sampling distribution improves performance in all cases.
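To make the setting concrete, model-free random search over a hierarchical CASH space can be sketched as below. Note that the concrete weighted distribution is the paper's contribution and is not specified in this abstract; the search space, its parameter names and ranges, and the dimensionality-based weighting used here are all illustrative assumptions, not the authors' method.

```python
import random

# Hypothetical hierarchical CASH space: each algorithm (model family) has its
# own hyperparameter sub-space. Names and ranges are illustrative only.
SEARCH_SPACE = {
    "svm":           {"C": (1e-3, 1e3), "gamma": (1e-4, 1e1)},
    "random_forest": {"n_estimators": (10, 500), "max_depth": (2, 32)},
    "knn":           {"n_neighbors": (1, 50)},
}

def sample_configuration(weights=None, rng=random):
    """Draw one configuration from the hierarchical space.

    With weights=None, every algorithm is equally likely at the top level
    (uniform sampling). A weight vector biases the top-level draw instead,
    e.g. toward algorithms with larger hyperparameter sub-spaces.
    """
    algorithms = list(SEARCH_SPACE)
    if weights is None:
        algo = rng.choice(algorithms)                      # uniform over algorithms
    else:
        algo = rng.choices(algorithms, weights=weights, k=1)[0]  # weighted draw
    # Second level: sample each hyperparameter uniformly within its range.
    params = {name: rng.uniform(lo, hi)
              for name, (lo, hi) in SEARCH_SPACE[algo].items()}
    return algo, params

# One possible (assumed) weighting: proportional to the dimensionality of each
# algorithm's sub-space, so larger sub-spaces are sampled more often.
dim_weights = [len(sub) for sub in SEARCH_SPACE.values()]
algo, params = sample_configuration(weights=dim_weights)
```

Because each trial is an independent draw, many such configurations can be sampled and evaluated in parallel across machines, which is the property the abstract highlights for model-free methods.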

