A Lipschitz Bandits Approach for Continuous Hyperparameter Optimization

02/03/2023
by   Yasong Feng, et al.

One of the most critical problems in machine learning is HyperParameter Optimization (HPO), since the choice of hyperparameters has a significant impact on final model performance. Although many HPO algorithms exist, they either have no theoretical guarantees or require strong assumptions. To this end, we introduce BLiE, a Lipschitz-bandit-based algorithm for HPO that only assumes Lipschitz continuity of the objective function. BLiE exploits the landscape of the objective function to adaptively search over the hyperparameter space. Theoretically, we show that (i) BLiE finds an ϵ-optimal hyperparameter with a total budget of O((1/ϵ)^(d_z + β)), where d_z and β are problem-intrinsic quantities; (ii) BLiE is highly parallelizable. Empirically, we demonstrate that BLiE outperforms state-of-the-art HPO algorithms on benchmark tasks. We also apply BLiE to search for noise schedules of diffusion models. Compared with the default schedule, the schedule found by BLiE greatly improves the sampling speed.
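To illustrate the general idea of landscape-adaptive search under a Lipschitz assumption, here is a minimal sketch (not the paper's actual BLiE algorithm, whose batched elimination and budget allocation are not described in this abstract): repeatedly subdivide the search space, evaluate each cell's midpoint, and discard cells that the Lipschitz bound proves cannot contain the maximum. All names and the constant `L` below are illustrative assumptions.

```python
# Generic Lipschitz-bandit-style adaptive search on an interval
# (an illustrative sketch, NOT the BLiE algorithm from the paper).

def lipschitz_search(f, lo, hi, L, rounds=8):
    """Maximize f on [lo, hi], assuming |f(x) - f(y)| <= L * |x - y|."""
    cells = [(lo, hi)]
    best_x, best_val = lo, f(lo)
    for _ in range(rounds):
        evaluated = []
        for a, b in cells:
            mid = (a + b) / 2
            val = f(mid)
            if val > best_val:
                best_x, best_val = mid, val
            evaluated.append((a, b, val))
        # Keep only cells whose Lipschitz upper bound can still beat the best:
        # the maximum inside [a, b] is at most f(mid) + L * (b - a) / 2.
        survivors = [(a, b) for a, b, val in evaluated
                     if val + L * (b - a) / 2 >= best_val]
        # Split each surviving cell in half, so effort concentrates
        # on the promising regions of the landscape.
        cells = [half for a, b in survivors
                 for half in ((a, (a + b) / 2), ((a + b) / 2, b))]
    return best_x, best_val

# Toy objective with maximum at x = 0.3 (stand-in for validation accuracy).
x, v = lipschitz_search(lambda x: -(x - 0.3) ** 2, 0.0, 1.0, L=2.0)
```

The elimination step is what makes the search adaptive: flat or poor regions are discarded early at coarse resolution, so the number of evaluations is governed by how much of the space looks near-optimal, matching the role the intrinsic quantities d_z and β play in the stated budget bound.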

