FLO: Fast and Lightweight Hyperparameter Optimization for AutoML

11/12/2019
by Chi Wang, et al.

Integrating ML models into software is of growing interest. Building accurate models requires the right choice of hyperparameters for the training procedures (learners) on a given training dataset. AutoML tools provide APIs to automate this choice, which usually involves many trials of different hyperparameters for a given training dataset. Since training and evaluating complex models can be time- and resource-consuming, existing AutoML solutions require a long time or large resources to produce accurate models for large-scale training data. This prevents AutoML from being embedded in software that needs to repeatedly tune hyperparameters and produce models to be consumed by other components, such as large-scale data systems. We present FLO, a fast and lightweight hyperparameter optimization method, and use it to build an efficient AutoML solution. Our method optimizes for minimal evaluation cost rather than the number of iterations needed to find accurate models. The main idea is to take a holistic view of the relations among model complexity, evaluation cost, and accuracy. FLO has strong anytime performance and significantly outperforms Bayesian optimization and random search for hyperparameter tuning on a large open-source AutoML Benchmark. Our AutoML solution also outperforms top-ranked AutoML libraries on a majority of the tasks in this benchmark.
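The abstract describes the method's idea only at a high level; the full algorithm is in the paper. As a reading aid, here is a minimal Python sketch of a cost-frugal search loop in that spirit: start from the lowest-complexity (cheapest-to-evaluate) configuration and accept a move only when it improves validation loss, so evaluation cost grows only when the accuracy gain pays for it. The search space, the neighbor step rule, and the toy evaluate function are illustrative assumptions, not FLO's actual procedure.

```python
import random

# Hypothetical search space: both hyperparameters increase model
# complexity, and with it both training cost and potential accuracy.
SPACE = {
    "n_estimators": (4, 1024),
    "max_leaves": (4, 1024),
}

def evaluate(config):
    """Toy stand-in for training + validation: loss shrinks and cost
    grows with model complexity. Replace with real training in practice."""
    complexity = config["n_estimators"] * config["max_leaves"]
    loss = 1.0 / (1.0 + 0.001 * complexity) + random.gauss(0.0, 0.01)
    cost = 0.001 * complexity  # pretend training time in seconds
    return loss, cost

def neighbor(config, step=2.0):
    """Scale one randomly chosen hyperparameter up or down, staying in bounds."""
    key = random.choice(list(SPACE))
    lo, hi = SPACE[key]
    factor = step if random.random() < 0.5 else 1.0 / step
    new = dict(config)
    new[key] = int(min(hi, max(lo, config[key] * factor)))
    return new

def cost_frugal_search(budget_seconds):
    # Start at the cheapest corner of the space: minimal model complexity.
    config = {k: lo for k, (lo, _hi) in SPACE.items()}
    best_loss, spent = evaluate(config)
    best_config = config
    while spent < budget_seconds:
        candidate = neighbor(config)
        loss, cost = evaluate(candidate)
        spent += cost
        if loss < best_loss:
            # Accept only on improvement: the walk drifts toward costlier
            # configurations only when they actually help.
            best_loss, best_config, config = loss, candidate, candidate
    return best_config, best_loss

best_config, best_loss = cost_frugal_search(budget_seconds=60.0)
print(best_config, round(best_loss, 3))
```

Starting cheap is what gives the search its anytime character: at every point in the budget there is a best-so-far model that was affordable to train.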
