Rethinking Default Values: a Low Cost and Efficient Strategy to Define Hyperparameters

07/31/2020
by Rafael Gomes Mantovani, et al.

Machine Learning (ML) algorithms have been successfully employed by a vast range of practitioners with different backgrounds. One of the reasons for ML popularity is its capability to consistently deliver accurate results, which can be further boosted by adjusting hyperparameters (HP). However, many practitioners have limited knowledge about the algorithms and do not take advantage of suitable HP settings. In general, HP values are defined by trial and error, by tuning, or by using default values. Trial and error is highly subjective, time-consuming, and dependent on the user's experience. Tuning techniques search for HP values that maximize the predictive performance of induced models on a given dataset, but at the cost of high computation and target specificity. To avoid tuning costs, practitioners use default values suggested by the algorithm developer or by the tools implementing the algorithm. Although default values usually yield models with acceptable predictive performance, different implementations of the same algorithm can suggest distinct default values. To balance tuning against the use of default values, we propose a strategy to generate new optimized default values. Our approach is grounded on a small set of optimized values able to deliver better predictive performance than the default settings provided by popular tools. The HP candidates are estimated from a pool of promising values tuned on a small and informative set of datasets. After performing a large experiment and a careful analysis of the results, we concluded that our approach delivers better default values. Moreover, it leads to solutions that are competitive with tuned values while being easier to use and having a lower cost. Based on our results, we also extracted simple rules to guide practitioners in deciding whether to use our new methodology or a tuning approach.
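To make the selection idea concrete, the sketch below (a minimal illustration, not the authors' actual experimental setup) scores a small pool of candidate SVM hyperparameter settings across a few datasets and keeps the one with the best average cross-validated accuracy as a new shared default. The candidate pool, the datasets, and the choice of SVM as the learner are all illustrative assumptions.

```python
# Illustrative sketch: choose a "shared default" HP setting by scoring a
# pool of candidate values over several datasets and keeping the setting
# with the best average cross-validated accuracy.
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical candidate pool; in the paper the candidates come from
# values tuned on a small, informative collection of datasets.
candidate_pool = [
    {"C": 1.0, "gamma": "scale"},   # a typical library default
    {"C": 10.0, "gamma": 0.01},
    {"C": 100.0, "gamma": 0.001},
    {"C": 2.0**5, "gamma": 2.0**-5},
]

datasets = [load_iris(), load_wine(), load_breast_cancer()]

def mean_score(params):
    """Average 5-fold CV accuracy of one HP setting over all datasets."""
    scores = []
    for data in datasets:
        model = make_pipeline(StandardScaler(), SVC(**params))
        scores.append(cross_val_score(model, data.data, data.target, cv=5).mean())
    return sum(scores) / len(scores)

# The best-scoring candidate becomes the new suggested default.
best = max(candidate_pool, key=mean_score)
print("New shared default:", best)
```

The design choice this illustrates is the one the abstract describes: rather than tuning per dataset, a single setting that performs well on average across a representative set of datasets is promoted to a default, trading some per-dataset optimality for a much lower cost.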

Related research:

- A meta-learning recommender system for hyperparameter tuning: predicting when tuning improves SVM classifiers (06/04/2019). For many machine learning algorithms, predictive performance is critical...
- An empirical study on hyperparameter tuning of decision trees (12/05/2018). Machine learning algorithms often contain many hyperparameters whose val...
- High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms (07/13/2022). Hyperparameters in machine learning (ML) have received a fair amount of ...
- Feature Encodings for Gradient Boosting with Automunge (09/25/2022). Selecting a default feature encoding strategy for gradient boosted learn...
- Efficient Optimization of Echo State Networks for Time Series Datasets (03/12/2019). Echo State Networks (ESNs) are recurrent neural networks that only train...
- Automatic Componentwise Boosting: An Interpretable AutoML System (09/12/2021). In practice, machine learning (ML) workflows require various different s...
- MindOpt Tuner: Boost the Performance of Numerical Software by Automatic Parameter Tuning (07/16/2023). Numerical software is usually shipped with built-in hyperparameters. By ...
