A Modified Bayesian Optimization based Hyper-Parameter Tuning Approach for Extreme Gradient Boosting

04/10/2020
by   Sayan Putatunda, et al.

It has been reported in the literature that the performance of a machine learning algorithm is greatly impacted by proper hyper-parameter optimization. One way to perform hyper-parameter optimization is manual search, but that is time consuming. Common approaches for hyper-parameter optimization are Grid search, Random search, and Bayesian optimization using Hyperopt. In this paper, we propose a new approach for hyper-parameter optimization, namely Randomized-Hyperopt, and then tune the hyper-parameters of XGBoost, i.e. the Extreme Gradient Boosting algorithm, on ten datasets by applying Random search, Randomized-Hyperopt, Hyperopt and Grid Search. The performances of these four techniques are compared by taking both prediction accuracy and execution time into consideration. We find that Randomized-Hyperopt performs better than the other three conventional methods for hyper-parameter optimization of XGBoost.
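To illustrate the trade-off the abstract describes, the sketch below contrasts Grid search (exhaustive evaluation of every combination) with Random search (a fixed sampling budget) over a hyper-parameter space. It uses a toy objective function as a stand-in for XGBoost's cross-validated error; the parameter names, value grids, and objective are illustrative assumptions, not the paper's actual experimental setup.

```python
import itertools
import random

# Toy stand-in objective: in the paper this role is played by XGBoost's
# cross-validated prediction error for a given hyper-parameter setting.
def objective(learning_rate, max_depth):
    return (learning_rate - 0.1) ** 2 + (max_depth - 6) ** 2 * 0.01

# Hypothetical search space over two common XGBoost hyper-parameters.
space = {
    "learning_rate": [0.01, 0.05, 0.1, 0.2, 0.3],
    "max_depth": [3, 4, 5, 6, 7, 8],
}

# Grid search: exhaustively evaluate all 5 x 6 = 30 combinations.
grid = [dict(zip(space, vals)) for vals in itertools.product(*space.values())]
best_grid = min(grid, key=lambda p: objective(**p))

# Random search: evaluate only a fixed budget of sampled configurations.
random.seed(0)
samples = [{k: random.choice(v) for k, v in space.items()} for _ in range(10)]
best_rand = min(samples, key=lambda p: objective(**p))

print("grid:  ", best_grid, "evaluations:", len(grid))
print("random:", best_rand, "evaluations:", len(samples))
```

Random search evaluates a third of the configurations here, which is the execution-time saving that motivates sampling-based methods; Hyperopt and the proposed Randomized-Hyperopt go further by using Bayesian optimization to choose which configurations to evaluate next.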


