Learning to Tune XGBoost with XGBoost

09/16/2019
by Johanna Sommer, et al.

In this short paper we investigate whether meta-learning techniques can be used to more effectively tune the hyperparameters of machine learning models using successive halving (SH). We propose a novel variant of the SH algorithm (MeSH) that uses meta-regressors to determine which candidate configurations should be eliminated at each round. We apply MeSH to the problem of tuning the hyperparameters of a gradient-boosted decision tree model. By training and tuning our meta-regressors using existing tuning jobs from 95 datasets, we demonstrate that MeSH can often find a superior solution to both SH and random search.
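
The abstract describes MeSH only at a high level. As an illustration, the following is a minimal Python sketch of the core idea: at each halving round, a per-round meta-regressor predicts each candidate's final score from its partial learning curve, and that prediction (rather than the score observed at the current budget, as in vanilla SH) decides which configurations survive. The function names, the one-regressor-per-round structure, and the geometric budget schedule are assumptions for illustration, not the authors' implementation.

import numpy as np

# Hypothetical sketch of MeSH-style successive halving. All names here
# (train_and_score, meta_regressors, the budget schedule) are illustrative
# assumptions based on the abstract, not the paper's actual implementation.
def mesh(configs, train_and_score, meta_regressors, eta=3, rounds=4):
    curves = {i: [] for i in range(len(configs))}   # partial learning curves
    live = list(range(len(configs)))                # surviving candidates
    budget = 1
    for r in range(rounds):
        # Evaluate every surviving configuration at the current budget.
        for i in live:
            curves[i].append(train_and_score(configs[i], budget))
        if r < rounds - 1:
            # Unlike vanilla SH, rank candidates by the meta-regressor's
            # *predicted final-round* score, not the current observed score.
            preds = {i: meta_regressors[r].predict(
                        np.asarray(curves[i]).reshape(1, -1))[0]
                     for i in live}
            live = sorted(live, key=preds.get, reverse=True)
            live = live[:max(1, len(live) // eta)]   # keep the top 1/eta
        budget *= eta   # geometrically increase the training budget
    # Return the configuration with the best fully trained score.
    return configs[max(live, key=lambda i: curves[i][-1])]

As the title suggests, the meta-regressors could themselves be gradient-boosted models, so each entry of meta_regressors might in practice be a fitted xgboost.XGBRegressor; that choice is an inference from the title, not something the abstract states.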
