An XGBoost risk model via feature selection and Bayesian hyper-parameter optimization

by Yan Wang et al.

This paper explores models based on the extreme gradient boosting (XGBoost) approach for business risk classification. Feature selection (FS) algorithms and hyper-parameter optimization are considered jointly during model training. The five most commonly used FS methods, weight by Gini, weight by Chi-square, hierarchical variable clustering, weight by correlation, and weight by information, are applied to alleviate the effect of redundant features. Two hyper-parameter optimization approaches, random search (RS) and the Bayesian tree-structured Parzen Estimator (TPE), are applied to XGBoost. The effects of the different FS and hyper-parameter optimization methods on model performance are assessed with the Wilcoxon Signed Rank Test. The performance of XGBoost is compared to that of the traditional logistic regression (LR) model in terms of classification accuracy, area under the curve (AUC), recall, and F1 score obtained from 10-fold cross-validation. Results show that hierarchical clustering is the optimal FS method for LR, while weight by Chi-square achieves the best performance with XGBoost. XGBoost with either TPE or RS optimization significantly outperforms LR. TPE optimization is superior to RS, yielding a significantly higher accuracy and a marginally higher AUC, recall, and F1 score. Furthermore, XGBoost with TPE tuning shows lower variability than with RS. Finally, the ranking of feature importance from XGBoost enhances model interpretation. XGBoost with Bayesian TPE hyper-parameter optimization therefore serves as an effective and powerful approach for business risk modeling.
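The workflow described in the abstract, feature selection, hyper-parameter search over 10-fold cross-validation, and a Wilcoxon signed-rank comparison of per-fold scores against a logistic-regression baseline, can be sketched as follows. This is an illustrative sketch, not the authors' code: xgboost and a TPE library (e.g. hyperopt) are not assumed installed, so scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, only the random-search arm of the tuning is shown, and the dataset is synthetic.

```python
# Sketch of the paper's pipeline: Chi-square feature weighting, random-search
# hyper-parameter tuning with 10-fold CV, and a Wilcoxon signed-rank test on
# per-fold AUCs. GradientBoostingClassifier is a stand-in for XGBoost.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)

# Chi-square scoring requires non-negative features, so scale to [0, 1] first.
fs = Pipeline([("scale", MinMaxScaler()),
               ("select", SelectKBest(chi2, k=10))])

# Random-search arm: tune the boosted-tree model over a small grid, 10-fold CV.
search = RandomizedSearchCV(
    Pipeline([("fs", fs),
              ("gbm", GradientBoostingClassifier(random_state=0))]),
    param_distributions={
        "gbm__n_estimators": [50, 100, 200],
        "gbm__max_depth": [2, 3, 4],
        "gbm__learning_rate": [0.05, 0.1, 0.2],
    },
    n_iter=5, cv=10, scoring="roc_auc", random_state=0,
)
search.fit(X, y)

# Baseline: logistic regression behind the same FS, scored on the same folds.
lr = Pipeline([("fs", fs), ("lr", LogisticRegression(max_iter=1000))])
gbm_scores = cross_val_score(search.best_estimator_, X, y, cv=10,
                             scoring="roc_auc")
lr_scores = cross_val_score(lr, X, y, cv=10, scoring="roc_auc")

# Paired, non-parametric comparison of per-fold AUCs, as in the paper.
stat, p = wilcoxon(gbm_scores, lr_scores)
print(f"GBM mean AUC {gbm_scores.mean():.3f}  "
      f"LR mean AUC {lr_scores.mean():.3f}  p = {p:.3f}")
```

Swapping the `RandomizedSearchCV` step for a TPE-driven loop (hyperopt's `fmin` with `tpe.suggest`, or Optuna's default sampler) reproduces the Bayesian arm of the comparison; everything else stays the same.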




