Performance and Interpretability Comparisons of Supervised Machine Learning Algorithms: An Empirical Study

04/27/2022
by   Alice J. Liu, et al.

This paper compares the performance of three supervised machine learning algorithms in terms of predictive ability and model interpretation on structured or tabular data. The algorithms considered were scikit-learn implementations of extreme gradient boosting machines (XGB) and random forests (RFs), and feedforward neural networks (FFNNs) from TensorFlow. The paper is organized in a findings-based manner, with each section providing general conclusions supported by empirical results from simulation studies that cover a wide range of model complexity and correlation structures among predictors. We considered both continuous and binary responses of different sample sizes. Overall, XGB and FFNNs were competitive, with FFNNs showing better performance in smooth models and tree-based boosting algorithms performing better in non-smooth models. This conclusion held generally for predictive performance, identification of important variables, and determining correct input-output relationships as measured by partial dependence plots (PDPs). FFNNs generally had less over-fitting, as measured by the difference in performance between training and testing datasets, although the difference with XGB was often small. RFs did not perform well in general, confirming the findings in the literature. All models exhibited different degrees of bias seen in PDPs, but the bias was especially problematic for RFs. The extent of the biases varied with correlation among predictors, response type, and dataset sample size. In general, tree-based models tended to over-regularize the fitted model in the tails of predictor distributions. Finally, as expected, performance was better for continuous responses than for binary data and with larger samples.


