Explanation of Machine Learning Models Using Shapley Additive Explanation and Application for Real Data in Hospital

12/21/2021
by Yasunobu Nohara, et al.

When machine learning techniques are used in decision-making processes, the interpretability of the models is important. In the present paper, we adopt the Shapley additive explanation (SHAP), which is based on the fair allocation of profit among many stakeholders according to their contributions, to interpret a gradient-boosting decision tree model trained on hospital data. For better interpretability, we propose two novel techniques: (1) a new metric of feature importance based on SHAP and (2) a technique termed feature packing, which packs multiple similar features into one grouped feature, allowing easier understanding of the model without reconstructing it. We then compare the explanation results of the SHAP framework with those of existing methods. In addition, using our hospital data and the proposed techniques, we show that the albumin/globulin (A/G) ratio works as an important prognostic factor for cerebral infarction.
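The core ideas in the abstract can be sketched in code. The snippet below is a minimal, hypothetical illustration (the paper itself applies TreeSHAP to a gradient-boosting model on real hospital data): it computes exact Shapley values by brute-force enumeration of feature coalitions for a toy linear "model", then demonstrates feature packing by summing the attributions of grouped features. All feature names, coefficients, and baseline values here are invented for illustration.

```python
from itertools import combinations
from math import factorial

# Toy "model": a linear function of three hospital-style features.
# Names, coefficients, and baselines are hypothetical, chosen only to
# illustrate the attribution mechanics.
FEATURES = ["albumin", "globulin", "age"]
COEF = {"albumin": -2.0, "globulin": 1.5, "age": 0.05}
BASELINE = {"albumin": 4.0, "globulin": 3.0, "age": 60.0}  # reference patient

def model(x):
    return sum(COEF[f] * x[f] for f in FEATURES)

def value(instance, subset):
    """Model output with features outside `subset` fixed at the baseline.

    This plays the role of the coalition value function v(S) in the
    Shapley formulation: only features in S take the patient's values.
    """
    x = {f: (instance[f] if f in subset else BASELINE[f]) for f in FEATURES}
    return model(x)

def shapley_values(instance):
    """Exact Shapley value of each feature via subset enumeration.

    phi_i = sum over S not containing i of
            |S|! (n - |S| - 1)! / n! * (v(S + {i}) - v(S))
    Exponential in the number of features; TreeSHAP avoids this cost
    for tree ensembles, but the result is the same fair allocation.
    """
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                gain = value(instance, set(subset) | {f}) - value(instance, set(subset))
                total += weight * gain
        phi[f] = total
    return phi

patient = {"albumin": 3.2, "globulin": 3.8, "age": 72.0}
phi = shapley_values(patient)

# Additivity: the attributions sum to (model output - baseline output).
# "Feature packing" exploits exactly this property: the contribution of a
# grouped feature (e.g. the two serum proteins behind the A/G ratio) is
# simply the sum of its members' SHAP values -- no model retraining needed.
packed = {
    "serum proteins (albumin+globulin)": phi["albumin"] + phi["globulin"],
    "age": phi["age"],
}
```

In practice one would obtain `phi` from a tree explainer (e.g. the `shap` library's `TreeExplainer` applied to the trained gradient-boosting model) rather than by enumeration; the packing step, however, stays the same because SHAP values are additive by construction.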


