Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects

08/05/2022
by Yasunobu Nohara, et al.

When using machine learning techniques in decision-making processes, the interpretability of the models is important. Shapley additive explanation (SHAP) is one of the most promising interpretation methods for machine learning models. Interaction effects occur when the effect of one variable depends on the value of another variable. Even if each variable individually has little effect on the outcome, a combination of variables can have an unexpectedly large impact. Understanding interactions is therefore important for understanding machine learning models; however, naive SHAP analysis cannot distinguish between main effects and interaction effects. In this paper, we introduce the Shapley-Taylor index as an interpretation method for machine learning models using SHAP that takes interaction effects into account. We apply the method to the cancer cohort data of Kyushu University Hospital (N=29,080) to analyze which combinations of factors contribute to the risk of colon cancer.
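The distinction the abstract draws can be made concrete with a toy two-feature example. The value function `v` below and its numbers are illustrative assumptions, not taken from the paper: each feature alone contributes 1 to the model output, but together they contribute 5, so there is a strong interaction. Naive SHAP silently splits that interaction between the two features, while a Shapley-Taylor-style decomposition reports it as a separate pairwise term.

```python
# Toy value function: v(S) = model output when only the features in S are
# "present". The specific numbers are made up for illustration.
def v(S):
    S = frozenset(S)
    if S == frozenset():
        return 0.0
    if S in (frozenset({1}), frozenset({2})):
        return 1.0
    return 5.0  # both features together: strong interaction

# Classic SHAP value for feature i with two players:
# the average of its marginal contributions over both orderings.
def shap_value(i, players=frozenset({1, 2})):
    other = next(iter(players - {i}))
    return 0.5 * ((v({i}) - v(set())) + (v({i, other}) - v({other})))

phi_1 = shap_value(1)  # 2.5: main effect 1.0 plus half the interaction
phi_2 = shap_value(2)  # 2.5

# Shapley-Taylor decomposition of order 2 for two features:
# singleton (main) terms are first-order discrete derivatives at the
# empty set, and the pairwise term is the second-order discrete derivative.
main_1 = v({1}) - v(set())                                # 1.0
main_2 = v({2}) - v(set())                                # 1.0
interaction_12 = v({1, 2}) - v({1}) - v({2}) + v(set())   # 3.0

# Both decompositions account for the same total, v({1,2}) - v({}):
print(phi_1, phi_2)                     # naive SHAP hides the interaction
print(main_1, main_2, interaction_12)   # interaction reported separately
```

Both attributions sum to 5.0, but only the second view reveals that most of the output comes from the two features acting together rather than from either one alone.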


