A k-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning

11/03/2022
by Guilherme Dean Pelegrina, et al.

Besides accuracy, recent studies on machine learning models have also addressed the question of how the obtained results can be interpreted. Indeed, while complex machine learning models achieve very good accuracy even in challenging applications, they are difficult to interpret. Aiming at providing some interpretability for such models, one of the best-known methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome for an instance of interest. As the calculation of SHAP values requires prior computations on all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. Firstly, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of k-additive games from game theory, which helps reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
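To make the combinatorial cost concrete, the sketch below computes exact Shapley values by enumerating all 2^(n-1) coalitions per player, using the classical weighting formula. This is a generic illustration of why exact computation scales exponentially, not the paper's k-additive method; the additive toy value function `v` and the `weights` vector are assumptions chosen for demonstration.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for an n-player cooperative game.

    value: function mapping a frozenset coalition S to its worth v(S)
    n: number of players (features)
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # Enumerate every coalition S not containing player i.
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # Classical Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of i to coalition S.
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy additive game: v(S) = sum of per-player weights (an assumption for
# illustration). For additive games the Shapley values equal the weights.
weights = [3.0, 1.0, 2.0]
v = lambda S: sum(weights[j] for j in S)
print(shapley_values(v, 3))
```

For n features this loop evaluates `value` on every coalition, which is the exponential cost that sampling-based approximations such as Kernel SHAP, and the k-additive restriction studied here, aim to avoid.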

Related research:

- Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects (08/05/2022). When using machine learning techniques in decision-making processes, the...
- Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values (06/30/2022). Machine learning (ML) interpretability techniques can reveal undesirable...
- The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory (09/17/2019). Recently, a number of techniques have been proposed to explain a machine...
- An interpretable machine learning framework for modelling human decision behavior (06/04/2019). Machine learning has recently been widely adopted to address the manager...
- On Interpretability and Similarity in Concept-Based Machine Learning (02/25/2021). Machine Learning (ML) provides important techniques for classification a...
- RKHS-SHAP: Shapley Values for Kernel Methods (10/18/2021). Feature attribution for kernel methods is often heuristic and not indivi...
- Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs (09/16/2023). Shapley value is originally a concept in econometrics to fairly distribu...
