groupShapley: Efficient prediction explanation with Shapley values for feature groups

06/23/2021
by Martin Jullum, et al.

Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from complex machine learning models. Their popularity in the explanation setting is likely due to their unique theoretical properties. The main drawback of Shapley values, however, is that their computational complexity grows exponentially in the number of input features, making them infeasible in many real-world situations with hundreds or thousands of features. Furthermore, with many (dependent) features, presenting, visualizing, and interpreting the computed Shapley values also becomes challenging. The present paper introduces groupShapley, a conceptually simple approach for dealing with both bottlenecks. The idea is to group the features, for example by type or dependence, and then compute and present Shapley values for these groups instead of for the individual features. Reducing hundreds or thousands of features to half a dozen or so makes precise computation practically feasible and greatly simplifies presentation and knowledge extraction. We prove that under certain conditions, groupShapley is equivalent to summing the feature-wise Shapley values within each feature group. Moreover, we provide a simulation study exemplifying the differences when these conditions are not met. Finally, we illustrate the usability of the approach in a real-world car insurance example, where groupShapley provides simple and intuitive explanations.
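To make the idea concrete, here is a minimal sketch of exact Shapley values computed over feature groups rather than individual features. The function name, the group partition, and the marginal-sampling value function are illustrative assumptions, not the paper's implementation; in particular, the paper advocates value functions based on conditional expectations when features are dependent, which this simple sketch does not attempt.

```python
import itertools
import math
import numpy as np

def group_shapley(predict, x, background, groups):
    """Exact Shapley values over feature *groups* for one prediction.

    predict    : callable mapping an (n, p) array to a vector of predictions
    x          : 1-D array of length p, the instance to explain
    background : 2-D (n, p) array of reference observations
    groups     : list of lists of column indices, one list per group
    """
    g = len(groups)

    def value(coalition):
        # Illustrative value function: features in the coalition's groups are
        # fixed to x, the remaining features are drawn from the background
        # data (a marginal approximation that ignores feature dependence).
        data = background.copy()
        for j in coalition:
            data[:, groups[j]] = x[groups[j]]
        return predict(data).mean()

    # Standard Shapley formula, but with groups playing the role of players.
    phi = np.zeros(g)
    for j in range(g):
        others = [k for k in range(g) if k != j]
        for size in range(g):
            weight = (math.factorial(size) * math.factorial(g - size - 1)
                      / math.factorial(g))
            for coalition in itertools.combinations(others, size):
                phi[j] += weight * (value(coalition + (j,)) - value(coalition))
    return phi

# Hypothetical usage with a fitted model exposing a predict() method:
# phi = group_shapley(model.predict, X_test[0], X_train[:100],
#                     groups=[[0, 1], [2, 3, 4], [5]])
```

Because the sum over coalitions runs over groups rather than features, the cost is exponential only in the number of groups, which is what makes exact computation feasible once hundreds of features are reduced to a handful of groups.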

