A Refutation of Shapley Values for Explainability

09/06/2023
by Xuanxiang Huang et al.

Recent work demonstrated the existence of Boolean functions for which Shapley values provide misleading information about the relative importance of features in rule-based explanations. This misleading information was broadly categorized into a number of possible issues. Each issue relates to features being relevant or irrelevant for a prediction, and all are significant with respect to the inadequacy of Shapley values for rule-based explainability. That earlier work devised a brute-force approach to identify Boolean functions, defined on small numbers of features, and associated instances, that exhibit such inadequacy-revealing issues, and so served as evidence of the inadequacy of Shapley values for rule-based explainability. However, an outstanding question is how frequently such inadequacy-revealing issues can occur for Boolean functions with arbitrarily large numbers of features. Clearly, a brute-force approach is unlikely to provide insight into this question. This paper answers the question by proving that, for any number of features, there exist Boolean functions that exhibit one or more inadequacy-revealing issues, thereby contributing decisive arguments against the use of Shapley values as the theoretical underpinning of feature-attribution methods in explainability.
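As an illustration, the following Python sketch mirrors the kind of brute-force search described above for small n. It enumerates every Boolean function on N features via its truth table, computes exact Shapley values under the uniform-distribution conditional-expectation value function used in formal explainability work, derives feature relevance from abductive explanations (AXps), and reports the first instance where an irrelevant feature receives a nonzero Shapley value (one of the inadequacy-revealing issues). This is a minimal sketch, not the authors' implementation; names such as find_issue are illustrative.

```python
from itertools import combinations, product
from math import factorial

N = 3  # number of features to search over (2**(2**N) candidate functions)

def shapley_values(f, x, n):
    """Exact Shapley values with value function v(S) = E[f(z) | z_S = x_S],
    the expectation taken over uniformly distributed Boolean inputs."""
    feats = range(n)

    def v(S):
        free = [i for i in feats if i not in S]
        total = 0
        for bits in product([0, 1], repeat=len(free)):
            z = list(x)
            for i, b in zip(free, bits):
                z[i] = b
            total += f(tuple(z))
        return total / 2 ** len(free)

    phi = []
    for i in feats:
        rest = [j for j in feats if j != i]
        s = 0.0
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                s += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(s)
    return phi

def relevant_features(f, x, n):
    """A feature is relevant iff it occurs in some abductive explanation
    (AXp): a subset-minimal set of features whose values entail f(x)."""
    feats = range(n)

    def sufficient(S):  # does fixing x_S force the prediction f(x)?
        free = [i for i in feats if i not in S]
        for bits in product([0, 1], repeat=len(free)):
            z = list(x)
            for i, b in zip(free, bits):
                z[i] = b
            if f(tuple(z)) != f(x):
                return False
        return True

    suff = [frozenset(S) for k in range(n + 1)
            for S in combinations(feats, k) if sufficient(S)]
    axps = [S for S in suff if not any(T < S for T in suff)]
    return set().union(*axps) if axps else set()

def find_issue(n):
    """Return the first function/instance where an irrelevant feature is
    assigned a nonzero Shapley value."""
    points = list(product([0, 1], repeat=n))
    for tt in product([0, 1], repeat=2 ** n):  # every truth table on n vars
        f = dict(zip(points, tt)).__getitem__
        for x in points:
            rel = relevant_features(f, x, n)
            phi = shapley_values(f, x, n)
            for i in range(n):
                if i not in rel and abs(phi[i]) > 1e-9:
                    return tt, x, i, phi[i]
    return None

hit = find_issue(N)
if hit is not None:
    tt, x, i, p = hit
    print(f"truth table {tt}, instance {x}: "
          f"irrelevant feature {i} has Shapley value {p:+.4f}")
```

Such a search succeeds already at n = 3, but its cost grows doubly exponentially with the number of features, which is why the paper replaces it with a proof that inadequacy-revealing functions exist for every n.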


