On the Tractability of SHAP Explanations

09/18/2020
by Guy Van den Broeck, et al.

SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite significant recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable in a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard.
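To make the fully-factorized setting concrete, the sketch below computes exact SHAP values by brute force: the value of a coalition S is the conditional expectation E[f(X) | X_S = x_S], which under feature independence is a product-weighted average over completions of the remaining features. This is an illustrative, exponential-time enumeration (the paper's point is precisely that no general efficient algorithm is known for such models), and the model and marginals in the example are hypothetical, not from the paper.

```python
from itertools import combinations, product
from math import factorial

def shap_values(f, x, marginals):
    """Brute-force SHAP values of instance x for model f under a
    fully-factorized (independent) distribution over binary features.

    f:         function mapping a full 0/1 feature tuple to a number
    x:         the instance being explained (tuple of 0/1)
    marginals: marginals[i] = P(X_i = 1), features independent
    """
    n = len(x)
    features = range(n)

    def value(S):
        # v(S) = E[f(X) | X_S = x_S]: fix the features in S to their
        # values in x, and average f over the free features, weighting
        # each completion by its product of marginal probabilities.
        free = [i for i in features if i not in S]
        total = 0.0
        for bits in product([0, 1], repeat=len(free)):
            z = list(x)
            p = 1.0
            for i, b in zip(free, bits):
                z[i] = b
                p *= marginals[i] if b else 1.0 - marginals[i]
            total += p * f(tuple(z))
        return total

    phis = []
    for i in features:
        others = [j for j in features if j != i]
        phi = 0.0
        for k in range(n):
            # Shapley weight for coalitions of size k not containing i
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy example: a conjunction f(x) = x0 AND x1, uniform marginals.
f = lambda z: z[0] * z[1]
phis = shap_values(f, (1, 1), [0.5, 0.5])
# By symmetry phi_0 = phi_1, and the values sum to f(x) - E[f(X)].
```

Note that the SHAP efficiency axiom holds by construction: the attributions sum to f(x) minus the model's expectation v(∅), which for this toy conjunction is 1 − 0.25 = 0.75.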


Related research

07/11/2022 · On Computing Relevant Features for Explaining NBCs
Despite the progress observed with model-agnostic explainable AI (XAI), ...

03/05/2019 · What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features
While discriminative classifiers often yield strong predictive performan...

08/13/2020 · Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Recent work proposed the computation of so-called PI-explanations of Nai...

09/29/2020 · The Shapley Value of Inconsistency Measures for Functional Dependencies
Quantifying the inconsistency of a database is motivated by various goal...

12/12/2022 · On Computing Probabilistic Abductive Explanations
The most widely studied explainable AI (XAI) approaches are unsound. Thi...

06/09/2022 · A Learning-Theoretic Framework for Certified Auditing of Machine Learning Models
Responsible use of machine learning requires that models be audited for ...

10/13/2020 · Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter
Understanding predictions made by Machine Learning models is critical in...
