Variable importance without impossible data

05/31/2022
by Masayoshi Mase et al.

The most popular methods for measuring the importance of variables in a black box prediction algorithm make use of synthetic inputs that combine predictor variables from multiple subjects. These inputs can be unlikely, physically impossible, or even logically impossible. As a result, the predictions for such cases can be based on data very unlike any the black box was trained on. We think that users cannot trust an explanation of the decision of a prediction algorithm when the explanation uses such values. Instead, we advocate a method called Cohort Shapley that is grounded in economic game theory and, unlike most other game-theoretic methods, uses only actually observed data to quantify variable importance. Cohort Shapley works by narrowing the cohort of subjects judged to be similar to a target subject on one or more features. A feature is important if using it to narrow the cohort makes a large difference to the cohort mean. We illustrate it on an algorithmic fairness problem where it is essential to attribute importance to protected variables that the model was not trained on. For every subject and every predictor variable, we can compute the importance of that predictor to the subject's predicted response or to their actual response. These values can be aggregated, for example over all Black subjects, and we propose a Bayesian bootstrap to quantify uncertainty in both individual and aggregate Shapley values.

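To make the cohort-narrowing idea above concrete, here is a minimal Python sketch, not the authors' implementation. It assumes exact-match similarity between feature values and computes exact Shapley values of the cohort-mean game by enumerating all feature subsets; the function name `cohort_shapley` and the `similar` argument are illustrative choices.

```python
import numpy as np
from itertools import combinations
from math import factorial

def cohort_shapley(X, y, target, similar=None):
    """Exact Cohort Shapley values for one target subject (illustrative sketch).

    X      : (n, d) array of observed predictor values
    y      : (n,) array of responses (observed values or model predictions)
    target : row index of the subject being explained
    similar: callable (feature index j, column of X, target's value) -> boolean
             mask of subjects judged similar to the target on feature j.
             Defaults to exact equality, a simplifying assumption.
    """
    n, d = X.shape
    t = X[target]
    if similar is None:
        similar = lambda j, col, tval: col == tval

    # sim[i, j] is True when subject i is similar to the target on feature j.
    sim = np.column_stack([similar(j, X[:, j], t[j]) for j in range(d)])

    def cohort_mean(S):
        # Value of coalition S: mean response over the cohort of subjects
        # similar to the target on every feature in S (empty S = grand mean).
        mask = np.ones(n, dtype=bool)
        for j in S:
            mask &= sim[:, j]
        return y[mask].mean()

    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                # Standard Shapley weight |S|! (d - |S| - 1)! / d!
                w = factorial(r) * factorial(d - r - 1) / factorial(d)
                phi[j] += w * (cohort_mean(S + (j,)) - cohort_mean(S))
    return phi
```

By Shapley efficiency, the per-feature values returned here sum to the target's fully narrowed cohort mean minus the grand mean, and every cohort mean is computed from observed rows of X only, so no synthetic feature combinations are ever evaluated. The exhaustive subset enumeration costs O(2^d) cohort evaluations, so this sketch is only practical for a small number of features; the Bayesian bootstrap mentioned in the abstract would additionally resample subject weights, which is not shown here.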