From Shapley back to Pearson: Hypothesis Testing via the Shapley Value

07/14/2022
by Jacopo Teneggi, et al.

Machine learning models, and artificial neural networks in particular, are increasingly used to inform decision making in high-stakes scenarios across a variety of fields, from financial services to public safety and healthcare. While neural networks have achieved remarkable performance in many settings, their complexity raises concerns about their reliability, trustworthiness, and fairness in real-world deployments. As a result, several a-posteriori explanation methods have been proposed to highlight the features that influence a model's prediction. Notably, the Shapley value, a game-theoretic quantity that satisfies several desirable properties, has gained popularity in the machine learning explainability literature. More traditionally, however, feature importance in statistical learning has been formalized via conditional independence, and a standard way to test for it is the Conditional Randomization Test (CRT). So far, these two perspectives on interpretability and feature importance have been treated as distinct and separate. In this work, we show that Shapley-based explanation methods and conditional independence testing for feature importance are closely related. More precisely, we prove that evaluating a Shapley coefficient amounts to performing a specific set of conditional independence tests, implemented by a procedure similar to the CRT but for a different null hypothesis. Moreover, the obtained game-theoretic values upper bound the p-values of those tests. As a result, large Shapley coefficients acquire a precise statistical sense of importance with controlled Type I error.
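To make the CRT procedure referenced above concrete, here is a minimal sketch of a model-X conditional randomization test for a single feature. This is an illustrative implementation, not the paper's algorithm: the function names (`crt_p_value`, `sample_conditional`), the choice of negative squared-error loss as the test statistic, and the assumption that the conditional distribution of the tested feature is known are all stand-ins for exposition.

```python
import numpy as np

def crt_p_value(model, X, y, j, sample_conditional, rng, n_resamples=200):
    """Model-X CRT p-value for H0: X_j is independent of y given X_{-j}.

    sample_conditional(X, j, rng) must draw fresh copies of X_j from its
    conditional distribution given the other features, which the model-X
    framework assumes is known (or well estimated).
    """
    def statistic(X_in):
        # Any fixed test statistic works; here, negative squared-error loss,
        # so larger values mean the features explain y better.
        return -np.mean((model(X_in) - y) ** 2)

    t_obs = statistic(X)
    t_null = []
    for _ in range(n_resamples):
        X_tilde = X.copy()
        X_tilde[:, j] = sample_conditional(X, j, rng)  # break X_j's link to y
        t_null.append(statistic(X_tilde))
    # One-sided p-value with the +1 correction for finite-sample validity.
    return (1 + sum(t >= t_obs for t in t_null)) / (1 + n_resamples)

# Toy usage: two independent Gaussian features, y depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(500)
model = lambda X_in: 2.0 * X_in[:, 0]
# Features are independent here, so the conditional law of X_j is N(0, 1).
sample_conditional = lambda X_in, j, rng: rng.standard_normal(len(X_in))

p0 = crt_p_value(model, X, y, 0, sample_conditional, rng)  # important feature
p1 = crt_p_value(model, X, y, 1, sample_conditional, rng)  # null feature
```

Under this setup the test rejects for feature 0 (small p-value) and retains the null for feature 1, matching the intuition that resampling an important feature from its conditional distribution degrades the test statistic, while resampling an irrelevant one does not.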

