Shapley Chains: Extending Shapley Values to Classifier Chains

03/30/2023
by Célia Wafa Ayad, et al.

Despite increased attention on explainable machine learning models, explaining multi-output predictions has not yet been extensively addressed. Methods that use Shapley values to attribute feature contributions to predictions are among the most popular approaches for explaining individual predictions locally and model behavior globally. By considering each output separately in multi-output tasks, these methods fail to provide complete feature explanations. We propose Shapley Chains to overcome this issue by including label interdependencies in the explanation design process. Shapley Chains assign Shapley values as feature importance scores in multi-output classification using classifier chains, separating the direct and indirect influence of these features. Compared to existing methods, this approach attributes a more complete feature contribution to the predictions of multi-output classification tasks. We provide a mechanism to distribute the hidden contributions of the outputs with respect to a given chaining order of these outputs. Moreover, we show how our approach can reveal indirect feature contributions missed by existing approaches. Shapley Chains help to emphasize the real learning factors in multi-output applications and allow a better understanding of the flow of information through output interdependencies in synthetic and real-world datasets.
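The core idea — computing per-output Shapley values and then redistributing the attribution assigned to a chained prediction back to the upstream features that produced it — can be sketched on a toy two-link chain. The linear models, baseline values, and proportional redistribution rule below are illustrative assumptions for this sketch, not the paper's exact procedure:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for n players, by enumerating all coalitions."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Hypothetical chain: y1 depends on x1; y2 depends on x2 and the predicted y1.
def predict_y1(x1):
    return 2.0 * x1

def predict_y2(x2, y1):
    return 1.0 * x2 + 3.0 * y1

x = {"x1": 1.0, "x2": 1.0}          # instance to explain
baseline = {"x1": 0.0, "x2": 0.0}   # reference point for "absent" features

y1_hat = predict_y1(x["x1"])
y1_base = predict_y1(baseline["x1"])

# Shapley values for y2, treating (x2, chained prediction y1) as players.
def v_y2(S):
    x2 = x["x2"] if 0 in S else baseline["x2"]
    y1 = y1_hat if 1 in S else y1_base
    return predict_y2(x2, y1)

phi_x2, phi_y1 = shapley_values(v_y2, 2)

# Shapley values for y1 in terms of its own input (just x1 here).
def v_y1(S):
    x1 = x["x1"] if 0 in S else baseline["x1"]
    return predict_y1(x1)

(phi_x1_for_y1,) = shapley_values(v_y1, 1)

# Indirect contribution of x1 to y2: redistribute phi_y1 in proportion to
# x1's share of y1's attribution (trivially 100% with a single feature).
indirect_x1 = phi_y1 * (phi_x1_for_y1 / phi_x1_for_y1)

print(phi_x2, indirect_x1)  # 1.0 6.0
```

A per-output method would report only the direct contributions to y2 (here, x2's 1.0) and attribute the remaining 6.0 to the chained label y1; redistributing it exposes x1's indirect contribution, which is the kind of hidden influence the abstract describes.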

