Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT)

06/15/2021
by   Gunnar König, et al.

Global model-agnostic feature importance measures either quantify whether features are directly used for a model's predictions (direct importance) or whether they contain prediction-relevant information (associative importance). Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables. In contrast, associative importance exposes information leakage but does not provide causal insight into the model's mechanism. We introduce DEDACT, a framework to decompose well-established direct and associative importance measures into their respective associative and direct components. DEDACT provides insight into both the sources of prediction-relevant information in the data and the direct and indirect feature pathways by which that information enters the model. We demonstrate the method's usefulness on simulated examples.
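The gap between the two notions can be illustrated with a toy simulation. Below is a minimal sketch (not the authors' DEDACT implementation): two correlated features, a model that uses only the first, a PFI-style marginal permutation as a stand-in for direct importance, and a univariate-refit proxy as a stand-in for associative importance. The function names and the 0.1 noise scale are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated data: x2 is a noisy copy of x1; the target depends on x1.
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
y = x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2])

def model(X):
    # The fitted model uses only the first feature.
    return X[:, 0]

def mse(pred):
    return np.mean((y - pred) ** 2)

base = mse(model(X))

def direct_importance(j):
    # Marginal permutation (PFI-style): break feature j's link to the
    # target and to the other features, measure the loss increase.
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return mse(model(Xp)) - base

def associative_importance(j):
    # Crude proxy: how much of Var(y) a univariate linear fit on
    # feature j alone explains, regardless of whether the model uses it.
    beta = np.cov(X[:, j], y)[0, 1] / np.var(X[:, j])
    pred = beta * (X[:, j] - X[:, j].mean()) + y.mean()
    return np.var(y) - mse(pred)

print(direct_importance(0), direct_importance(1))
print(associative_importance(0), associative_importance(1))
```

Here x2 has zero direct importance (the model never reads it) but large associative importance (it carries nearly the same information as x1) — exactly the kind of information leakage that a decomposition such as DEDACT is meant to surface.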

Related research

07/16/2020 · Relative Feature Importance
Interpretable Machine Learning (IML) methods are used to gain insight in...

10/01/2019 · Randomized Ablation Feature Importance
Given a model f that predicts a target y from a vector of input features...

12/22/2019 · Direct and Indirect Effects – An Information Theoretic Perspective
Information theoretic (IT) approaches to quantifying causal influences h...

03/24/2023 · Learning Causal Attributions in Neural Networks: Beyond Direct Effects
There has been a growing interest in capturing and maintaining causal re...

08/12/2022 · Unifying local and global model explanations by functional decomposition of low dimensional structures
We consider a global explanation of a regression or classification funct...

02/06/2018 · Granger-causal Attentive Mixtures of Experts
Several methods have recently been proposed to detect salient input feat...

12/10/2020 · On Shapley Credit Allocation for Interpretability
We emphasize the importance of asking the right question when interpreti...
