Relative Feature Importance

07/16/2020
by Gunnar König, et al.

Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model. Commonly used IML methods differ in whether they consider features of interest in isolation, e.g., Permutation Feature Importance (PFI), or in relation to all remaining feature variables, e.g., Conditional Feature Importance (CFI). As such, the perturbation mechanisms inherent to PFI and CFI represent extreme reference points. We introduce Relative Feature Importance (RFI), a generalization of PFI and CFI that allows for a more nuanced feature importance computation beyond the PFI versus CFI dichotomy. With RFI, the importance of a feature relative to any other subset of features can be assessed, including variables that were not available at training time. We derive general interpretation rules for RFI based on a detailed theoretical analysis of the implications of relative feature relevance, and demonstrate the method's usefulness on simulated examples.
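To make the perturbation idea behind PFI, CFI, and RFI concrete, the sketch below estimates the importance of a feature x_j relative to a conditioning set G as the average increase in loss when x_j is resampled given X_G. This is a minimal illustration under stated assumptions, not the authors' implementation: it presumes a fitted regression model and a NumPy feature matrix, uses a crude linear-plus-permuted-residual conditional sampler (the paper does not prescribe this sampler), and the function name rfi is hypothetical.

```python
# Minimal sketch of Relative Feature Importance (RFI) via perturbation.
# Assumptions: regression setting, squared-error loss, and a simple
# linear conditional sampler for X_j | X_G. Names are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def rfi(model, X, y, j, G, n_repeats=10, rng=None):
    """Average risk increase when feature j is resampled relative to the feature set G."""
    rng = np.random.default_rng(rng)
    base_loss = mean_squared_error(y, model.predict(X))

    if len(G) > 0:
        # Crude conditional sampler: keep the part of x_j explained by X_G
        # and permute only the residual. Any conditional sampler could be used.
        sampler = LinearRegression().fit(X[:, G], X[:, j])
        fitted = sampler.predict(X[:, G])
        resid = X[:, j] - fitted

    losses = []
    for _ in range(n_repeats):
        X_pert = X.copy()
        if len(G) == 0:
            # Empty G: marginal permutation, i.e. ordinary PFI.
            X_pert[:, j] = rng.permutation(X[:, j])
        else:
            X_pert[:, j] = fitted + rng.permutation(resid)
        losses.append(mean_squared_error(y, model.predict(X_pert)))

    # Importance = average increase in loss under the perturbation.
    return float(np.mean(losses) - base_loss)

# Example call (indices illustrative): importance of feature 0 relative to feature 2 only.
# score = rfi(fitted_model, X_test, y_test, j=0, G=[2])
```

With G empty the perturbation reduces to the marginal permutation of PFI, and with G set to all remaining features it approximates CFI, which matches the reading of PFI and CFI as the two extreme reference points that RFI interpolates between.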


