Inherent Inconsistencies of Feature Importance

06/16/2022
by Nimrod Harel, et al.

The black-box nature of modern machine learning techniques creates a practical and ethical need for explainability. Feature importance aims to meet this need by assigning scores to features, so that humans can understand their influence on predictions. Feature importance can be used to explain predictions in different settings: for the entire sample space or for a specific instance, and for the behavior of the model or for the dependencies in the data themselves. In most cases thus far, however, each of these settings has been studied in isolation. We attempt to develop a sound feature importance score framework by defining a small set of desired properties. Surprisingly, we prove an inconsistency theorem, showing that the expected properties cannot hold simultaneously. To overcome this difficulty, we propose the novel notion of re-partitioning the feature space into separable sets. Such sets are constructed to contain features that exhibit inter-set independence with respect to the target variable. We show that there exists a unique maximal partitioning into separable sets. Moreover, assigning scores to separable sets, instead of to single features, unifies the results of commonly used feature importance scores and eliminates the inconsistencies we demonstrated.
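A toy example (not from the paper, purely illustrative) shows the kind of inconsistency that motivates scoring sets rather than single features: when two features are exact copies, any split of weight between them predicts equally well, so per-feature permutation importance depends on an arbitrary modeling choice, whereas permuting the pair jointly gives the same score for every such model. The sketch below uses plain NumPy; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two duplicated features: any weights w1 + w2 = 1 fit y = x1 perfectly,
# so credit assigned to each feature alone is an artifact of the model.
n = 10_000
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1.copy()])
y = x1.copy()

def mse_increase(predict, X, y, cols):
    """Permutation importance: MSE increase when `cols` are shuffled jointly."""
    base = np.mean((predict(X) - y) ** 2)
    Xp = X.copy()
    perm = rng.permutation(len(X))
    Xp[:, cols] = Xp[perm][:, cols]
    return np.mean((predict(Xp) - y) ** 2) - base

# Two models with identical predictions but different internal weightings.
model_a = lambda X: 1.0 * X[:, 0] + 0.0 * X[:, 1]
model_b = lambda X: 0.5 * X[:, 0] + 0.5 * X[:, 1]

for name, model in [("model_a", model_a), ("model_b", model_b)]:
    print(name,
          mse_increase(model, X, y, [0]),      # feature 0 alone
          mse_increase(model, X, y, [1]),      # feature 1 alone
          mse_increase(model, X, y, [0, 1]))   # the set {0, 1} jointly
```

The single-feature scores disagree sharply between the two (prediction-identical) models, while the joint score for the set is the same for both, which is the intuition behind assigning importance to separable sets.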


