Marginal Contribution Feature Importance – an Axiomatic Approach for The Natural Case

10/15/2020
by Amnon Catav, et al.

When training a predictive model on medical data, the goal is sometimes to gain insight into a certain disease. In such cases, it is common to use feature importance as a tool to highlight significant factors contributing to that disease. As there are many existing methods for computing feature importance scores, understanding their relative merits is not trivial. Further, the diversity of scenarios in which they are used leads to different expectations from the feature importance scores. While it is common to distinguish between local scores, which focus on individual predictions, and global scores, which look at the contribution of a feature to the model, another important division separates model scenarios, in which the goal is to understand the predictions of a given model, from natural scenarios, in which the goal is to understand a phenomenon such as a disease. We develop a set of axioms that represent the properties expected from a feature importance function in the natural scenario and prove that there exists only one function that satisfies all of them: the Marginal Contribution Feature Importance (MCI). We analyze this function for its theoretical and empirical properties and compare it to other feature importance scores. While our focus is the natural scenario, we suggest that our axiomatic approach could be carried out in other scenarios too.
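To make the idea concrete, MCI scores a feature by its maximal marginal contribution over all subsets of the remaining features, given an evaluation function v that measures the predictive power of a feature subset. Below is a minimal brute-force sketch of that definition; the evaluation function `v`, the feature names, and the scores used here are hypothetical toy values, not from the paper, and the exhaustive subset enumeration is exponential and only viable for small feature sets.

```python
from itertools import combinations

def mci(v, features, f):
    """MCI of feature f: the maximum of v(S | {f}) - v(S) over all
    subsets S of the remaining features (brute force, exponential)."""
    rest = [g for g in features if g != f]
    best = float("-inf")
    for r in range(len(rest) + 1):
        for subset in combinations(rest, r):
            s = frozenset(subset)
            best = max(best, v(s | {f}) - v(s))
    return best

# Toy evaluation function: v(S) is the "predictive power" of subset S.
# Features "a" and "b" carry the same (redundant) signal; "c" adds
# independent signal on top.
def v(s):
    score = 0.0
    if "a" in s or "b" in s:
        score += 1.0  # either redundant copy supplies the same information
    if "c" in s:
        score += 0.5
    return score

features = ["a", "b", "c"]
scores = {f: mci(v, features, f) for f in features}
print(scores)  # {'a': 1.0, 'b': 1.0, 'c': 0.5}
```

Note how this differs from averaging-based scores such as Shapley values: in this toy example both redundant features receive full credit (1.0 each) rather than splitting it, which matches the natural-scenario intuition that each factor is individually informative about the phenomenon.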

