Understanding Global Feature Contributions Through Additive Importance Measures

04/01/2020
by Ian Covert, et al.

Understanding the inner workings of complex machine learning models is a long-standing problem, with recent research focusing primarily on local interpretability. To assess the role of individual input features in a global sense, we propose a new feature importance method, Shapley Additive Global importancE (SAGE), a model-agnostic measure of feature importance based on the predictive power associated with each feature. SAGE relates to prior work through the novel framework of additive importance measures, a perspective that unifies numerous other feature importance methods and shows that only SAGE properly accounts for complex feature interactions. We define SAGE using the Shapley value from cooperative game theory, which leads to numerous intuitive and desirable properties. Our experiments apply SAGE to eight datasets, including MNIST and breast cancer subtype classification, and demonstrate its advantages through quantitative and qualitative evaluations.
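The core idea, assigning each feature the Shapley value of its contribution to the model's predictive power, can be sketched with permutation sampling. The code below is a minimal illustration, not the paper's implementation: it assumes feature "removal" is approximated by drawing held-out features from their marginal distribution, and names such as `sage_values` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: the target depends on features 0 and 1 only.
n, d = 2000, 4
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Stand-in "black box": an ordinary least squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda Z: Z @ w

def loss(pred, target):
    return np.mean((pred - target) ** 2)

def sage_values(X, y, predict, n_perms=200):
    """Estimate per-feature importance as the average loss reduction
    when each feature is revealed, over random feature orderings
    (a Monte Carlo approximation of the Shapley value)."""
    n, d = X.shape
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)
        # Empty coalition: every feature replaced by a marginal sample.
        Z = X[rng.integers(0, n, size=n)].copy()
        prev_loss = loss(predict(Z), y)
        for j in perm:
            Z[:, j] = X[:, j]          # reveal feature j
            cur_loss = loss(predict(Z), y)
            phi[j] += prev_loss - cur_loss  # credit the loss reduction to j
            prev_loss = cur_loss
    return phi / n_perms

phi = sage_values(X, y, predict)
print(np.round(phi, 2))
```

On this synthetic data, features 0 and 1 receive the largest estimates and the two noise features receive values near zero, matching the intuition that SAGE credits features in proportion to the predictive power they contribute.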


Related research:

- 04/18/2018: Visualizing the Feature Importance for Black Box Models
- 05/22/2017: A Unified Approach to Interpreting Model Predictions
- 07/04/2023: MDI+: A Flexible Random Forest-Based Feature Importance Framework
- 04/06/2023: Efficient SAGE Estimation via Causal Structure Learning
- 02/03/2023: A Simple Approach for Local and Global Variable Importance in Nonlinear Regression Models
- 06/08/2020: X-SHAP: towards multiplicative explainability of Machine Learning
- 10/12/2019: Measuring Unfairness through Game-Theoretic Interpretability
