DeepAI

Understanding Global Feature Contributions Through Additive Importance Measures

04/01/2020
by Ian Covert, et al.

Understanding the inner workings of complex machine learning models is a long-standing problem, with recent research focusing primarily on local interpretability. To assess the role of individual input features in a global sense, we propose a new feature importance method, Shapley Additive Global importancE (SAGE), a model-agnostic measure of feature importance based on the predictive power associated with each feature. SAGE relates to prior work through the novel framework of additive importance measures, a perspective that unifies numerous other feature importance methods and shows that only SAGE properly accounts for complex feature interactions. We define SAGE using the Shapley value from cooperative game theory, which leads to numerous intuitive and desirable properties. Our experiments apply SAGE to eight datasets, including MNIST and breast cancer subtype classification, and demonstrate its advantages through quantitative and qualitative evaluations.
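The core idea, attributing a model's predictive power to individual features via the Shapley value, can be illustrated with a small sketch. This is not the paper's estimator: it uses a toy linear model, marginal mean imputation to "remove" features (SAGE's formulation is more careful about feature removal), and a simple permutation-sampling Shapley approximation. All names and choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends on features 0 and 1; feature 2 is irrelevant.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500)

def model(X):
    # Stand-in for a trained model: here, simply the true linear function.
    return 2.0 * X[:, 0] + X[:, 1]

def value(subset):
    """Predictive power of a feature subset: reduction in MSE relative to
    a model that sees no features (all columns mean-imputed)."""
    def mse_with(features):
        Xm = X.copy()
        for j in range(X.shape[1]):
            if j not in features:
                # Marginal mean imputation -- a simplification for illustration.
                Xm[:, j] = X[:, j].mean()
        return np.mean((y - model(Xm)) ** 2)
    return mse_with(set()) - mse_with(subset)

def shapley_importance(n_features, n_perms=50):
    # Permutation-sampling estimate of each feature's Shapley value:
    # average the marginal gain of adding feature j over random orderings.
    phi = np.zeros(n_features)
    for _ in range(n_perms):
        perm = rng.permutation(n_features)
        subset, prev = set(), value(set())
        for j in perm:
            subset.add(j)
            cur = value(subset)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_perms

print(shapley_importance(3))  # roughly [4, 1, 0]: importance tracks predictive power
```

Because the value function measures held-out predictive power rather than a single prediction, the resulting scores are global: the irrelevant feature receives a contribution of (essentially) zero, while the informative features split the total MSE reduction according to their marginal usefulness across orderings.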


Related research

Visualizing the Feature Importance for Black Box Models (04/18/2018)
In recent years, a large amount of model-agnostic methods to improve the...

A Unified Approach to Interpreting Model Predictions (05/22/2017)
Understanding why a model makes a certain prediction can be as crucial a...

Measuring Unfairness through Game-Theoretic Interpretability (10/12/2019)
One often finds in the literature connections between measures of fairne...

X-SHAP: towards multiplicative explainability of Machine Learning (06/08/2020)
This paper introduces X-SHAP, a model-agnostic method that assesses mult...

Conditional Feature Importance for Mixed Data (10/06/2022)
Despite the popularity of feature importance measures in interpretable m...

Inferring feature importance with uncertainties in high-dimensional data (09/02/2021)
Estimating feature importance is a significant aspect of explaining data...

Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT) (06/15/2021)
Global model-agnostic feature importance measures either quantify whethe...