Global Explanations of Neural Networks: Mapping the Landscape of Predictions

02/06/2019
by Mark Ibrahim, et al.

A barrier to the wider adoption of neural networks is their lack of interpretability. While local explanation methods can explain individual predictions, most global attribution methods still reduce a network's decisions to a single set of features. In response, we present GAM, an approach for generating global attributions that explains the landscape of neural network predictions across subpopulations. GAM augments each global explanation with the proportion of samples it best explains and specifies which samples each attribution describes. The granularity of the global explanations is also tunable, so more or fewer subpopulations can be detected. We demonstrate that GAM's global explanations 1) recover the known feature importances of simulated data, 2) match the feature weights of interpretable statistical models on real data, and 3) are intuitive to practitioners, as shown in user studies. By making predictions more transparent, GAM can help ensure that neural network decisions are generated for the right reasons.
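
The abstract describes explanations that pair each subpopulation with a representative attribution, the proportion of samples it best explains, and a tunable granularity. The sketch below illustrates that idea only in broad strokes: it assumes per-sample local attributions have already been computed by some attribution method, uses K-means over normalized attribution vectors as a stand-in for GAM's actual grouping procedure (described in the full paper), and treats n_groups as the granularity knob. The function name and its arguments are illustrative, not the paper's API.

import numpy as np
from sklearn.cluster import KMeans

def global_attribution_map(local_attributions, n_groups=3, random_state=0):
    """Cluster per-sample attributions into subpopulation-level explanations.

    local_attributions: array of shape (n_samples, n_features), one local
    attribution vector per prediction. Returns, for each group, a
    representative attribution, the fraction of samples it explains, and
    the indices of the samples it describes.
    """
    # Normalize each attribution vector so clustering compares relative
    # feature importance rather than raw magnitudes.
    norms = np.abs(local_attributions).sum(axis=1, keepdims=True)
    normalized = local_attributions / np.clip(norms, 1e-12, None)

    # Group similar explanations; n_groups plays the role of the
    # tunable granularity mentioned in the abstract.
    labels = KMeans(n_clusters=n_groups, random_state=random_state,
                    n_init=10).fit_predict(normalized)

    groups = []
    for g in range(n_groups):
        members = np.where(labels == g)[0]
        groups.append({
            "attribution": normalized[members].mean(axis=0),  # representative explanation
            "proportion": len(members) / len(normalized),     # share of samples it explains
            "samples": members,                                # which samples it describes
        })
    return groups

Each returned group is one entry in the "landscape" of predictions: a feature-importance vector, the share of the data it accounts for, and the concrete samples behind it.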

Related research

06/19/2020
How does this interaction affect me? Interpretable attribution for feature interactions
Machine learning transparency calls for interpretable explanations of ho...

09/28/2021
Discriminative Attribution from Counterfactuals
We present a method for neural network interpretability by combining fea...

09/27/2021
Time Series Model Attribution Visualizations as Explanations
Attributions are a common local explanation technique for deep learning ...

11/27/2022
Latent SHAP: Toward Practical Human-Interpretable Explanations
Model agnostic feature attribution algorithms (such as SHAP and LIME) ar...

12/03/2018
Sensitivity based Neural Networks Explanations
Although neural networks can achieve very high predictive performance on...

11/02/2022
XAI-Increment: A Novel Approach Leveraging LIME Explanations for Improved Incremental Learning
Explainability of neural network prediction is essential to understand f...

04/04/2019
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Deep learning is increasingly used in decision-making tasks. However, un...
