NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks

09/10/2019
by   Isaac Ahern, et al.

The problem of explaining deep learning models, and model predictions generally, has attracted intensive interest recently. Many successful approaches forgo global approximations in order to provide more faithful local interpretations of the model's behavior. LIME develops multiple interpretable models, each approximating a large neural network on a small region of the data manifold, and SP-LIME aggregates the local models to form a global interpretation. Extending this line of research, we propose a simple yet effective method, NormLIME, for aggregating local models into global and class-specific interpretations. A human user study strongly favored the class-specific interpretations created by NormLIME over other feature importance metrics. Numerical experiments confirm that NormLIME is effective at recognizing important features.
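The aggregation step described in the abstract can be illustrated with a minimal sketch. Assuming each locally explained instance yields a LIME weight vector over features, one hedged reading of a NormLIME-style class-specific importance score is to normalize each local weight vector by the sum of its absolute weights and average the normalized magnitudes over the instances of each class. The function name, array shapes, and exact normalization below are illustrative assumptions, not the paper's verbatim algorithm.

```python
import numpy as np

def normlime_importance(local_weights, labels, num_classes):
    """Sketch of a NormLIME-style aggregation (assumed formulation).

    local_weights: array of shape (num_explanations, num_features),
        one LIME weight vector per locally explained instance.
    labels: predicted class for each explained instance.
    Returns an array of shape (num_classes, num_features) holding
    class-specific feature importance scores.
    """
    local_weights = np.asarray(local_weights, dtype=float)
    labels = np.asarray(labels)
    num_features = local_weights.shape[1]
    importance = np.zeros((num_classes, num_features))

    for c in range(num_classes):
        class_weights = local_weights[labels == c]
        if class_weights.size == 0:
            continue  # no local explanations for this class
        # Normalize each local explanation so its absolute weights sum to 1,
        # then average the normalized magnitudes across the class's instances.
        norms = np.abs(class_weights).sum(axis=1, keepdims=True)
        norms[norms == 0] = 1.0  # guard against all-zero explanations
        importance[c] = np.abs(class_weights / norms).mean(axis=0)

    return importance

# Toy usage: three local explanations over three features, two classes.
scores = normlime_importance(
    [[0.6, -0.2, 0.1], [0.3, 0.0, -0.5], [-0.1, 0.7, 0.2]],
    labels=[0, 1, 0],
    num_classes=2,
)
```

Ranking features by these per-class scores gives the kind of class-specific interpretation evaluated in the paper's user study, though the published method should be consulted for the exact weighting scheme.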

