Gradients of Counterfactuals

11/08/2016
by Mukund Sundararajan, et al.

Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, so that an important input feature can have a tiny gradient. We study several networks and observe that this phenomenon is indeed widespread, across many inputs. We propose examining interior gradients: gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as to a ligand-based virtual screening network with categorical features and an LSTM-based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks and have the attribution property that the feature importance scores sum to the prediction score. Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.
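The core idea can be sketched in a few lines. The toy saturating model below is an illustration, not the paper's experimental setup (the function `f`, the weights `w`, and the number of scaling steps are assumptions): at a large input the plain gradient nearly vanishes, while averaging gradients over the scaled-down counterfactual inputs α·x yields attributions that sum to approximately f(x) − f(0).

```python
import numpy as np

# Toy saturating model (an assumption for illustration, not from the paper):
# f(x) = 1 - exp(-w . x) flattens out for large inputs, so the gradient at
# the actual input is tiny even though the features drive the prediction.
w = np.array([1.0, 2.0])

def f(x):
    return 1.0 - np.exp(-np.dot(w, x))

def grad_f(x):
    return w * np.exp(-np.dot(w, x))

x = np.array([3.0, 3.0])

# Interior gradients: gradients at counterfactual inputs alpha * x.
# A midpoint rule over 200 scaling steps approximates the path integral.
alphas = (np.arange(200) + 0.5) / 200.0
interior = np.array([grad_f(a * x) for a in alphas])

# Averaging the interior gradients and multiplying elementwise by the input
# gives attributions with the property stated in the abstract: they sum
# (approximately) to f(x) - f(0).
attributions = x * interior.mean(axis=0)

print(grad_f(x))        # near zero: the network has saturated
print(attributions)     # clearly nonzero per-feature importances
print(attributions.sum(), f(x) - f(np.zeros(2)))
```

The averaging step is a discretized path integral of gradients from the zero input to x; with a single scaling step (α = 1) it degenerates back to the plain, saturated gradient.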


