A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values

06/11/2018
by Mukund Sundararajan et al.

Local explanation methods, also known as attribution methods, attribute a deep network's prediction to its input (cf. Baehrens et al. (2010)). We respond to the claim from Adebayo et al. (2018) that local explanation methods lack sensitivity, i.e., that DNNs with randomly initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. Further investigation reveals that their findings are due to two choices in their analysis: (a) ignoring the signs of the attributions; and (b) for integrated gradients (IG), including pixels that have zero attribution by choice of the baseline (an auxiliary input relative to which the attributions are computed). When both factors are accounted for, IG attributions for a random network and for the actual network are uncorrelated. Our investigation also sheds light on how these issues affect visualizations, although we note that more work is needed to understand how viewers interpret the difference between random and actual attributions.
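To make the two analysis choices concrete, the following is a minimal sketch in JAX of the kind of re-analysis described above. It assumes a toy two-layer network, a black-image baseline, and hypothetical helper names (predict, integrated_gradients, init_params); the "trained" network here is simply a second random draw, so the sketch illustrates the bookkeeping, not the paper's actual experiments.

import jax
import jax.numpy as jnp
import numpy as np

def predict(params, x):
    # Toy two-layer ReLU network standing in for the DNN under study;
    # returns a scalar class score for a single flattened "image" x.
    w1, b1, w2, b2 = params
    h = jax.nn.relu(x @ w1 + b1)
    return (h @ w2 + b2)[0]

def integrated_gradients(params, x, baseline, steps=64):
    # Riemann-sum approximation of IG along the straight line from the
    # baseline x' to the input x:
    #   IG_i(x) = (x_i - x'_i) * mean_alpha dF/dx_i(x' + alpha * (x - x'))
    grad_fn = jax.grad(predict, argnums=1)
    alphas = jnp.linspace(0.0, 1.0, steps)
    grads = jnp.stack([grad_fn(params, baseline + a * (x - baseline))
                       for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

def init_params(key, d_in=64, d_hid=32):
    k1, k2 = jax.random.split(key)
    return (jax.random.normal(k1, (d_in, d_hid)) / jnp.sqrt(d_in),
            jnp.zeros(d_hid),
            jax.random.normal(k2, (d_hid, 1)) / jnp.sqrt(d_hid),
            jnp.zeros(1))

x = jax.random.uniform(jax.random.PRNGKey(0), (64,))
x = x.at[:16].set(0.0)             # some "black" pixels, as in real images
baseline = jnp.zeros_like(x)       # the black-image baseline

trained = init_params(jax.random.PRNGKey(1))     # stands in for learned weights
random_net = init_params(jax.random.PRNGKey(2))  # randomly initialized weights

ig_trained = integrated_gradients(trained, x, baseline)
ig_random = integrated_gradients(random_net, x, baseline)

# (a) keep the signs of the attributions (no abs()), and
# (b) exclude pixels whose attribution is zero by construction because
#     x_i equals the baseline, so the (x_i - x'_i) factor vanishes.
mask = np.asarray(x != baseline)
r = np.corrcoef(np.asarray(ig_trained)[mask],
                np.asarray(ig_random)[mask])[0, 1]
print(f"signed correlation over non-baseline pixels: {r:.3f}")

The same two adjustments carry over unchanged to a real vision model: compute IG with signs intact and correlate attributions only over pixels that differ from the baseline.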
