Pattern-Guided Integrated Gradients

07/21/2020
by Robert Schwarzenberg, et al.

Integrated Gradients (IG) and PatternAttribution (PA) are two established explainability methods for neural networks. Both methods are theoretically well-founded. However, they were designed to overcome different challenges. In this work, we combine the two methods into a new method, Pattern-Guided Integrated Gradients (PGIG). PGIG inherits important properties from both parent methods and passes stress tests that the originals fail. In addition, we benchmark PGIG against nine alternative explainability approaches (including its parent methods) in a large-scale image degradation experiment and find that it outperforms all of them.
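For context, vanilla Integrated Gradients attributes feature i the quantity (x_i - x'_i) times the average gradient along the straight line from a baseline x' to the input x. The paper's PGIG variant is not specified here, so the sketch below only illustrates plain IG via a midpoint Riemann sum; the function names and the toy linear model are illustrative assumptions, not code from the paper.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    # Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    # using a midpoint Riemann sum over `steps` interpolation points.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy linear model f(x) = w . x, whose gradient is the constant vector w.
w = np.array([1.0, -2.0, 3.0])
grad_fn = lambda x: w
x = np.array([0.5, 0.5, 0.5])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_fn, x, baseline)
# For a linear model IG recovers w_i * (x_i - x'_i) exactly: [0.5, -1.0, 1.5]
```

For any model, the attributions satisfy IG's completeness axiom: they sum to f(x) - f(x'), which is what makes the method a natural yardstick in degradation benchmarks like the one reported above.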


Related research:

- 04/12/2022: Maximum Entropy Baseline for Integrated Gradients. "Integrated Gradients (IG), one of the most popular explainability method..."
- 06/17/2021: Guided Integrated Gradients: An Adaptive Path Method for Removing Noise. "Integrated Gradients (IG) is a commonly used feature attribution method ..."
- 06/13/2022: Geometrically Guided Integrated Gradients. "Interpretability methods for deep neural networks mainly focus on the se..."
- 09/04/2019: Generalized Integrated Gradients: A practical method for explaining diverse ensembles. "We introduce Generalized Integrated Gradients (GIG), a formal extension ..."
- 11/21/2018: Compensated Integrated Gradients to Reliably Interpret EEG Classification. "Integrated gradients are widely employed to evaluate the contribution of..."
- 11/12/2018: Depth Image Upsampling based on Guided Filter with Low Gradient Minimization. "In this paper, we present a novel upsampling framework to enhance the sp..."
- 07/26/2018: Computationally Efficient Measures of Internal Neuron Importance. "The challenge of assigning importance to individual neurons in a network..."
