Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations

12/07/2020
by   Woo-Jeoung Nam, et al.

The transparency of Deep Neural Networks (DNNs) is hampered by their complex internal structures and the nonlinear transformations applied along deep hierarchies. In this paper, we propose a new attribution method, Relative Sectional Propagation (RSP), which fully decomposes the output predictions into attributions that are class-discriminative and exhibit clear objectness. We revisit the shortcomings of backpropagation-based attribution methods, which face trade-offs when decomposing DNNs. We define a hostile factor as an element that interferes with finding the attributions of the target and propagate it in a distinguishable way to overcome the non-suppressed nature of activated neurons. As a result, we can assign bi-polar relevance scores, positive for target attributions and negative for hostile attributions, while keeping each attribution aligned with its importance. We also present purging techniques that prevent the gap between the relevance scores of target and hostile attributions from shrinking during backward propagation by eliminating units that conflict with the channel attribution map. Consequently, our method decomposes the predictions of DNNs with clearer class-discriminativeness and more detailed elucidation of activated neurons than conventional attribution methods. In a verified experimental environment, we report the results of three assessments: (i) the Pointing Game, (ii) mIoU, and (iii) model sensitivity, on the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing backward-decomposition methods and yields distinctive and intuitive visualizations.
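To give a concrete feel for the bi-polar idea described above, the following is a minimal, illustrative sketch of a single backward relevance step through a dense layer that keeps positive (target) and negative (hostile) contributions separate. This is not the paper's actual RSP rule; the function name and the sign-splitting scheme (in the spirit of LRP-style positive/negative decomposition) are assumptions for illustration only.

```python
import numpy as np

def bipolar_relevance_step(a, W, R_out, eps=1e-9):
    """Redistribute output relevance R_out to the inputs of one dense layer,
    splitting per-connection contributions by sign so that supporting (target)
    and interfering (hostile) evidence stay distinct.

    a:     input activations, shape (n_in,), assumed non-negative (post-ReLU)
    W:     weight matrix, shape (n_in, n_out)
    R_out: relevance at the layer output, shape (n_out,)
    Returns (R_pos, R_neg): per-input positive and negative relevance.
    """
    z = a[:, None] * W                  # per-connection contributions
    z_pos = np.clip(z, 0, None)         # supporting (target) evidence
    z_neg = np.clip(z, None, 0)         # interfering (hostile) evidence
    # Normalize each column into fractions, then redistribute R_out.
    frac_pos = z_pos / (z_pos.sum(axis=0, keepdims=True) + eps)
    frac_neg = z_neg / (z_neg.sum(axis=0, keepdims=True) - eps)
    R_pos = frac_pos @ R_out            # >= 0: target attributions
    R_neg = -(frac_neg @ R_out)         # <= 0: hostile attributions
    return R_pos, R_neg

rng = np.random.default_rng(0)
a = rng.random(4)                       # toy post-ReLU activations
W = rng.normal(size=(4, 3))
R_out = np.array([1.0, 0.5, 0.2])
R_pos, R_neg = bipolar_relevance_step(a, W, R_out)
```

Applying such a step layer by layer, while keeping the two sign pools from collapsing into each other, is the kind of backward decomposition the abstract's purging techniques are designed to stabilize.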


Related research

Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks (04/01/2019)

Gradient Hedging for Intensively Exploring Salient Interpretation beyond Neuron Activation (05/23/2022)

Fast Axiomatic Attribution for Neural Networks (11/15/2021)

Learning Propagation Rules for Attribution Map Generation (10/14/2020)

Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution (04/14/2021)

Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units (07/19/2021)

Boundary Attributions Provide Normal (Vector) Explanations (03/20/2021)