Gradient Hedging for Intensively Exploring Salient Interpretation beyond Neuron Activation

05/23/2022
by Woo-Jeoung Nam et al.

Hedging is a strategy for reducing the potential risk of an investment by adopting an opposite position in a related asset. Motivated by this financial technique, we introduce a method for decomposing output predictions into intensive salient attributions by hedging the evidence for a decision. We analyze how conventional approaches treat the evidence for a decision and discuss the paradox of the conservation rule. We then define evidence as the gap between positive and negative influences in the gradient-derived initial contribution maps, and propagate the antagonistic elements as suppressors of that evidence, according to a user-defined criterion on the degree of positive attribution. In addition, we account for the severed or sparse contributions of inactivated neurons, which are mostly irrelevant to a decision, yielding more robust interpretability. We evaluate our method in a verified experimental environment using the pointing game, most-relevant-first region insertion, the outside-inside relevance ratio, and mean average precision on the PASCAL VOC 2007, MS COCO 2014, and ImageNet datasets. The results demonstrate that our method outperforms existing attribution methods, producing distinctive, intensive, and intuitive visualizations while remaining robust and applicable to general models.
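The abstract states the idea at a high level; the actual layer-wise propagation rules appear only in the full paper. As a rough illustration, the sketch below shows how the hedging intuition could look for a single gradient-times-input contribution map. The function name hedged_contribution, the input-level decomposition, and the scalar hedge_ratio (standing in for the user-defined degree of positive attribution) are assumptions made here for illustration, not the paper's method.

    import torch

    def hedged_contribution(model, x, target, hedge_ratio=0.5):
        # Illustrative sketch only: split a gradient-times-input
        # contribution map into supporting (positive) and antagonistic
        # (negative) evidence, then subtract a user-weighted fraction of
        # the negative part, loosely mirroring the hedging idea above.
        x = x.clone().requires_grad_(True)
        score = model(x)[0, target]              # logit of the target class
        score.backward()
        contrib = x.grad * x                     # gradient-derived initial contribution map
        supporting = contrib.clamp(min=0)        # evidence for the decision
        antagonistic = (-contrib).clamp(min=0)   # antagonistic elements (suppressors)
        # hedge_ratio stands in for the user-defined degree of positive attribution
        return supporting - hedge_ratio * antagonistic

In the paper itself, the decomposition is propagated layer by layer through the network and inactivated neurons are handled explicitly; this input-level sketch only conveys the direction of the idea.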

