Generating Attribution Maps with Disentangled Masked Backpropagation

01/17/2021
by Adria Ruiz, et al.

Attribution map visualization has arisen as one of the most effective techniques for understanding the underlying inference process of Convolutional Neural Networks. In this task, the goal is to compute a score for each image pixel reflecting its contribution to the final network output. In this paper, we introduce Disentangled Masked Backpropagation (DMBP), a novel gradient-based method that leverages the piecewise linear nature of ReLU networks to decompose the model function into different linear mappings. This decomposition aims to disentangle the positive, negative and nuisance factors of the attribution maps by learning a set of variables that mask the contribution of each filter during backpropagation. A thorough evaluation over standard architectures (ResNet50 and VGG16) and benchmark datasets (PASCAL VOC and ImageNet) demonstrates that DMBP generates more visually interpretable attribution maps than previous approaches. Additionally, we quantitatively show that the maps produced by our method are more consistent with the true contribution of each pixel to the final network output.
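The two ideas the abstract relies on — that a ReLU network is piecewise linear, so around a given input the output equals a linear mapping of that input, and that per-filter mask variables can modulate how each filter contributes during backpropagation — can be sketched in a toy form. This is a minimal illustrative example, not the paper's implementation: the two-layer network, the `masked_backprop` helper, and the input-times-gradient attribution are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network (weights are illustrative, no biases).
W1 = rng.standard_normal((8, 4))   # input dim 4 -> hidden dim 8
W2 = rng.standard_normal((1, 8))   # hidden dim 8 -> scalar output

def forward(x):
    z1 = W1 @ x
    h1 = np.maximum(z1, 0.0)       # ReLU: the network is piecewise linear in x
    y = W2 @ h1
    return z1, h1, y

def masked_backprop(x, mask):
    """Gradient of the output w.r.t. x, with a per-filter mask variable
    modulating the backward pass (a stand-in for DMBP's learned masks)."""
    z1, _, _ = forward(x)
    grad_h1 = W2.T @ np.ones((1,))        # dy/dh1
    relu_gate = (z1 > 0).astype(float)    # selects the active linear region
    grad_z1 = grad_h1 * relu_gate * mask  # mask each filter's contribution
    grad_x = W1.T @ grad_z1
    return grad_x * x                     # input-times-gradient attribution

x = rng.standard_normal(4)
full = masked_backprop(x, np.ones(8))     # all filters contribute
```

With all masks set to 1 and no biases, the attributions sum exactly to the network output, since the output is locally a linear function of the input; zeroing a filter's mask removes its contribution from the map.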


Related research

10/14/2020 · Learning Propagation Rules for Attribution Map Generation
Prior gradient-based attribution-map methods rely on handcrafted propaga...

09/19/2019 · Testing the robustness of attribution methods for convolutional neural networks in MRI-based Alzheimer's disease classification
Attribution methods are an easy-to-use tool for investigating and valida...

05/23/2022 · Gradient Hedging for Intensively Exploring Salient Interpretation beyond Neuron Activation
Hedging is a strategy for reducing the potential risks in various types ...

12/07/2020 · Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations
The clear transparency of Deep Neural Networks (DNNs) is hampered by com...

10/06/2020 · IS-CAM: Integrated Score-CAM for axiomatic-based explanations
Convolutional Neural Networks have been known as black-box models as hum...

06/15/2023 · Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings
Explainable AI aims to render model behavior understandable by humans, w...

10/28/2020 · Attribution Preservation in Network Compression for Reliable Network Interpretation
Neural networks embedded in safety-sensitive applications such as self-d...
