IS-CAM: Integrated Score-CAM for axiomatic-based explanations

10/06/2020
by Rakshit Naidu, et al.

Convolutional Neural Networks (CNNs) are often regarded as black-box models because humans cannot readily interpret their inner workings. In an effort to make CNNs more interpretable and trustworthy, we propose IS-CAM (Integrated Score-CAM), which introduces an integration operation into the Score-CAM pipeline to produce attribution maps that are both visually sharper and quantitatively stronger. We evaluate IS-CAM on 2000 randomly selected images from the ILSVRC 2012 validation set, demonstrating its versatility across different models and methods.
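The core idea can be sketched in a few lines of PyTorch. The following is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the integration is taken over inputs masked at increasing intensities (in the spirit of Integrated Gradients applied to Score-CAM's masking stage), and the function name `is_cam`, the `n_steps` parameter, and the zero-image baseline are illustrative choices.

```python
import torch
import torch.nn.functional as F

def is_cam(model, image, target_class, layer_acts, n_steps=10):
    """Minimal IS-CAM sketch (illustrative, not the authors' code).

    model:        classifier returning logits of shape (batch, classes)
    image:        input tensor of shape (1, 3, H, W)
    target_class: index of the class to explain
    layer_acts:   activations of the chosen conv layer, shape (1, K, h, w)
    n_steps:      number of integration steps (assumption: the integral is
                  approximated over masks scaled at increasing intensities)
    """
    _, num_maps, _, _ = layer_acts.shape
    H, W = image.shape[-2:]
    saliency = torch.zeros(1, 1, H, W)

    with torch.no_grad():
        # Score-CAM measures the increase in the target score relative
        # to a baseline input (an all-zero image here).
        baseline = model(torch.zeros_like(image))[0, target_class]

        for k in range(num_maps):
            # Upsample the k-th activation map to input size and
            # min-max normalize it into a [0, 1] soft mask.
            act = F.interpolate(layer_acts[:, k:k + 1], size=(H, W),
                                mode="bilinear", align_corners=False)
            a_min, a_max = act.min(), act.max()
            if a_max == a_min:
                continue  # a constant map carries no spatial information
            mask = (act - a_min) / (a_max - a_min)

            # Integration step: average the class score over inputs masked
            # at increasing intensities, rather than Score-CAM's single
            # masked forward pass.
            score = 0.0
            for i in range(1, n_steps + 1):
                masked = image * (mask * i / n_steps)
                score = score + model(masked)[0, target_class]
            weight = score / n_steps - baseline

            saliency = saliency + weight * mask

    # Keep only evidence that positively supports the target class.
    return F.relu(saliency)
```

In practice one would capture `layer_acts` with a forward hook on the last convolutional layer and, as in Score-CAM, softmax-normalize the channel weights; both details are omitted here for brevity.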

Related research

06/23/2023 · Four Axiomatic Characterizations of the Integrated Gradients Attribution Method
Deep neural networks have produced significant progress among machine le...

11/27/2022 · Latent SHAP: Toward Practical Human-Interpretable Explanations
Model agnostic feature attribution algorithms (such as SHAP and LIME) ar...

01/21/2023 · Towards a Measure of Trustworthiness to Evaluate CNNs During Operation
Due to the black-box nature of Convolutional neural networks (CNNs), the con...

01/17/2021 · Generating Attribution Maps with Disentangled Masked Backpropagation
Attribution map visualization has arisen as one of the most effective te...

10/26/2017 · InterpNET: Neural Introspection for Interpretable Deep Learning
Humans are able to explain their reasoning. On the contrary, deep neural...

04/13/2022 · OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data
While 3D object detection in LiDAR point clouds is well-established in a...

10/17/2019 · Effect of Superpixel Aggregation on Explanations in LIME – A Case Study with Biological Data
End-to-end learning with deep neural networks, such as convolutional neu...
