A light-weight method to foster the (Grad)CAM interpretability and explainability of classification networks

09/26/2020
by Alfred Schöttl, et al.

We consider a light-weight method to improve the explainability of localized classification networks. The method incorporates (Grad)CAM maps into the training process by modifying the training loss and does not require additional structural elements. We demonstrate that (Grad)CAM interpretability, as measured by several indicators, can be improved in this way. Because the method is intended to run on embedded systems as well as standard deeper architectures, it relies only on second-order derivatives during training and adds no extra model layers.
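To make the mechanism concrete, here is a minimal PyTorch sketch of how a (Grad)CAM map can be folded into the training loss. The abstract only states that the loss is modified using (Grad)CAM maps and that second-order derivatives arise during training; everything else below is an illustrative assumption. In particular, the model interface (a forward pass returning the last convolutional feature map together with the logits), the entropy-based `focus_penalty`, and the weighting factor `lam` are hypothetical choices, not the paper's actual indicators.

```python
import torch
import torch.nn.functional as F

def gradcam_map(features, logits, target):
    """Differentiable Grad-CAM map for the target classes.

    create_graph=True keeps the gradient computation in the autograd
    graph, so a loss built on this map contributes second-order
    derivatives when .backward() is called."""
    score = logits.gather(1, target.view(-1, 1)).sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)    # GAP over H, W
    return F.relu((weights * features).sum(dim=1))    # shape (B, H, W)

def focus_penalty(cam, eps=1e-8):
    """Hypothetical interpretability indicator: spatial entropy of the
    normalized map; lower entropy means a more concentrated CAM."""
    p = cam.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    return -(p * (p + eps).log()).sum(dim=1).mean()

def train_step(model, x, target, optimizer, lam=0.1):
    """One training step with the CAM-aware loss. `model` is assumed
    to return (last_conv_features, logits)."""
    optimizer.zero_grad()
    features, logits = model(x)                       # features: (B, C, H, W)
    cam = gradcam_map(features, logits, target)
    loss = F.cross_entropy(logits, target) + lam * focus_penalty(cam)
    loss.backward()                # second-order terms enter here
    optimizer.step()
    return loss.item()
```

Because the CAM term is built from first-order gradients of the class score, calling `backward()` on the total loss differentiates through those gradients; this is where the second-order derivatives mentioned in the abstract enter, while the network itself gains no additional layers.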

