Signature Activation: A Sparse Signal View for Holistic Saliency

The adoption of machine learning in healthcare calls for model transparency and explainability. In this work, we introduce Signature Activation, a saliency method that generates holistic, class-agnostic explanations for Convolutional Neural Network (CNN) outputs. Our method exploits the fact that certain kinds of medical images, such as angiograms, have clear foreground and background objects. We provide theoretical justification for our method and demonstrate its potential use in clinical settings by evaluating its efficacy in aiding the detection of lesions in coronary angiograms.
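For orientation only, the sketch below illustrates the general flavor of a class-agnostic, activation-based saliency map: it scores pixels from the network's internal feature maps without conditioning on any predicted class. This is not the paper's Signature Activation procedure (whose details are in the full text); the backbone (torchvision's resnet18), the hooked layer, and the channel-wise L2 aggregation are all illustrative assumptions.

    # Minimal sketch of a generic class-agnostic, activation-based saliency
    # map. NOT the paper's Signature Activation algorithm; backbone, layer
    # choice, and aggregation rule are assumptions for illustration only.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    activations = {}

    def hook(module, inputs, output):
        # Cache the feature maps of the last convolutional stage.
        activations["feat"] = output.detach()

    model.layer4.register_forward_hook(hook)

    @torch.no_grad()
    def saliency_map(image: torch.Tensor) -> torch.Tensor:
        """image: (1, 3, H, W), normalized as the backbone expects.
        Returns an (H, W) saliency map scaled to [0, 1]."""
        model(image)
        feat = activations["feat"]                         # (1, C, h, w)
        # Class-agnostic aggregation: L2 norm over channels; no class
        # logits or gradients are used anywhere.
        sal = feat.pow(2).sum(dim=1, keepdim=True).sqrt()  # (1, 1, h, w)
        sal = F.interpolate(sal, size=image.shape[-2:],
                            mode="bilinear", align_corners=False)
        sal = sal - sal.min()
        return (sal / sal.max().clamp(min=1e-8)).squeeze()

Given a preprocessed angiogram tensor img of shape (1, 3, H, W), saliency_map(img) yields a heatmap that can be overlaid on the input; in images with clear foreground/background separation, such activation-based maps tend to concentrate on foreground structures.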
