Evaluating Post-hoc Interpretability with Intrinsic Interpretability

05/04/2023
by José Pereira Amorim, et al.

Despite Convolutional Neural Networks having reached human-level performance in some medical tasks, their clinical use has been hindered by their lack of interpretability. Two major interpretability strategies have been proposed to tackle this problem: post-hoc methods and intrinsic methods. Although there are several post-hoc methods to interpret deep learning models, there is significant variation between the explanations provided by each method, and it is difficult to validate them due to the lack of ground truth. To address this challenge, we adapted the intrinsically interpretable ProtoPNet to the context of histopathology imaging and compared the attribution maps it produces with the saliency maps produced by post-hoc methods. To evaluate the similarity between the saliency maps and the attribution maps, we adapted 10 saliency metrics from the saliency model literature, and used the breast cancer metastases detection dataset PatchCamelyon, with 327,680 patches of histopathological images of sentinel lymph node sections, to validate the proposed approach. Overall, SmoothGrad and Occlusion were found to have a statistically significantly larger overlap with ProtoPNet, while Deconvolution and LIME were found to have the least.
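To make the comparison concrete, the sketch below illustrates two common ways of scoring agreement between a post-hoc saliency map and a prototype-based attribution map: intersection-over-union of the top-attributed pixels, and Pearson correlation over the full maps. These are generic map-similarity measures of the kind adapted in the saliency-metric literature, not necessarily the exact 10 metrics used in the paper; the function names and the top-20% threshold are illustrative assumptions.

```python
import numpy as np

def normalize(sal):
    """Scale a saliency map to [0, 1]; constant maps become all zeros."""
    sal = sal.astype(float)
    lo, hi = sal.min(), sal.max()
    return (sal - lo) / (hi - lo) if hi > lo else np.zeros_like(sal)

def iou_overlap(sal_a, sal_b, top_frac=0.2):
    """IoU of the top-`top_frac` most salient pixels of each map."""
    a, b = normalize(sal_a), normalize(sal_b)
    k = max(1, int(top_frac * a.size))
    # Binary masks selecting the k highest-attribution pixels per map.
    mask_a = a >= np.sort(a.ravel())[-k]
    mask_b = b >= np.sort(b.ravel())[-k]
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def pearson_cc(sal_a, sal_b):
    """Pearson correlation between two flattened saliency maps."""
    return float(np.corrcoef(sal_a.ravel(), sal_b.ravel())[0, 1])
```

Under this scheme, a post-hoc map that highlights the same tissue regions as ProtoPNet's prototype activations scores close to 1 on both measures, which matches the kind of overlap comparison reported for SmoothGrad and Occlusion versus Deconvolution and LIME.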


