Improving Interpretability for Computer-aided Diagnosis tools on Whole Slide Imaging with Multiple Instance Learning and Gradient-based Explanations

09/29/2020
by   Antoine Pirovano, et al.

Deep learning methods are widely used in medical applications to assist doctors in their daily routines. While their performance reaches expert level, interpretability (highlighting how and what a trained model has learned, and why it makes a specific decision) is the next important challenge that deep learning methods must address before being fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification. We formalize the design of WSI classification architectures and propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning context. We aim to explain how the decision is made based on tile-level scoring, how these tile scores are decided, and which features are used and relevant for the task. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance by more than 29%.
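To make the tile-scoring idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: a MIL head that assigns a score to each tile and a gradient-based per-tile relevance derived from the slide-level prediction. This is not the authors' architecture; TileScoringMIL, tile_saliency, the top-k aggregation, and the feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TileScoringMIL(nn.Module):
    """Toy MIL head: scores each tile embedding, aggregates by top-k mean."""
    def __init__(self, feat_dim=512, k=8):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)  # one score per tile
        self.k = k

    def forward(self, tile_feats):  # tile_feats: (n_tiles, feat_dim)
        scores = self.scorer(tile_feats).squeeze(-1)  # (n_tiles,)
        slide_logit = torch.topk(scores, min(self.k, scores.numel())).values.mean()
        return slide_logit, scores

def tile_saliency(model, tile_feats):
    """Gradient of the slide logit w.r.t. each tile's features; the L2 norm
    per tile gives a simple gradient-based value for a slide-level heat-map."""
    tile_feats = tile_feats.clone().requires_grad_(True)
    slide_logit, _ = model(tile_feats)
    slide_logit.backward()
    return tile_feats.grad.norm(dim=-1)  # (n_tiles,)

# Usage with features from a frozen tile encoder (e.g., a ResNet):
model = TileScoringMIL()
feats = torch.randn(100, 512)        # 100 tiles, 512-d embeddings
heat = tile_saliency(model, feats)   # per-tile relevance values
```

Under these assumptions, the per-tile gradient norms can be mapped back to tile coordinates on the slide to render a heat-map, which is the general shape of a gradient-based explanation in the MIL setting.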

Related research

- Explainable Deep Learning Methods in Medical Diagnosis: A Survey (05/10/2022)
- Do Input Gradients Highlight Discriminative Features? (02/25/2021)
- Usefulness of interpretability methods to explain deep learning based plant stress phenotyping (07/14/2020)
- Gradient-Based Interpretability Methods and Binarized Neural Networks (06/23/2021)
- Deep Learning Under the Microscope: Improving the Interpretability of Medical Imaging Neural Networks (04/05/2019)
- On Model Explanations with Transferable Neural Pathways (09/18/2023)
- SHAMSUL: Simultaneous Heatmap-Analysis to investigate Medical Significance Utilizing Local interpretability methods (07/16/2023)
