Evaluation of importance estimators in deep learning classifiers for Computed Tomography

09/30/2022
by Lennart Brocki, et al.

Deep learning has shown superb performance in detecting objects and classifying images, holding great promise for the analysis of medical images. Translating this success to medical imaging, where doctors need to understand the underlying process, requires the capability to interpret and explain the predictions of neural networks. Interpretability of deep neural networks often relies on estimating the importance of input features (e.g., pixels) with respect to the outcome (e.g., class probability). However, a number of importance estimators (also known as saliency maps) have been developed, and it is unclear which ones are more relevant for medical imaging applications. In the present work, we investigated the performance of several importance estimators in explaining the classification of computed tomography (CT) images by a deep convolutional network, using three distinct evaluation metrics. First, model-centric fidelity measures the decrease in model accuracy when certain inputs are perturbed. Second, concordance between importance scores and expert-defined segmentation masks is measured at the pixel level by receiver operating characteristic (ROC) curves. Third, region-wise overlap between an XRAI-based map and the segmentation mask is measured by the Dice similarity coefficient (DSC). Overall, two versions of SmoothGrad topped the fidelity and ROC rankings, whereas both Integrated Gradients and SmoothGrad excelled in the DSC evaluation. Interestingly, there was a critical discrepancy between the model-centric (fidelity) and human-centric (ROC and DSC) evaluations. The expert expectation and intuition embedded in segmentation masks do not necessarily align with how the model arrived at its prediction. Understanding this difference in interpretability would help harness the power of deep learning in medicine.
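
To make the three evaluation metrics concrete, the following is a minimal Python sketch of how each could be computed. It is not the authors' implementation; `model`, `images`, `labels`, `saliencies`, `region`, and `mask` are hypothetical placeholders (a trained classifier, CT images, ground-truth labels, per-pixel importance maps, a binarized XRAI region, and an expert segmentation mask, respectively), and the zero-baseline perturbation is one common choice among several.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def fidelity_accuracy(model, images, labels, saliencies, fraction=0.1):
    """Model-centric fidelity: mask the top-`fraction` most important
    pixels of each image and report the model's remaining accuracy.
    A faithful importance estimator should cause a steep accuracy drop."""
    perturbed = []
    for img, sal in zip(images, saliencies):
        k = int(fraction * sal.size)
        top = np.argsort(sal.ravel())[::-1][:k]  # most important pixel indices
        flat = img.ravel().copy()
        flat[top] = 0.0                          # simple zero-baseline perturbation
        perturbed.append(flat.reshape(img.shape))
    preds = model.predict(np.stack(perturbed))   # hypothetical predict -> class ids
    return float(np.mean(preds == labels))


def pixelwise_roc_auc(saliency, mask):
    """Human-centric concordance: pixels inside the expert mask are
    positives, importance scores provide the ranking; summarize the
    pixel-level agreement with ROC AUC."""
    return roc_auc_score(mask.ravel().astype(int), saliency.ravel())


def dice_coefficient(region, mask):
    """Region-wise overlap (DSC) between a binarized importance region
    (e.g., thresholded XRAI output) and the expert segmentation mask."""
    a, b = region.astype(bool), mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0
```

Note that the fidelity score needs no human annotation, whereas the ROC and DSC scores compare importance maps against expert annotations; this split mirrors the model-centric versus human-centric discrepancy reported in the abstract.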
