Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis

09/26/2020
by Amitojdeep Singh, et al.

Background: The lack of explanations for the decisions made by algorithms such as deep learning models has hampered their acceptance by the clinical community, despite highly accurate results on multiple problems. Recently, attribution methods have emerged for explaining deep learning models, and they have been tested on medical imaging problems. However, the performance of attribution methods is typically compared on standard machine learning datasets rather than on medical images. In this study, we perform a comparative analysis to determine the explainability method most suitable for retinal OCT diagnosis.

Methods: A widely used deep learning model, Inception v3, was trained to diagnose three retinal diseases: choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations produced by 13 different attribution methods were rated for clinical significance by a panel of 14 clinicians, and feedback was obtained from the clinicians on the current and future scope of such methods.

Results: Deep Taylor, an attribution method based on a Taylor series expansion, was rated highest by the clinicians, with a median rating of 3.85/5. It was followed by two other attribution methods, Guided backpropagation and SHAP (SHapley Additive exPlanations).

Conclusion: Explanations of deep learning models can make them more transparent for clinical diagnosis. This study compared different explanation methods in the context of retinal OCT diagnosis and found that the best-performing method may not be the one considered best for other deep learning tasks. Overall, there was a high degree of acceptance from the clinicians surveyed in the study.

Keywords: explainable AI, deep learning, machine learning, image processing, optical coherence tomography, retina, diabetic macular edema, choroidal neovascularization, drusen
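To illustrate the kind of pipeline the Methods section describes, the sketch below shows how an attribution map could be generated for a single OCT B-scan from an Inception v3 classifier. This is a minimal, hedged example and not the authors' code: it assumes a PyTorch/torchvision Inception v3 and the Captum implementation of Guided backpropagation (one of the 13 rated methods), with a placeholder input and an assumed four-class output (the three diseases plus normal).

```python
# Minimal sketch, not the paper's pipeline. Assumes PyTorch, torchvision, and Captum.
import torch
from torchvision.models import inception_v3
from captum.attr import GuidedBackprop

# Assumed four output classes (CNV, DME, drusen, normal); adjust to the trained model.
model = inception_v3(weights=None, aux_logits=False, num_classes=4)
model.eval()

# Placeholder for one OCT B-scan, resized to 299x299 and replicated to 3 channels.
oct_scan = torch.rand(1, 3, 299, 299)

# Use the predicted class as the attribution target.
with torch.no_grad():
    pred_class = model(oct_scan).argmax(dim=1).item()

# Guided backpropagation: gradient of the class score w.r.t. the input,
# with negative gradients suppressed at each ReLU.
gbp = GuidedBackprop(model)
attribution = gbp.attribute(oct_scan, target=pred_class)  # same shape as the input

# Collapse the channel dimension into a single saliency map for overlay on the scan.
saliency = attribution.abs().sum(dim=1).squeeze(0)
print(saliency.shape)  # torch.Size([299, 299])
```

In the study itself, maps like this one (from Deep Taylor, Guided backpropagation, SHAP, and the other methods) were what the panel of clinicians rated for clinical significance.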

