Identification and Visualization of the Underlying Independent Causes of the Diagnostic of Diabetic Retinopathy made by a Deep Learning Classifier

09/23/2018
by Jordi de la Torre, et al.

Interpretability is a key factor in the design of automatic classifiers for medical diagnosis. Deep learning models have proven to be very effective classifiers when trained in a supervised way with enough data, but the main concern is the difficulty of inferring rational interpretations from them. In recent years, various attempts have been made to convert deep learning classifiers from high-confidence statistical black boxes into self-explanatory models. In this paper we go a step further in the generation of explanations by identifying the independent causes that a deep learning model uses to classify an image into a certain class, combining Independent Component Analysis with a Score Visualization technique. We study the medical problem of classifying an eye fundus image into 5 levels of Diabetic Retinopathy and conclude that only 3 independent components are enough to differentiate and correctly classify the 5 standard disease grades. We propose a method for visualizing these components and detecting lesions from the generated visual maps.
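The abstract describes running Independent Component Analysis over the representation learned by a trained retinopathy classifier and inspecting how few components separate the 5 grades. Below is a minimal sketch of one way such an analysis could be set up; the names `model`, `images`, and `labels`, the use of the penultimate layer as the feature space, and the choice of scikit-learn's FastICA are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
import tensorflow as tf
from sklearn.decomposition import FastICA

def extract_features(model, images):
    """Return penultimate-layer activations for a batch of fundus images."""
    feature_extractor = tf.keras.Model(
        inputs=model.input, outputs=model.layers[-2].output)
    return feature_extractor.predict(images, verbose=0)

def fit_independent_components(features, n_components=3, seed=0):
    """Fit ICA on the classifier's feature space (3 components, as in the paper)."""
    ica = FastICA(n_components=n_components, random_state=seed)
    sources = ica.fit_transform(features)  # shape: (n_samples, n_components)
    return ica, sources

def grade_means(sources, labels, n_grades=5):
    """Mean of each independent component per retinopathy grade (labels 0-4)."""
    return np.array([sources[labels == g].mean(axis=0) for g in range(n_grades)])

# Example usage (hypothetical model and data):
# feats = extract_features(model, images)
# ica, sources = fit_independent_components(feats)
# print(grade_means(sources, labels))  # how the 3 components vary across grades

A score-visualization step would then map each component's contribution back onto the input image to highlight candidate lesions; the back-projection itself is model-specific and is not sketched here.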


research · 12/21/2017
A Deep Learning Interpretable Classifier for Diabetic Retinopathy Disease Grading
Deep neural network models have been proven to be very successful in ima...

research · 06/16/2020
Visualization for Histopathology Images using Graph Convolutional Neural Networks
With the increase in the use of deep learning for computer-aided diagnos...

research · 10/16/2021
TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
Clinicians are often very sceptical about applying automatic image proce...

research · 06/05/2023
Interpretable Alzheimer's Disease Classification Via a Contrastive Diffusion Autoencoder
In visual object classification, humans often justify their choices by c...

research · 06/01/2018
Producing radiologist-quality reports for interpretable artificial intelligence
Current approaches to explaining the decisions of deep learning systems ...

research · 12/17/2022
Two-sample test based on Self-Organizing Maps
Machine-learning classifiers can be leveraged as a two-sample statistica...

research · 11/26/2017
An Introduction to Deep Visual Explanation
The practical impact of deep learning on complex supervised learning pro...
