INSightR-Net: Interpretable Neural Network for Regression using Similarity-based Comparisons to Prototypical Examples

07/31/2022
by Linde S. Hesse, et al.

Convolutional neural networks (CNNs) have shown exceptional performance on a range of medical imaging tasks. However, conventional CNNs cannot explain their reasoning process, which limits their adoption in clinical practice. In this work, we propose an inherently interpretable CNN for regression using similarity-based comparisons (INSightR-Net) and demonstrate our method on the task of diabetic retinopathy grading. A prototype layer incorporated into the architecture enables visualization of the image regions that are most similar to learned prototypes. The final prediction is then intuitively modeled as a mean of prototype labels, weighted by the similarities. INSightR-Net achieved prediction performance competitive with a ResNet baseline, showing that it is not necessary to compromise performance for interpretability. Furthermore, we quantified the quality of our explanations using sparsity and diversity, two concepts considered important for a good explanation, and demonstrated the effect of several parameters on the latent space embeddings.
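The similarity-weighted prediction described in the abstract can be made concrete with a short sketch. This is not the authors' code: the tensor shapes, the function name, and the distance-to-similarity mapping (a log-ratio of the kind used in ProtoPNet-style models) are assumptions for illustration only.

```python
import torch

def insightr_predict(features, prototypes, proto_labels, eps=1e-6):
    """Regression as a similarity-weighted mean of prototype labels.

    features:     (B, D) latent embeddings of a batch of images
    prototypes:   (P, D) learned prototype vectors
    proto_labels: (P,)   regression label attached to each prototype
    """
    # Squared L2 distance from every embedding to every prototype: (B, P)
    dists = torch.cdist(features, prototypes) ** 2
    # Turn distances into similarities (small distance -> large similarity).
    # The log-ratio below is one common choice, not necessarily the paper's
    # exact similarity function.
    sims = torch.log((dists + 1.0) / (dists + eps))
    # Normalize similarities into weights and average the prototype labels.
    weights = sims / sims.sum(dim=1, keepdim=True)
    return weights @ proto_labels  # (B,) predicted grades

# Example: 4 images, 10 prototypes, 64-dimensional latent space,
# diabetic retinopathy grades in [0, 4]
pred = insightr_predict(torch.randn(4, 64), torch.randn(10, 64),
                        torch.rand(10) * 4)
```

Because every prediction is a weighted mean of prototype labels, each output can be traced back to the prototypes (and the corresponding image regions) that contributed most, which is what ties the explanation directly to the prediction.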

Related research:

06/16/2023 - Prototype Learning for Explainable Regression
The lack of explainability limits the adoption of deep learning models i...

05/31/2018 - DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation
We propose DeepMiner, a framework to discover interpretable representati...

10/13/2017 - Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Deep neural networks are widely used for classification. These deep mode...

07/19/2023 - Interpreting and Correcting Medical Image Classification with PIP-Net
Part-prototype models are explainable-by-design image classifiers, and a...

12/07/2022 - Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
Prototype-based interpretability methods provide intuitive explanations ...

11/11/2020 - Deja vu from the SVM Era: Example-based Explanations with Outlier Detection
Understanding the features that contributed to a prediction is important...

01/19/2022 - Cognitive Explainers of Graph Neural Networks Based on Medical Concepts
Although deep neural networks (DNN) have achieved state-of-the-art perfo...
