Prototype Learning for Explainable Regression

06/16/2023
by Linde S. Hesse, et al.

The lack of explainability limits the adoption of deep learning models in clinical practice. While methods exist to improve the understanding of such models, they are mainly saliency-based and developed for classification, even though many important tasks in medical imaging are continuous regression problems. In this work, we therefore present ExPeRT, an explainable prototype-based model specifically designed for regression tasks. The proposed model predicts a sample's label as a weighted mean of the labels of a set of learned prototypes, with weights derived from distances in latent space. These latent distances are regularized to be proportional to label differences, and each prototype can be visualized as a sample from the training set. Image-level distances are in turn constructed from patch-level distances, in which the patches of the two images are structurally matched using optimal transport. We demonstrate the proposed model on brain age prediction with two imaging datasets: adult MRI and fetal ultrasound. Our approach achieves state-of-the-art prediction performance while providing insight into the model's reasoning process.
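As a minimal sketch of the prediction mechanism described in the abstract, the snippet below computes an image-level distance to each prototype via entropic optimal transport (Sinkhorn iterations) over patch embeddings, then predicts with a distance-weighted mean of prototype labels. All names and parameters (sinkhorn_distance, predict_label, eps, temperature) are illustrative assumptions; ExPeRT's exact distance construction, regularization, and matching formulation are those of the paper, not this sketch.

```python
import numpy as np

def sinkhorn_distance(cost, eps=1.0, n_iters=200):
    """Entropic-regularized OT cost between two uniform patch
    distributions, given a pairwise patch cost matrix (n x m).
    A larger eps is numerically safer; log-domain Sinkhorn is the
    standard remedy for small eps."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)              # uniform mass on query patches
    b = np.full(m, 1.0 / m)              # uniform mass on prototype patches
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):             # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]   # soft patch-to-patch matching
    return float(np.sum(plan * cost))    # image-level distance

def predict_label(query_patches, prototypes, prototype_labels, temperature=1.0):
    """Weighted mean of prototype labels; weights decay with the
    OT-based image-level distance to each prototype."""
    dists = []
    for proto_patches in prototypes:
        # Euclidean cost between every query/prototype patch pair
        diff = query_patches[:, None, :] - proto_patches[None, :, :]
        cost = np.linalg.norm(diff, axis=-1)
        dists.append(sinkhorn_distance(cost))
    weights = np.exp(-np.array(dists) / temperature)
    weights /= weights.sum()
    return float(weights @ np.asarray(prototype_labels, dtype=float))

# Toy usage: 16 query patches, 5 prototypes of 16 patches each, 32-d latent space.
rng = np.random.default_rng(0)
query = rng.normal(size=(16, 32))
protos = [rng.normal(size=(16, 32)) for _ in range(5)]
labels = [25.0, 40.0, 55.0, 70.0, 85.0]   # e.g. brain ages of prototype samples
print(predict_label(query, protos, labels))
```

Entropic OT is a natural choice here because it yields a soft, fully differentiable patch matching, so the distance computation can sit inside an end-to-end trained network.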
