Interpretable Alzheimer's Disease Classification Via a Contrastive Diffusion Autoencoder

06/05/2023
by Ayodeji Ijishakin et al.

In visual object classification, humans often justify their choices by comparing objects to prototypical examples of that class. We may therefore increase the interpretability of deep learning models by imbuing them with a similar style of reasoning. In this work, we apply this principle by classifying Alzheimer's disease based on the similarity of images to training examples within the latent space. We combine a contrastive loss with a diffusion autoencoder backbone to produce a semantically meaningful latent space, such that neighbouring latents share similar image-level features. We achieve classification accuracy comparable to black-box approaches on a dataset of 2D MRI images, whilst producing human-interpretable model explanations. This work therefore contributes to the ongoing development of accurate and interpretable deep learning within medical imaging.
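The core idea of the abstract, training latents with a contrastive loss so that same-class examples cluster together, then classifying a query by its similarity to labelled training latents, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the supervised-contrastive formulation, and the k-nearest-neighbour decision rule are assumptions standing in for the paper's diffusion-autoencoder pipeline.

```python
import numpy as np

def contrastive_loss(z, labels, temperature=0.5):
    """Supervised contrastive loss over a batch of latents z (n, d).

    Anchors are pulled toward same-class latents (positives) and pushed
    away from all other latents, so neighbourhoods become class-consistent.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity space
    sim = z @ z.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = np.sum([np.exp(sim[i, j]) for j in range(n) if j != i])
        loss -= np.mean([sim[i, j] - np.log(denom) for j in positives])
    return loss / n

def classify_by_similarity(z_query, z_train, y_train, k=5):
    """Predict via majority vote of the k most similar training latents.

    Returns the prediction together with the indices of those neighbours,
    which serve as the human-interpretable explanation: "classified as X
    because it resembles these training examples".
    """
    z_query = z_query / np.linalg.norm(z_query)
    z_train = z_train / np.linalg.norm(z_train, axis=1, keepdims=True)
    sims = z_train @ z_query
    nearest = np.argsort(sims)[::-1][:k]
    values, counts = np.unique(y_train[nearest], return_counts=True)
    return values[np.argmax(counts)], nearest
```

In the paper's setting the latents would come from the diffusion autoencoder's semantic encoder; here any embedding with class-clustered structure makes the neighbour indices a direct, prototype-style explanation of the prediction.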


research
06/14/2019

Global and Local Interpretability for Cardiac MRI Classification

Deep learning methods for classifying medical images have demonstrated i...
research
11/14/2021

Interpretable ECG classification via a query-based latent space traversal (qLST)

Electrocardiography (ECG) is an effective and non-invasive diagnostic to...
research
10/13/2017

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions

Deep neural networks are widely used for classification. These deep mode...
research
12/21/2017

A Deep Learning Interpretable Classifier for Diabetic Retinopathy Disease Grading

Deep neural network models have been proven to be very successful in ima...
research
01/19/2021

Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images

As AI-based medical devices are becoming more common in imaging fields l...
research
06/25/2019

Interpretable Image Recognition with Hierarchical Prototypes

Vision models are interpretable when they classify objects on the basis ...
