Scalable, Axiomatic Explanations of Deep Alzheimer's Diagnosis from Heterogeneous Data

07/13/2021
by Sebastian Pölsterl, et al.

Deep Neural Networks (DNNs) have enormous potential to learn from complex biomedical data. In particular, DNNs have been used to seamlessly fuse heterogeneous information from neuroanatomy, genetics, biomarkers, and neuropsychological tests for highly accurate Alzheimer's disease diagnosis. However, their black-box nature remains a barrier to the adoption of such systems in the clinic, where interpretability is absolutely essential. We propose Shapley Value Explanation of Heterogeneous Neural Networks (SVEHNN) for explaining the Alzheimer's diagnosis made by a DNN from the 3D point cloud of the neuroanatomy and tabular biomarkers. Our explanations are based on the Shapley value, which is the unique method that satisfies all fundamental axioms for local explanations previously established in the literature. Thus, SVEHNN has many desirable characteristics that previous work on interpretability for medical decision making lacks. To avoid the exponential time complexity of the Shapley value, we propose to transform a given DNN into a Lightweight Probabilistic Deep Network without re-training, thus achieving a complexity that is only quadratic in the number of features. In our experiments on synthetic and real data, we show that we can closely approximate the exact Shapley value with a dramatically reduced runtime and can reveal the hidden knowledge the network has learned from the data.
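To illustrate the exponential cost the abstract refers to, a brute-force Shapley value computation enumerates all 2^(n-1) coalitions of the remaining features for each feature. The sketch below is not the SVEHNN method itself, only a minimal illustration of the exact Shapley value; the linear `value_fn` with a zero baseline is a hypothetical stand-in for a trained network.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of 'present' feature indices to the
    model output. This enumeration is O(2^n) per feature, which is
    the complexity SVEHNN reduces to quadratic.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

# Hypothetical value function: a linear model with absent features
# set to a zero baseline. For a linear model, the Shapley value of
# feature i is exactly weights[i] * x[i].
weights = [0.5, -1.0, 2.0]
x = [1.0, 2.0, 3.0]
v = lambda S: sum(weights[j] * x[j] for j in S)
print(shapley_values(v, 3))  # → [0.5, -2.0, 6.0]
```

By the efficiency axiom, the values sum to the model output for the full feature set, one of the axioms that make the Shapley value the unique attribution method mentioned above.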


research
08/06/2019

Explaining Deep Neural Networks Using Spectrum-Based Fault Localization

Deep neural networks (DNNs) increasingly replace traditionally developed...
research
03/13/2023

Don't PANIC: Prototypical Additive Neural Network for Interpretable Classification of Alzheimer's Disease

Alzheimer's disease (AD) has a complex and multifactorial etiology, whic...
research
03/01/2022

Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization

Deep neural networks (DNNs) can be useful within the marine robotics fie...
research
07/19/2021

Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units

As interpretability has been pointed out as the obstacle to the adoption...
research
08/23/2021

Explaining Bayesian Neural Networks

To make advanced learning machines such as Deep Neural Networks (DNNs) m...
research
02/05/2021

Convolutional Neural Network Interpretability with General Pattern Theory

Ongoing efforts to understand deep neural networks (DNN) have provided m...
research
08/12/2021

Alzheimer's Disease Diagnosis via Deep Factorization Machine Models

The current state-of-the-art deep neural networks (DNNs) for Alzheimer's...
