Locally Interpretable Predictions of Parkinson's Disease Progression

03/20/2020
by Qiaomei Li, et al.

In precision medicine, machine learning techniques are commonly proposed to aid physicians in the early screening of chronic diseases such as Parkinson's disease. These automated screening procedures should be interpretable by a clinician, who must explain the decision-making process to patients for informed consent. However, the methods that typically achieve the highest accuracy on early screening data are complex black-box models. In this paper, we provide a novel approach for explaining black-box model predictions of Parkinson's disease progression that yields high-fidelity explanations with lower model complexity. Specifically, we use the Parkinson's Progression Markers Initiative (PPMI) data set to cluster patients based on the trajectory of their disease progression; this clustering can be used to predict how a patient's symptoms are likely to develop given initial screening data. We then develop a black-box (random forest) model for predicting which cluster a patient belongs to, along with a method for generating local explainers for these predictions. Our local explainer methodology uses a computationally efficient information filter to include only the most relevant features. We also develop a global explainer methodology and empirically validate its performance on the PPMI data set, showing that our approach may Pareto-dominate existing techniques on the trade-off between fidelity and coverage. Such tools should prove useful for deploying medical screening models in practice by providing explainers with high fidelity and significantly lower functional complexity.
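The abstract describes a three-stage pipeline: cluster patients by progression trajectory, train a black-box classifier from baseline screening data to trajectory cluster, and fit a local surrogate explainer whose inputs are pruned by an information filter. The sketch below, in Python with scikit-learn, shows one way such a pipeline could look. It is a minimal illustration on synthetic stand-in data, not the authors' implementation: the variable names (trajectories, X_baseline, explain_locally), the choice of k-means with three clusters, the mutual-information filter, and the ridge surrogate are all assumptions made here for concreteness.

```python
# Minimal sketch of the cluster -> black box -> filtered local surrogate
# pipeline. All data, names, and hyperparameters below are illustrative
# assumptions, not the paper's actual method or the PPMI data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins: per-patient longitudinal symptom scores (trajectories) and
# baseline screening features.
n_patients, n_visits, n_features = 200, 8, 30
trajectories = rng.normal(size=(n_patients, n_visits))  # e.g., symptom scores over visits
X_baseline = rng.normal(size=(n_patients, n_features))  # initial screening data

# Step 1: cluster patients by the shape of their progression trajectory.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)

# Step 2: black-box (random forest) model mapping baseline screening
# features to the predicted trajectory cluster.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_baseline, clusters)

# Step 3: LIME-style local explainer with an information filter.
# Perturb around the patient of interest, keep only the k features with
# the highest mutual information about the black box's local predictions,
# and fit a simple linear surrogate on that reduced feature set.
def explain_locally(x0, k=5, n_samples=500, scale=0.5):
    X_local = x0 + scale * rng.normal(size=(n_samples, n_features))
    y_local = black_box.predict_proba(X_local)[:, 0]  # prob. of first cluster
    mi = mutual_info_classif(X_local, black_box.predict(X_local), random_state=0)
    keep = np.argsort(mi)[-k:]                        # the information filter
    surrogate = Ridge(alpha=1.0).fit(X_local[:, keep], y_local)
    return dict(zip(keep.tolist(), surrogate.coef_))

weights = explain_locally(X_baseline[0])
print(weights)  # feature index -> local importance for this patient
```

The design point the sketch is meant to convey: filtering features before fitting the surrogate keeps the explainer's functional complexity low (few terms per explanation), which is exactly the fidelity-versus-complexity trade-off the abstract claims to improve.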


