Assisting clinical practice with fuzzy probabilistic decision trees

04/16/2023
by Emma L. Ambag, et al.

The need for fully human-understandable models is increasingly recognised as a central theme in AI research. Acceptance of AI models to assist decision making in sensitive domains will grow when these models are interpretable, a trend that upcoming regulations will amplify. One killer application of interpretable AI is medical practice, which can benefit from accurate decision-support methodologies that inherently generate trust. In this work, we propose FPT (MedFP), a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice. The approach is fully interpretable, as it allows clinicians to generate, control and verify the entire diagnostic procedure; one of the methodology's strengths is its capability to reduce the frequency of misdiagnoses by providing estimates of uncertainty and counterfactuals. We apply the approach as a proof of concept to two real medical scenarios: classifying malignant thyroid nodules and predicting the risk of progression in chronic kidney disease patients. Our results show that probabilistic fuzzy decision trees can provide interpretable support to clinicians; furthermore, introducing fuzzy variables into the probabilistic model captures significant nuances that are lost under the crisp thresholds of traditional probabilistic decision trees. We show that FPT and its predictions can assist clinical practice intuitively, through a user-friendly interface designed specifically for this purpose. Finally, we discuss the interpretability of the FPT model.
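The abstract contrasts crisp thresholds with fuzzy variables at the split nodes of a probabilistic tree. The following is a minimal illustrative sketch of that idea, not the paper's actual MedFP implementation: all function names, the sigmoid membership function, and the leaf probabilities are hypothetical choices made for illustration. A crisp split routes a patient entirely down one branch, while a fuzzy split assigns a membership degree to each branch and blends the leaves' class probabilities accordingly, so a measurement just above a threshold is treated almost the same as one just below it.

```python
import math

def crisp_split(x, threshold):
    """Crisp decision: all weight goes to the 'high' branch or none."""
    return 1.0 if x >= threshold else 0.0

def fuzzy_split(x, threshold, slope=1.0):
    """Sigmoid membership: degree to which x satisfies 'x >= threshold'.

    Values far above the threshold approach 1, far below approach 0,
    and values near the threshold receive intermediate degrees.
    """
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def predict(x, threshold, p_high, p_low, membership):
    """Blend the two branches' class probabilities by membership degree."""
    mu = membership(x, threshold)
    return mu * p_high + (1.0 - mu) * p_low

# A measurement barely above the threshold: the crisp tree commits
# fully to the 'high' branch, the fuzzy tree blends both branches.
print(predict(5.01, 5.0, 0.9, 0.1, crisp_split))  # 0.9
print(predict(5.01, 5.0, 0.9, 0.1, fuzzy_split))  # close to 0.5
```

This toy example shows the nuance the abstract refers to: near the threshold the fuzzy prediction stays close to 0.5, signalling borderline evidence, whereas the crisp prediction jumps discontinuously, which is one way a hard threshold can encourage an overconfident misdiagnosis.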


Related research

03/15/2022 · POETREE: Interpretable Policy Learning with Adaptive Decision Trees
Building models of human decision-making from observed behaviour is crit...

04/15/2020 · Interpretable Probabilistic Password Strength Meters via Deep Learning
Probabilistic password strength meters have been proved to be the most a...

10/16/2022 · This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text
The use of deep neural models for diagnosis prediction from clinical tex...

08/18/2023 · Causal Interpretable Progression Trajectory Analysis of Chronic Disease
Chronic disease is the leading cause of death, emphasizing the need for ...

11/27/2018 · What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems
Recent efforts in Machine Learning (ML) interpretability have focused on...

08/17/2022 · Quality Diversity Evolutionary Learning of Decision Trees
Addressing the need for explainable Machine Learning has emerged as one ...

08/29/2023 · Probabilistic Dataset Reconstruction from Interpretable Models
Interpretability is often pointed out as a key requirement for trustwort...
