
Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test

06/02/2020
by Simon Kocbek, et al.

Machine Learning (ML) models are often complex and difficult to interpret because of their 'black-box' characteristics. The interpretability of an ML model is usually defined as the degree to which a human can understand the cause of a decision the model reaches. Interpretability is extremely important in many areas of healthcare, where decisions based on ML models carry high risk. Calibration of ML model outputs is another issue that is often overlooked when ML models are applied in practice. This paper presents early work examining the impact of prediction model calibration on the interpretability of the results. We present a use case of a patient in a diabetes screening prediction scenario and visualize the results using three different techniques to demonstrate the differences between a calibrated and an uncalibrated regularized regression model.
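The abstract contrasts a calibrated with an uncalibrated regularized regression model. As a minimal sketch of that comparison (not the authors' actual pipeline or data), one can fit an L2-regularized logistic regression with and without Platt-style sigmoid calibration via scikit-learn and compare the predicted probability for an individual case; the synthetic dataset and all names here are illustrative assumptions:

```python
# Hypothetical sketch: uncalibrated vs. sigmoid-calibrated L2-regularized
# logistic regression on synthetic data (a stand-in for screening records).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

# Synthetic binary-classification data standing in for screening features
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Uncalibrated regularized (ridge-penalised) logistic regression
uncal = LogisticRegression(penalty="l2", C=1.0).fit(X_tr, y_tr)

# Same model wrapped in sigmoid (Platt) calibration with internal CV
cal = CalibratedClassifierCV(
    LogisticRegression(penalty="l2", C=1.0), method="sigmoid", cv=5
).fit(X_tr, y_tr)

p_uncal = uncal.predict_proba(X_te)[:, 1]
p_cal = cal.predict_proba(X_te)[:, 1]

# For an individual patient the two probabilities can differ, which in turn
# changes any local explanation or visualization built on top of them.
print(f"patient 0: uncalibrated={p_uncal[0]:.3f}, calibrated={p_cal[0]:.3f}")
```

The design point mirrored here is the paper's: calibration rescales the model's output probabilities, so any local, probability-based explanation of a single patient's risk can shift even though the underlying regularized model is unchanged.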
