
Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test

by   Simon Kocbek, et al.

Machine Learning (ML) models are often complex and difficult to interpret due to their 'black-box' characteristics. Interpretability of an ML model is usually defined as the degree to which a human can understand the cause of decisions reached by the model. Interpretability is of particularly high importance in many fields of healthcare because of the high levels of risk attached to decisions based on ML models. Calibration of ML model outputs is another issue often overlooked when ML models are applied in practice. This paper presents early work examining the impact of prediction model calibration on the interpretability of results. We present the use case of a patient in a diabetes screening prediction scenario and visualize results using three different techniques to demonstrate the differences between a calibrated and an uncalibrated regularized regression model.
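The comparison the abstract describes can be sketched with standard tooling. The following is a minimal illustration, not the authors' code: it fits an L2-regularized logistic regression with and without post-hoc sigmoid (Platt) calibration and compares the Brier score of the predicted probabilities. The synthetic dataset and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): calibrated vs.
# uncalibrated regularized regression, with synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

# Synthetic binary-outcome data standing in for screening records.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

# Uncalibrated L2-regularized logistic regression.
base = LogisticRegression(C=0.1, max_iter=1000).fit(X_tr, y_tr)

# The same model wrapped in cross-validated sigmoid (Platt) calibration.
calibrated = CalibratedClassifierCV(
    LogisticRegression(C=0.1, max_iter=1000),
    method="sigmoid", cv=5).fit(X_tr, y_tr)

p_raw = base.predict_proba(X_te)[:, 1]
p_cal = calibrated.predict_proba(X_te)[:, 1]

# Brier score: lower values indicate better-calibrated probabilities.
print(f"uncalibrated Brier: {brier_score_loss(y_te, p_raw):.4f}")
print(f"calibrated   Brier: {brier_score_loss(y_te, p_cal):.4f}")
```

The same probability vectors (`p_raw`, `p_cal`) could then be fed to local interpretability tools to inspect, for a single patient, how calibration shifts the predicted risk that an explanation is attached to.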
