Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction

06/17/2021
by Colleen E. Charlton, et al.

Prediction of survival in patients diagnosed with a brain tumour is challenging because of heterogeneous tumour behaviours and responses to treatment. Better estimates of prognosis would support treatment planning and patient support. Advances in machine learning have informed the development of clinical predictive models, but their integration into clinical practice is almost non-existent. One reason for this is the lack of interpretability of the models. In this paper, we use a novel brain tumour dataset to compare two interpretable rule list models against popular machine learning approaches for brain tumour survival prediction. All models are quantitatively evaluated using standard performance metrics. The rule lists are also qualitatively assessed for their interpretability and clinical utility. The interpretability of the black box machine learning models is evaluated using two post-hoc explanation techniques, LIME and SHAP. Our results show that the rule lists were only slightly outperformed by the black box models, and that the rule list algorithms produced simple decision lists aligned with clinical expertise. By comparison, post-hoc interpretability methods applied to black box models may produce unreliable explanations of local model predictions. Model interpretability is essential for understanding differences in predictive performance and for integration into clinical practice.
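
The abstract names LIME and SHAP as the post-hoc techniques used to probe the black box models. As a rough sketch only (not the authors' code: the brain tumour dataset is not public, so scikit-learn's breast cancer data and a random forest stand in for the survival task), applying both techniques to a single test prediction might look like this:

```python
# Illustrative sketch with stand-in data and model, not the paper's pipeline.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# The "black box": an ensemble model with no directly readable structure.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: additive feature attributions for one test instance
# (tree-based classifiers yield per-class attributions).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: fit a local linear surrogate around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top (feature, weight) pairs for this prediction
```

Explanations produced this way are local approximations and can vary between runs, which is the comparison point the paper draws against the globally readable structure of a rule list.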
