SIBILA: High-performance computing and interpretable machine learning join efforts toward personalised medicine in a novel decision-making tool

Background and Objectives: Personalised medicine remains a major challenge for scientists. The rapid growth of machine learning and deep learning has made them feasible alternatives for predicting the most appropriate therapy for individual patients. However, the lack of interpretability of their results and their high computational requirements make many clinicians reluctant to use these methods.

Methods: Several machine learning and deep learning models have been implemented in a single software tool, SIBILA. Once the models are trained, SIBILA applies a range of interpretability methods to identify the input features that each model considered most important for its predictions. In addition, the attributions obtained from all models are consolidated to estimate the global contribution of each variable to the predictions. To facilitate its use by non-experts, SIBILA is also freely available to all users as a web server at https://bio-hpc.ucam.edu/sibila/.

Results: SIBILA has been applied to three case studies to show its accuracy and efficiency on classification and regression problems. The first two cases showed that SIBILA makes accurate predictions even on uncleaned datasets. The last case demonstrated that SIBILA can be applied to medical contexts with real data.

Conclusion: SIBILA is a novel software tool that leverages interpretable machine learning to make accurate predictions and explain how the models reach their decisions, with the aim of becoming a powerful decision-making tool for clinicians. SIBILA can be run on high-performance computing platforms, drastically reducing computing times.
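The abstract does not specify how the per-model attributions are consolidated into a global ranking. The sketch below illustrates one simple way such a consensus could work: averaging each feature's normalised absolute attribution across models. All names here (consensus_attribution, the model and feature labels) are hypothetical illustrations, not SIBILA's actual API.

```python
def consensus_attribution(per_model_attributions):
    """Average each feature's normalised absolute attribution across models.

    per_model_attributions maps model name -> {feature name: attribution score}.
    Returns a dict of global scores per feature, sorted in descending order.
    """
    totals = {}
    for scores in per_model_attributions.values():
        # Normalise within each model so no single model dominates the consensus.
        norm = sum(abs(v) for v in scores.values()) or 1.0
        for feature, value in scores.items():
            totals[feature] = totals.get(feature, 0.0) + abs(value) / norm
    n_models = len(per_model_attributions)
    averaged = {f: total / n_models for f, total in totals.items()}
    return dict(sorted(averaged.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    # Toy attributions from two hypothetical trained models.
    attributions = {
        "random_forest": {"age": 0.40, "bmi": 0.35, "glucose": 0.25},
        "neural_net":    {"age": 0.50, "bmi": 0.10, "glucose": 0.40},
    }
    for feature, score in consensus_attribution(attributions).items():
        print(f"{feature}: {score:.3f}")
```

Normalising within each model before averaging is one reasonable design choice here: attribution methods operate on different scales, so without it a single model could dominate the global estimate.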


