SIBILA: High-performance computing and interpretable machine learning join efforts toward personalised medicine in a novel decision-making tool

by Antonio-Jesús Banegas-Luna et al.

Background and Objectives: Personalised medicine remains a major challenge for scientists. The rapid growth of machine learning and deep learning has made them a feasible alternative for predicting the most appropriate therapy for individual patients. However, the lack of interpretability of their results and their high computational requirements make many clinicians reluctant to adopt these methods.

Methods: Several machine learning and deep learning models have been implemented in a single software tool, SIBILA. Once the models are trained, SIBILA applies a range of interpretability methods to identify the input features that each model considered most important for its predictions. The features reported by the individual models are then aggregated to estimate the global attribution of each variable to the predictions. To facilitate its use by non-experts, SIBILA is also available to all users free of charge as a web server.

Results: SIBILA has been applied to three case studies to demonstrate its accuracy and efficiency on classification and regression problems. The first two cases show that SIBILA can make accurate predictions even on uncleaned datasets. The third case demonstrates that SIBILA can be applied in medical contexts with real data.

Conclusion: SIBILA is a novel software tool, developed to become a powerful decision-making aid for clinicians, that leverages interpretable machine learning to make accurate predictions and explain how the models reached those decisions. SIBILA can be run on high-performance computing platforms, drastically reducing computing times.
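The aggregation step described in the Methods, combining the per-model feature importances into a single global attribution, can be illustrated with a minimal sketch. This is not SIBILA's actual implementation; the function name `global_attribution`, the normalise-then-average strategy, and the example model names and scores are all illustrative assumptions.

```python
import numpy as np

def global_attribution(attributions):
    """Combine per-model feature attributions into one global score per feature.

    `attributions` maps a model name to a 1-D array of per-feature importance
    scores (e.g. mean absolute SHAP values). Each model's scores are normalised
    to sum to 1 so no single model dominates, then averaged across models.
    (Hypothetical aggregation scheme, for illustration only.)
    """
    normalised = []
    for scores in attributions.values():
        scores = np.abs(np.asarray(scores, dtype=float))
        total = scores.sum()
        normalised.append(scores / total if total > 0 else scores)
    return np.mean(normalised, axis=0)

# Example: three trained models scoring the same four input features
per_model = {
    "random_forest": [0.50, 0.30, 0.15, 0.05],
    "svm":           [0.40, 0.40, 0.10, 0.10],
    "neural_net":    [0.60, 0.20, 0.10, 0.10],
}
combined = global_attribution(per_model)
print(combined)  # feature 0 carries the largest global attribution
```

Normalising before averaging keeps models whose raw attribution scales differ (e.g. tree-based importances vs. gradient-based saliencies) from dominating the consensus ranking.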




