OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms

06/10/2020
by Giorgio Visani et al.

Local Interpretable Model-Agnostic Explanations (LIME) is a popular method for interpreting any kind of Machine Learning (ML) model. It explains one ML prediction at a time by learning a simple linear model around the prediction. The surrogate is trained on randomly generated data points, sampled from the training dataset distribution and weighted according to their distance from the reference point, i.e. the one being explained by LIME. Feature selection is applied to keep only the most important variables. LIME is widespread across different domains, although its instability, whereby a single prediction may obtain different explanations, is one of its major shortcomings. The instability stems from the randomness of the sampling step as well as from the flexibility in tuning the weights; it results in a lack of reliability in the retrieved explanations, making LIME adoption problematic. In Medicine especially, clinical professionals' trust is mandatory for the acceptance of an explainable algorithm, given the importance of the decisions at stake and the related legal issues. In this paper, we highlight a trade-off between the explanation's stability and its adherence, namely how closely it resembles the ML model. Exploiting this finding, we propose a framework to maximise stability while retaining a predefined level of adherence. OptiLIME provides the freedom to choose the best adherence-stability trade-off and, more importantly, it clearly highlights the mathematical properties of the retrieved explanation. As a result, the practitioner is given the tools to decide whether the explanation is reliable for the problem at hand. We extensively test OptiLIME on a toy dataset, to present the geometrical findings visually, and on a medical dataset. On the latter, we show how the method produces explanations that are meaningful from both a medical and a mathematical standpoint.
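To make the mechanics concrete, below is a minimal Python sketch of a LIME-style local surrogate and an OptiLIME-style search for the kernel width. The function names, the Gaussian perturbation scheme, the ridge surrogate, the use of weighted R² as the adherence measure, the 0.9 default target and the bisection search are illustrative assumptions rather than the authors' implementation; the feature-selection step is omitted for brevity.

```python
# Illustrative sketch only: names, sampling scheme and thresholds are assumptions,
# not the OptiLIME authors' code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score


def lime_explain(predict_fn, x_ref, X_train, kernel_width, n_samples=1000, rng=None):
    """Fit a weighted linear surrogate around x_ref.

    Returns the surrogate coefficients and its adherence (weighted R^2)
    to the black-box model on the sampled neighbourhood.
    """
    rng = np.random.default_rng(rng)
    # Perturb the reference point using each feature's spread in the training data.
    scale = X_train.std(axis=0) + 1e-12
    Z = x_ref + rng.normal(size=(n_samples, x_ref.shape[0])) * scale
    y = predict_fn(Z)                                  # black-box predictions (or class probabilities)
    d = np.linalg.norm((Z - x_ref) / scale, axis=1)    # distance from the reference point
    w = np.exp(-(d ** 2) / (kernel_width ** 2))        # exponential kernel weights
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    adherence = r2_score(y, surrogate.predict(Z), sample_weight=w)
    return surrogate.coef_, adherence


def optilime_width(predict_fn, x_ref, X_train, target_adherence=0.9,
                   lo=0.05, hi=5.0, n_iter=20):
    """Bisection search for the kernel width: narrower kernels raise adherence but
    make the explanation less stable, so we keep the widest kernel that still
    meets the adherence target (the stability/adherence trade-off)."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        _, adh = lime_explain(predict_fn, x_ref, X_train, kernel_width=mid)
        if adh >= target_adherence:
            lo = mid   # target met: try a wider, more stable kernel
        else:
            hi = mid   # adherence too low: shrink the kernel
    return lo
```

Stability could then be gauged by re-running lime_explain at the chosen width with several random seeds and checking how much the returned coefficients, and the features they would select, vary across runs; the width returned by the search is the widest, and thus most stable, one that still meets the requested adherence.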


