Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models

01/31/2020
by   Giorgio Visani, et al.

Nowadays we are witnessing a transformation of business processes towards a more computation-driven approach. The ever-increasing use of Machine Learning techniques is the clearest example of this trend. This revolution often brings advantages, such as higher prediction accuracy and a shorter time to obtain results. However, these methods have a major drawback: it is very difficult to understand on what grounds the algorithm made its decision. To address this issue we consider the LIME method. We give general background on LIME and then focus on the stability issue: applying the method repeatedly, under the same conditions, may yield different explanations. We propose two complementary indices to measure LIME's stability. It is important for practitioners to be aware of the issue, and to have a tool for detecting it. Stability guarantees that LIME explanations are reliable; therefore a stability assessment, made through the proposed indices, is crucial. As a case study, we apply both Machine Learning and classical statistical techniques to Credit Risk data. We test LIME on the Machine Learning algorithm, check its stability, and finally examine the goodness of the explanations returned.
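The instability the abstract describes can be observed directly: because LIME fits its local surrogate on randomly sampled perturbations, repeated runs on the same instance can rank features differently. The sketch below is not the paper's proposed indices (which the abstract does not define); it is a minimal illustration of one common way to quantify the phenomenon, using the mean pairwise Jaccard similarity of the top-k feature sets across repeated runs. The feature names and weights are hypothetical stand-ins for repeated LIME outputs.

```python
import itertools
import random

def top_k_features(explanation, k=3):
    """Return the set of the k features with the largest absolute weights.

    `explanation` is a list of (feature_name, weight) pairs, the shape
    LIME-style explainers typically return for a single instance."""
    ranked = sorted(explanation, key=lambda fw: abs(fw[1]), reverse=True)
    return {feature for feature, _ in ranked[:k]}

def jaccard_stability(explanations, k=3):
    """Mean pairwise Jaccard similarity of top-k feature sets across runs.

    1.0 means every run selected the same top-k features (perfectly
    stable); values near 0 mean the repeated runs disagree."""
    sets = [top_k_features(e, k) for e in explanations]
    pairs = list(itertools.combinations(sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Simulate 5 repeated LIME runs on the same instance: same features,
# slightly perturbed weights (mimicking LIME's sampling randomness).
random.seed(0)
base = [("income", 0.42), ("age", -0.31), ("debt_ratio", 0.25), ("tenure", 0.05)]
runs = [[(f, w + random.gauss(0, 0.05)) for f, w in base] for _ in range(5)]

print(round(jaccard_stability(runs, k=3), 2))
```

In practice one would feed this function the `as_list()` output of several LIME explanations of the same data point; a score well below 1 is the kind of unreliability a practitioner should detect before trusting a single explanation.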


