DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems

06/24/2019
by Muhammad Rehman Zafar, et al.

Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically generates an explanation for a single prediction of any ML model by fitting a simpler interpretable model (e.g., a linear classifier) around that prediction: it simulates data around the instance via random perturbation and obtains feature importances by applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation and feature selection methods result in "instability" in the generated explanations, where different explanations can be generated for the same prediction. This is a critical issue that can prevent deployment of LIME in a Computer-Aided Diagnosis (CAD) system, where stability is of utmost importance to earn the trust of medical professionals. In this paper, we propose a deterministic version of LIME. Instead of random perturbation, we utilize agglomerative Hierarchical Clustering (HC) to group the training data and K-Nearest Neighbour (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a linear model is trained over the selected cluster to generate the explanations. Experimental results on three different medical datasets demonstrate the superiority of the proposed Deterministic Local Interpretable Model-Agnostic Explanations (DLIME): we quantitatively assess the stability of DLIME compared to LIME using the Jaccard similarity among multiple generated explanations.

