Explaining the Explainer: A First Theoretical Analysis of LIME

01/10/2020
by Damien Garreau, et al.

Machine learning is increasingly used in sensitive applications, sometimes replacing humans in critical decision-making processes. Interpretability of these algorithms is therefore a pressing need. One popular algorithm for providing interpretability is LIME (Local Interpretable Model-agnostic Explanations). In this paper, we provide the first theoretical analysis of LIME. We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear. The good news is that these coefficients are proportional to the gradient of the function to explain: LIME indeed discovers meaningful features. However, our analysis also reveals that poor choices of parameters can lead LIME to miss important features.
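To make the stated result concrete, the following is a minimal sketch of a LIME-style explanation of a linear function, under simplifying assumptions that are not the paper's exact setup: Gaussian perturbations centered at the point to explain, an exponential kernel on Euclidean distance with bandwidth nu, an unregularized weighted least-squares surrogate, and no binning into interpretable features. The names (f, xi, nu, beta) follow common LIME notation rather than the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5                                   # number of features
w = rng.normal(size=d)                  # gradient of the linear black box
b = 0.3
f = lambda X: X @ w + b                 # the linear function to explain

xi = rng.normal(size=d)                 # instance whose prediction we explain
n = 2000                                # number of perturbed samples
nu = 1.0                                # kernel bandwidth (a key LIME parameter)

# 1. Sample perturbations around xi and query the black box.
X = xi + rng.normal(size=(n, d))
y = f(X)

# 2. Weight each sample by its proximity to xi (exponential kernel).
weights = np.exp(-np.sum((X - xi) ** 2, axis=1) / (2 * nu ** 2))

# 3. Fit a weighted linear surrogate g(x) = beta_0 + beta^T x
#    via the square-root-weight trick and ordinary least squares.
A = np.hstack([np.ones((n, 1)), X])
sw = np.sqrt(weights)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

print("gradient of f:         ", np.round(w, 3))
print("surrogate coefficients:", np.round(beta[1:], 3))
# Because f is exactly linear and this sketch regresses on the raw features,
# the coefficients match w. The paper analyzes LIME's binned interpretable
# features, where the coefficients are instead proportional to the partial
# derivatives, with a factor depending on the bandwidth nu; a poor choice of
# nu can shrink a feature's coefficient toward zero and hide it.
```

This surrogate fit illustrates the mechanism behind the paper's claim, not its closed-form derivation, which accounts for LIME's actual sampling scheme and interpretable representation.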

