An Analysis of LIME for Text Data

10/23/2020
by Dina Mardaoui et al.

Text data are increasingly handled in an automated fashion by machine learning algorithms. However, the models processing these data are often too complex to be well understood and are increasingly referred to as "black boxes." Interpretability methods aim to explain how these models operate. Among them, LIME has become one of the most popular in recent years, yet it comes without theoretical guarantees: even for simple models, we cannot be sure that LIME behaves as intended. In this paper, we provide a first theoretical analysis of LIME for text data. As a consequence of our theoretical findings, we show that LIME does provide meaningful explanations for simple models, namely decision trees and linear models.
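As a rough illustration of the setting studied in the paper, the sketch below applies LIME to a simple linear text classifier using the `lime` Python package together with scikit-learn. The toy corpus, class names, and pipeline are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: explaining a linear text classifier with LIME.
# Assumes the `lime` and scikit-learn packages; the toy corpus below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy training data (illustrative only).
texts = ["great movie, loved it", "terrible plot, boring",
         "wonderful acting", "awful and dull"]
labels = [1, 0, 1, 0]

# A simple linear model: TF-IDF features + logistic regression.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the document by removing words, queries the black-box
# probabilities on the perturbed samples, and fits a local linear surrogate,
# yielding per-word importance weights.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "boring plot but wonderful acting",
    pipeline.predict_proba,   # classifier_fn: list of texts -> class probabilities
    num_features=4,
)
print(explanation.as_list())  # [(word, weight), ...]
```

The weights returned by `as_list()` are the interpretable coefficients LIME attributes to each word; the paper's analysis asks how these coefficients relate to the underlying model when that model is itself simple (linear or a decision tree).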

Related research

Understanding Post-hoc Explainers: The Case of Anchors (03/15/2023)
Explaining the Explainer: A First Theoretical Analysis of LIME (01/10/2020)
Looking deeper into LIME (08/25/2020)
Comparing Feature Importance and Rule Extraction for Interpretability on Text Data (07/04/2022)
A Sea of Words: An In-Depth Analysis of Anchors for Text Data (05/27/2022)
Handling Missing Data in Decision Trees: A Probabilistic Approach (06/29/2020)
Interpretable Random Forests via Rule Extraction (04/29/2020)
