Why model why? Assessing the strengths and limitations of LIME

11/30/2020
by Jürgen Dieber, et al.

When it comes to complex machine learning models, commonly referred to as black boxes, understanding the underlying decision-making process is crucial in domains such as healthcare and financial services, and also when such models are used in connection with safety-critical systems such as autonomous vehicles. As a result, interest in explainable artificial intelligence (xAI) tools and techniques has increased in recent years. However, the effectiveness of existing xAI frameworks, especially for algorithms that work with tabular data as opposed to images, remains an open research question. To address this gap, in this paper we examine the effectiveness of the Local Interpretable Model-Agnostic Explanations (LIME) xAI framework, one of the most popular model-agnostic frameworks in the literature, with a specific focus on how well it makes tabular models more interpretable. In particular, we apply several state-of-the-art machine learning algorithms to a tabular dataset and demonstrate how LIME can be used to supplement conventional performance assessment methods. In addition, we evaluate the understandability of the output produced by LIME via a usability study involving participants who are not familiar with LIME, and its overall usability via an assessment framework derived from the ISO 9241-11:1998 standard.
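The core idea LIME applies to tabular models can be sketched without the `lime` package itself: perturb the instance of interest, query the black box on the perturbed samples, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The sketch below uses only scikit-learn and NumPy; the dataset, kernel width, and perturbation scheme are illustrative assumptions, not the authors' experimental setup (the real `lime` library provides `LimeTabularExplainer` with a more careful sampling and discretisation strategy).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a "black box" model on a tabular dataset (illustrative choice).
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(instance, model, X_train, n_samples=1000):
    """Fit a locally weighted linear surrogate around `instance` (LIME idea)."""
    rng = np.random.default_rng(0)
    scale = X_train.std(axis=0)
    # Perturb the instance with Gaussian noise scaled per feature.
    perturbed = instance + rng.normal(size=(n_samples, X_train.shape[1])) * scale
    # Query the black box for class-1 probabilities on the perturbed points.
    preds = model.predict_proba(perturbed)[:, 1]
    # Weight samples by an exponential kernel on their distance to the instance.
    distances = np.linalg.norm((perturbed - instance) / scale, axis=1)
    kernel_width = np.sqrt(X_train.shape[1]) * 0.75
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # The surrogate's coefficients are the local feature importances.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_style_explanation(X[0], black_box, X)
top_features = np.argsort(np.abs(coefs))[::-1][:3]
```

Ranking `coefs` by absolute value yields the kind of per-instance explanation the paper's usability study asked participants to interpret.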
