To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

06/01/2021
by Elvio G. Amparore et al.

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is no consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
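To make the "unstable explanations" defect concrete, the following is a minimal sketch of a LIME-style local linear explanation and a toy stability check. It is not the LEAF framework's API: the black-box function, sampling scheme, kernel width, and the Jaccard-based stability score are all illustrative assumptions.

```python
import numpy as np

def black_box(X):
    # Toy black-box classifier score over 4 features (an assumption for illustration,
    # not a model from the paper): a sigmoid dominated by features 0 and 1.
    return 1.0 / (1.0 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1] + 0.1 * X[:, 2] ** 2)))

def local_linear_explanation(x0, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around instance x0, LIME-style:
    sample perturbations, weight them by proximity, solve weighted least squares."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))
    y = black_box(X)
    # Proximity kernel: perturbations close to x0 count more.
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # per-feature weights; intercept dropped

def stability(x0, k=2, seeds=(0, 1)):
    """Toy stability metric: Jaccard overlap of the top-k features
    across two re-runs with different sampling seeds (1.0 = fully stable)."""
    tops = []
    for s in seeds:
        c = local_linear_explanation(x0, seed=s)
        tops.append(set(np.argsort(-np.abs(c))[:k]))
    return len(tops[0] & tops[1]) / len(tops[0] | tops[1])

x0 = np.array([0.5, -0.2, 1.0, 0.0])
print(stability(x0))
```

Re-running the explainer with different random seeds and comparing the selected top features is the intuition behind stability-style metrics; a score well below 1.0 signals that the explanation depends heavily on the sampling noise rather than on the model.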

Related research

11/11/2022
REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Explainable artificial intelligence is proposed to provide explanations ...

03/15/2022
Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Explainable Artificial Intelligence (XAI) is an emerging research field ...

04/30/2022
ExSum: From Local Explanations to Model Understanding
Interpretability methods are developed to understand the working mechani...

12/17/2022
Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design
Deep learning models for learning analytics have become increasingly pop...

08/06/2021
Interpretable Summaries of Black Box Incident Triaging with Subgroup Discovery
The need of predictive maintenance comes with an increasing number of in...

01/31/2023
A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Deep visual models have widespread applications in high-stake domains. H...

03/22/2021
Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
There has been a recent resurgence of interest in explainable artificial...
