Evaluation of Local Model-Agnostic Explanations Using Ground Truth

06/04/2021
by   Amir Hossein Akhavan Rahnama, et al.

Explanation techniques are commonly evaluated using human-grounded methods, limiting the possibilities for large-scale evaluation and rapid progress in the development of new techniques. We propose a functionally-grounded evaluation procedure for local model-agnostic explanation techniques. In our approach, we generate ground-truth explanations when the black-box model is Logistic Regression or Gaussian Naive Bayes, and measure how similar each explanation is to this extracted ground truth. In our empirical study, we compare the explanations produced by Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Local Permutation Importance (LPI) against the extracted ground truth. For Logistic Regression, we find that the performance of the explanation techniques depends heavily on whether the data is normalized. In contrast, on Naive Bayes, Local Permutation Importance outperforms the other techniques irrespective of normalization. We hope this work lays the foundation for further research into functionally-grounded evaluation methods for explanation techniques.
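The abstract does not spell out how the ground truth is extracted or how similarity is scored. A minimal sketch of one plausible reading for the Logistic Regression case, assuming hypothetical trained weights and a hypothetical explainer output: the per-feature contribution to the log-odds, w_i * x_i, serves as the local ground-truth importance, and an explanation is scored by Spearman rank correlation of importance magnitudes against it.

```python
# Hypothetical sketch (not the paper's exact procedure): for a logistic
# regression model with weights w, a natural ground-truth local importance of
# feature i at instance x is its log-odds contribution w_i * x_i. Any local
# explanation vector can then be scored by rank agreement with this baseline.

def ground_truth_importance(weights, x):
    # Per-feature contribution to the log-odds of a logistic regression model.
    return [w * xi for w, xi in zip(weights, x)]

def rank_by_magnitude(values):
    # Rank positions (0 = largest absolute importance), ties broken by index.
    order = sorted(range(len(values)), key=lambda i: -abs(values[i]))
    ranks = [0] * len(values)
    for position, i in enumerate(order):
        ranks[i] = position
    return ranks

def spearman(a, b):
    # Spearman rank correlation between two importance vectors (no ties).
    ra, rb = rank_by_magnitude(a), rank_by_magnitude(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

weights = [2.0, -0.5, 0.1]          # hypothetical trained coefficients
x = [1.0, 3.0, 0.5]                 # instance to explain
gt = ground_truth_importance(weights, x)
explanation = [1.8, -1.2, 0.2]      # hypothetical output of an explainer
print(spearman(gt, explanation))    # 1.0 when the importance rankings agree
```

The same scoring loop would apply unchanged to LIME, SHAP, or LPI output vectors; only the ground-truth extraction step is model-specific (a Gaussian Naive Bayes model would need its own per-feature log-likelihood-ratio contributions).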

