Evaluating the Correctness of Explainable AI Algorithms for Classification

05/20/2021
by Orcun Yalcin, et al.

Explainable AI (XAI) has attracted much research attention in recent years, with feature attribution algorithms, which compute "feature importance" in predictions, becoming increasingly popular. However, the correctness of these algorithms has received little analysis, because existing datasets contain no explanation "ground truth" against which they can be validated. In this work, we develop a method to quantitatively evaluate the correctness of XAI algorithms by creating datasets with known explanation ground truth. To this end, we focus on binary classification problems. String datasets are constructed using a formal language derived from a grammar. A string is positive if and only if a certain property is fulfilled. Symbols serving as explanation ground truth in a positive string are part of an explanation if and only if they contribute to fulfilling the property. Two popular feature attribution explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are used in our experiments. We show that: (1) classification accuracy is positively correlated with explanation accuracy; (2) SHAP provides more accurate explanations than LIME; (3) explanation accuracy is negatively correlated with dataset complexity.
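To make the evaluation idea concrete, the sketch below builds a toy string dataset in which a string is positive if and only if it contains the bigram "ab", records the contributing positions as explanation ground truth, and scores an attribution vector (such as one produced by LIME or SHAP) by how many of its top-ranked positions fall in that ground truth. The property, alphabet, and scoring rule here are illustrative assumptions, not the grammar or metric used in the paper.

```python
# Hypothetical sketch: a toy string dataset with explanation ground truth,
# plus a simple accuracy metric for feature attributions.
# The "contains the bigram ab" property is an assumption for illustration only.
import random

ALPHABET = list("abcd")

def make_string(length=10):
    """Sample a random string over the toy alphabet."""
    return [random.choice(ALPHABET) for _ in range(length)]

def label_and_ground_truth(s):
    """Label a string positive iff it contains the bigram ('a', 'b');
    ground truth marks the positions of every contributing bigram."""
    gt = set()
    for i in range(len(s) - 1):
        if s[i] == "a" and s[i + 1] == "b":
            gt.update({i, i + 1})
    return (1 if gt else 0), gt

def explanation_accuracy(attributions, ground_truth):
    """Fraction of the top-k attributed positions (k = |ground truth|)
    that belong to the ground-truth explanation."""
    k = len(ground_truth)
    if k == 0:
        return None  # only positive strings have an explanation to score
    top_k = sorted(range(len(attributions)),
                   key=lambda i: attributions[i], reverse=True)[:k]
    return len(set(top_k) & ground_truth) / k

if __name__ == "__main__":
    s = list("cabdabcacd")
    label, gt = label_and_ground_truth(s)
    # In place of real LIME/SHAP output, use a made-up attribution vector.
    fake_attr = [0.1, 0.9, 0.8, 0.0, 0.7, 0.6, 0.1, 0.2, 0.0, 0.0]
    print(label, sorted(gt), explanation_accuracy(fake_attr, gt))
```

In this toy run the top-ranked positions coincide exactly with the ground-truth bigram positions, so the explanation accuracy is 1.0; a real experiment would average this score over many positive strings and attribution methods.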
