Evaluating the Correctness of Explainable AI Algorithms for Classification

by Orcun Yalcin, et al.

Explainable AI has attracted much research attention in recent years, with feature attribution algorithms, which compute "feature importance" in predictions, becoming increasingly popular. However, there is little analysis of the validity of these algorithms, as existing datasets contain no "ground truth" against which their correctness can be validated. In this work, we develop a method to quantitatively evaluate the correctness of XAI algorithms by creating datasets with known explanation ground truth. To this end, we focus on binary classification problems. String datasets are constructed using a formal language derived from a grammar. A string is positive if and only if a certain property is fulfilled. Symbols serving as explanation ground truth in a positive string are part of an explanation if and only if they contribute to fulfilling the property. Two popular feature attribution explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are used in our experiments. We show that: (1) classification accuracy is positively correlated with explanation accuracy; (2) SHAP provides more accurate explanations than LIME; (3) explanation accuracy is negatively correlated with dataset complexity.
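The construction described in the abstract can be illustrated with a minimal sketch. Here the grammar, the property (a string is positive iff it contains the substring "ab"), and the recall-style scoring function are all simplifying assumptions for illustration, not the paper's actual setup: the idea is that since the label-determining symbols are known by construction, an explainer's feature attributions can be scored against them directly.

```python
import random

ALPHABET = "abcd"

def label(s):
    # Assumed toy property: a string is positive iff it contains "ab".
    return "ab" in s

def ground_truth(s):
    # Explanation ground truth: indices of the symbols that contribute
    # to fulfilling the property (every position covered by an "ab").
    idx = set()
    for i in range(len(s) - 1):
        if s[i:i + 2] == "ab":
            idx.update((i, i + 1))
    return idx

def make_dataset(n, length, seed=0):
    # Random strings over the alphabet, labeled by the known property.
    rng = random.Random(seed)
    return [("".join(rng.choice(ALPHABET) for _ in range(length)),)
            for _ in range(n)] and \
           [(s, label(s)) for s in
            ("".join(rng.choice(ALPHABET) for _ in range(length))
             for _ in range(n))]

def explanation_accuracy(attributions, truth):
    # Score an explainer's per-position attribution scores against the
    # ground truth: fraction of ground-truth symbols among the top-k
    # attributed positions, with k = size of the ground truth.
    if not truth:
        return 1.0
    ranked = sorted(attributions, key=attributions.get, reverse=True)
    top = set(ranked[:len(truth)])
    return len(top & truth) / len(truth)
```

For example, `ground_truth("cabd")` yields `{1, 2}`, and an explainer that ranks exactly those positions highest receives an explanation accuracy of 1.0. In the paper's experiments, the attribution scores would come from LIME or SHAP applied to a trained classifier rather than being supplied by hand.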

