
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark

by Mohamed Karim Belaid, et al.

In recent years, Explainable AI (xAI) has attracted considerable attention as various countries turned explanations into a legal right. xAI allows for improving models beyond the accuracy metric by, e.g., debugging the learned pattern and demystifying the AI's behavior. The widespread use of xAI has brought new challenges. On the one hand, the number of published xAI algorithms has boomed, and it has become difficult for practitioners to select the right tool. On the other hand, some experiments have highlighted how easily data scientists can misuse xAI algorithms and misinterpret their results. To tackle the issue of comparing and correctly using feature importance xAI algorithms, we propose Compare-xAI, a benchmark that unifies all exclusive and unitary evaluation methods applied to xAI algorithms. We propose a selection protocol to shortlist non-redundant unit tests from the literature, i.e., each targeting a specific problem in explaining a model. The benchmark encapsulates the complexity of evaluating xAI methods into a hierarchical scoring with three levels, each targeting one of three end-user groups: researchers, practitioners, and laymen in xAI. The most detailed level provides one score per unit test. The second level regroups tests into five categories (fidelity, fragility, stability, simplicity, and stress tests). The last level is the aggregated comprehensibility score, which encapsulates the ease of correctly interpreting the algorithm's output in one easy-to-compare value. Compare-xAI's interactive user interface helps mitigate errors in interpreting xAI results by quickly listing the recommended xAI solutions for each ML task and their current limitations. The benchmark is made available at
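The three-level scoring described above can be sketched as follows. This is an illustrative outline only, not the Compare-xAI implementation: the test names, the example scores, and the simple-mean aggregation at each level are assumptions made for the sake of the example.

```python
# Illustrative sketch of hierarchical score aggregation:
# level 1 = one score per unit test, level 2 = one score per category,
# level 3 = a single aggregated comprehensibility score.
from statistics import mean

# Level 1: one score in [0, 1] per unit test, grouped into the five
# categories named in the abstract (test names are hypothetical).
unit_test_scores = {
    "fidelity":   {"axiom_completeness": 0.9, "ground_truth_match": 0.8},
    "fragility":  {"adversarial_perturbation": 0.7},
    "stability":  {"seed_variance": 1.0},
    "simplicity": {"sparse_explanation": 0.6},
    "stress":     {"high_dimensional_input": 0.5},
}

def category_scores(scores):
    """Level 2: average the unit-test scores within each category."""
    return {cat: mean(tests.values()) for cat, tests in scores.items()}

def comprehensibility_score(scores):
    """Level 3: aggregate the five category scores into one value."""
    return mean(category_scores(scores).values())

cats = category_scores(unit_test_scores)
print(round(cats["fidelity"], 2))                          # 0.85
print(round(comprehensibility_score(unit_test_scores), 2)) # 0.73
```

In practice, the aggregation could weight categories differently per end-user group; the equal-weight mean here is only the simplest choice.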
