Quantitative Evaluations on Saliency Methods: An Experimental Study

12/31/2020
by Xiao-Hui Li, et al.

eXplainable AI (XAI) has long been debated as an important topic, yet it still lacks rigorous definitions and fair evaluation metrics. In this paper, we briefly summarize the status quo of these metrics and conduct an exhaustive experimental study based on them, covering faithfulness, localization, false positives, sensitivity checks, and stability. From the experimental results, we conclude that no single explanation method among those we compare dominates the others on all metrics. Nonetheless, Gradient-weighted Class Activation Mapping (Grad-CAM) and Randomized Input Sampling for Explanation (RISE) perform fairly well on most of them. Using a filtered set of metrics, we further present a case study that diagnoses the classification bases of models. Alongside this comprehensive experimental study, we also examine measuring factors that current metrics miss, and we hope this work can serve as a guide for future research.
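Among these metrics, faithfulness is commonly assessed by perturbing the input in the order suggested by the saliency map and observing how the model's confidence degrades. The sketch below illustrates one such deletion-style check; it assumes a PyTorch image classifier, and the names (`model`, `image`, `saliency`, `deletion_score`) are illustrative rather than the paper's exact protocol.

```python
import torch

def deletion_score(model, image, saliency, target_class, steps=50):
    """Occlude pixels in order of decreasing saliency and track how the
    target-class probability drops; the mean probability approximates the
    area under the deletion curve (lower = more faithful saliency map)."""
    model.eval()
    channels = image.shape[0]                                # image: (C, H, W)
    occluded = image.detach().clone().contiguous()
    flat = occluded.view(channels, -1)                       # shares storage with `occluded`
    order = saliency.flatten().argsort(descending=True)      # most salient pixels first
    per_step = max(1, order.numel() // steps)
    probs = []
    with torch.no_grad():
        for i in range(steps + 1):
            prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            probs.append(prob.item())
            idx = order[i * per_step:(i + 1) * per_step]     # next batch of pixels to erase
            flat[:, idx] = 0.0                               # zero-value occlusion baseline
    return sum(probs) / len(probs)
```

A higher-quality saliency map should make the target probability collapse sooner, yielding a smaller area under the deletion curve.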


Related research

- Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI (07/31/2023)
- Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? (07/01/2021)
- Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study (05/13/2022)
- An Experimental Investigation into the Evaluation of Explainability Methods (05/25/2023)
- Metrics for saliency map evaluation of deep learning explanation methods (01/31/2022)
- Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter (10/13/2020)
