
Quantitative Evaluations on Saliency Methods: An Experimental Study

12/31/2020
by Xiao-Hui Li, et al.

eXplainable AI (XAI) has long been recognized as an important topic, yet it still lacks rigorous definitions and fair evaluation metrics. In this paper, we briefly summarize the status quo of such metrics and present an exhaustive experimental study based on them, covering faithfulness, localization, false positives, sensitivity checks, and stability. Based on the experimental results, we conclude that no single explanation method among those we compare dominates the others across all metrics. Nonetheless, Gradient-weighted Class Activation Mapping (Grad-CAM) and Randomized Input Sampling for Explanation (RISE) perform well on most of them. Using a filtered set of metrics, we further present a case study that diagnoses the classification bases of models. Beyond the experimental study itself, we also discuss measurement factors that current metrics miss, and we hope this work can serve as a guide for future research.
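As an illustration of what a faithfulness metric measures, the sketch below implements a generic deletion-style check in PyTorch: pixels are erased in order of decreasing saliency while the target-class probability is monitored, and a faster drop (smaller area under the curve) suggests a more faithful saliency map. The helper name deletion_score, the step count, and the zero baseline are assumptions for this example; the paper's own metric definitions may differ.

# Illustrative sketch of a deletion-style faithfulness check (hypothetical
# helper, not the authors' evaluation code; metric details vary by paper).
import numpy as np
import torch

def deletion_score(model, image, saliency, target_class, steps=50, baseline=0.0):
    """Progressively erase the most salient pixels and track the target-class
    probability.

    image:    tensor of shape 1 x 3 x H x W
    saliency: tensor of shape H x W
    """
    h, w = saliency.shape
    order = saliency.flatten().argsort(descending=True)   # most salient first
    per_step = -(-h * w // steps)                          # ceiling division
    perturbed = image.clone()
    probs = []
    with torch.no_grad():
        for i in range(steps + 1):
            p = torch.softmax(model(perturbed), dim=1)[0, target_class].item()
            probs.append(p)
            idx = order[i * per_step:(i + 1) * per_step]   # next chunk to erase
            perturbed[0, :, idx // w, idx % w] = baseline
    return float(np.trapz(probs, dx=1.0 / steps))          # area under the curve

A lower score indicates that the probability collapses quickly once the pixels marked as important are removed, which is the behavior a faithful saliency map should exhibit.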

