Crowdsourcing Evaluation of Saliency-based XAI Methods

06/27/2021
by Xiaotian Lu, et al.

Understanding the reasons behind the predictions made by deep neural networks is critical for gaining human trust in many important applications, as reflected in the growing demand for explainable AI (XAI) in recent years. Saliency-based feature attribution methods, which highlight the parts of an image that contribute most to a classifier's decision, are widely used as XAI methods, especially in computer vision. To compare saliency-based XAI methods quantitatively, several automated evaluation schemes have been proposed; however, there is no guarantee that such automated metrics correctly measure explainability, and a high score under an automated scheme does not necessarily mean high explainability for humans. In this study, instead of automated evaluation, we propose a new human-based evaluation scheme that uses crowdsourcing to evaluate XAI methods. Our method is inspired by the human computation game "Peek-a-boom" and can efficiently compare different XAI methods by exploiting the power of crowds. We evaluate the saliency maps of various XAI methods on two datasets with both automated and crowd-based evaluation schemes. Our experiments show that the results of our crowd-based evaluation differ from those of the automated schemes. In addition, by regarding the crowd-based results as ground truth, we provide a quantitative performance measure for comparing automated evaluation schemes. We also analyze the influence of individual crowd workers and show that their varying ability does not significantly affect the results.
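The abstract does not name the automated schemes it compares against, but a common example in the saliency literature is the deletion metric (Petsiuk et al., 2018), which removes the most salient pixels first and measures how quickly the classifier's confidence collapses. The sketch below is a minimal illustration under that assumption; `model_predict`, its HxWxC input layout, and the step count are hypothetical choices, not details from the paper.

```python
import numpy as np

def deletion_auc(model_predict, image, saliency, target_class, steps=50):
    """Deletion metric sketch: progressively zero out the most salient
    pixels and track the target-class probability. A faster drop (lower
    AUC) suggests a more faithful saliency map.

    model_predict: hypothetical callable, HxWxC image -> class probabilities
    saliency:      HxW importance map produced by an XAI method
    """
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]       # most salient first
    per_step = int(np.ceil(h * w / steps))
    img = image.copy()
    scores = [model_predict(img)[target_class]]
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        img.reshape(-1, img.shape[-1])[idx] = 0.0    # "delete" these pixels
        scores.append(model_predict(img)[target_class])
    return np.trapz(scores, dx=1.0 / steps)          # AUC over [0, 1]
```

Similarly, the abstract only says the crowd task is inspired by Peek-a-boom; one plausible reading is that workers see an image with only its most salient region revealed and try to guess the class, so that a saliency map scores higher the less area workers need to answer correctly. The helper below sketches such a stimulus under that reading; the quantile thresholding and gray fill value are assumptions.

```python
def reveal_mask(image, saliency, fraction):
    """Peek-a-boom-style stimulus (assumed design): reveal only the top
    `fraction` most salient pixels and gray out the rest."""
    threshold = np.quantile(saliency, 1.0 - fraction)
    mask = saliency >= threshold                     # HxW boolean mask
    revealed = np.full_like(image, 127)              # neutral gray background
    revealed[mask] = image[mask]                     # show salient pixels only
    return revealed
```

A full pipeline along these lines would aggregate many workers' guesses per image, method, and reveal fraction, then compare methods by how little revealed area suffices for correct recognition.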


Related research

06/24/2021
Evaluation of Saliency-based Explainability Method
A particular class of Explainable AI (XAI) methods provide saliency maps...

07/29/2013
Herding the Crowd: Automated Planning for Crowdsourced Planning
There has been significant interest in crowdsourcing and human computati...

12/06/2021
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
A multitude of explainability methods and theoretical evaluation scores ...

05/25/2023
An Experimental Investigation into the Evaluation of Explainability Methods
EXplainable Artificial Intelligence (XAI) aims to help users to grasp th...

06/06/2019
Segment Integrated Gradients: Better attributions through regions
Saliency methods can aid understanding of deep neural networks. Recent y...

05/20/2022
Towards Better Understanding Attribution Methods
Deep neural networks are very successful on many vision tasks, but hard ...

03/21/2023
Better Understanding Differences in Attribution Methods via Systematic Evaluations
Deep neural networks are very successful on many vision tasks, but hard ...
