Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study

02/03/2020
by Ahmed Alqaraawi, et al.

Convolutional neural networks (CNNs) offer great machine learning performance over a range of applications, but their operation is hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet limited research effort has been reported concerning their user evaluation. In this paper, we report on an online between-group user study designed to evaluate the performance of "saliency maps" - a popular explanation algorithm for image classification applications of CNNs. Our results indicate that saliency maps produced by the LRP algorithm helped participants to learn about some specific image features the system is sensitive to. However, the maps seem to provide very limited help for participants to anticipate the network's output for new images. Drawing on our findings, we highlight implications for design and further research on explainable AI. In particular, we argue the HCI and AI communities should look beyond instance-level explanations.
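To make the notion of a "saliency map" concrete, the sketch below computes a simple gradient-based saliency map for a CNN image classifier in PyTorch. Note that this is not the LRP (layer-wise relevance propagation) algorithm evaluated in the paper; it is a minimal, commonly used baseline shown only for illustration. The pretrained model, preprocessing pipeline, and input file name are assumptions.

```python
# Minimal sketch: vanilla gradient saliency for a CNN classifier.
# NOT the LRP method studied in the paper; shown only to illustrate
# what a per-pixel saliency map is. Model and image file are assumed.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")   # hypothetical input image
x = preprocess(image).unsqueeze(0)
x.requires_grad_(True)

scores = model(x)                          # class logits, shape (1, 1000)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()            # d(top-class score) / d(pixels)

# Saliency: largest absolute gradient across colour channels, per pixel.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)   # shape (224, 224)
```

The resulting `saliency` tensor can be rendered as a heatmap over the input image; brighter pixels are those to which the top-class score is most sensitive, which is the kind of visualization participants in the study were asked to interpret.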

Related research

05/17/2022 · A psychological theory of explainability
The goal of explainable Artificial Intelligence (XAI) is to generate hum...

09/29/2020 · Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
Convolutional neural networks (CNNs) are commonly used for image classif...

02/02/2023 · Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses
Many visualizations have been developed for explainable AI (XAI), but th...

01/31/2022 · Metrics for saliency map evaluation of deep learning explanation methods
Due to the black-box nature of deep learning models, there is a recent d...

07/11/2021 · One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images
Being able to explain the prediction to clinical end-users is a necessit...

06/16/2021 · Explainable AI for Natural Adversarial Images
Adversarial images highlight how vulnerable modern image classifiers are...

05/18/2018 · A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
Backpropagation-based visualizations have been proposed to interpret con...
