Evaluation of Saliency-based Explainability Method

A particular class of Explainable AI (XAI) methods provides saliency maps that highlight the parts of an image a Convolutional Neural Network (CNN) relies on when classifying it, as a way to explain the model's behavior. These methods give users an intuitive way to understand the predictions made by CNNs. Yet, beyond quantitative computational tests, the evidence that these methods are valuable is largely anecdotal. Since humans are the intended end-users of such methods, we devise three human-subject experiments to gauge the effectiveness of these saliency-based explainability methods.
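To make concrete the kind of explanation these methods produce, below is a minimal sketch of a vanilla gradient saliency map for a CNN classifier. The model (ResNet-18), preprocessing, and image path are illustrative assumptions, not the specific saliency methods or experimental setup evaluated in the paper.

```python
import torch
from PIL import Image
from torchvision import models, transforms as T

# Illustrative model choice; the paper's evaluated methods and models may differ.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder input image.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Saliency map: maximum absolute gradient across the color channels.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```

A map like `saliency` is typically visualized as a heatmap overlaid on the input image, which is the kind of explanation the human-subject experiments assess.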

Related research

11/25/2022 · Testing the effectiveness of saliency-based explainability in NLP using randomized survey-based experiments
As the applications of Natural Language Processing (NLP) in sensitive ar...

06/27/2021 · Crowdsourcing Evaluation of Saliency-based XAI Methods
Understanding the reasons behind the predictions made by deep neural net...

09/20/2023 · Signature Activation: A Sparse Signal View for Holistic Saliency
The adoption of machine learning in healthcare calls for model transpare...

06/16/2021 · Explainable AI for Natural Adversarial Images
Adversarial images highlight how vulnerable modern image classifiers are...

08/03/2021 · Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Conventional saliency maps highlight input features to which neural netw...

09/07/2020 · Quantifying Explainability of Saliency Methods in Deep Neural Networks
One way to achieve eXplainable artificial intelligence (XAI) is through ...

07/21/2022 · Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation
Analyzing vibration data using deep neural network algorithms is an effe...
