Quantifying Explainability of Saliency Methods in Deep Neural Networks

09/07/2020
by Erico Tjoa, et al.

One way to achieve eXplainable Artificial Intelligence (XAI) is through post-hoc analysis methods. In particular, methods that generate heatmaps have been used to explain black-box models such as deep neural networks. In some cases, heatmaps are appealing because they can be understood intuitively and visually. However, quantitative analyses that demonstrate the actual effectiveness of heatmaps have been lacking, and comparisons between different methods are not standardized. In this paper, we introduce a synthetic dataset that can be generated ad hoc, together with ground-truth heatmaps, for better quantitative assessment. Each sample is an image of a cell with easily distinguishable features, facilitating a more transparent assessment of different XAI methods. We compare methods and make recommendations, clarify their shortcomings, and suggest future research directions for handling the finer details of selected post-hoc analysis methods.
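To make the evaluation idea concrete, below is a minimal Python sketch of the kind of pipeline the abstract describes: generate a synthetic "cell" image together with a ground-truth heatmap, then score a candidate saliency map against that ground truth. The generator, the circular cell shape, and the IoU score here are illustrative assumptions, not the paper's actual data generator or evaluation metric.

```python
import numpy as np

def make_cell_sample(size=64, radius=12, noise=0.1, seed=None):
    """Generate a toy 'cell' image and its ground-truth heatmap.

    Illustrative only: the paper's generator and cell features may
    differ. The ground truth marks the discriminative region (here,
    the cell body) that an ideal saliency map should highlight.
    """
    rng = np.random.default_rng(seed)
    cy, cx = rng.integers(radius, size - radius, size=2)
    yy, xx = np.mgrid[:size, :size]
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(float)
    image = mask + noise * rng.standard_normal((size, size))
    return image, mask  # the mask doubles as the ground-truth heatmap

def heatmap_iou(saliency, ground_truth, threshold=0.5):
    """Score a saliency map against the ground truth with a simple IoU.

    One possible quantitative measure; the paper's exact metrics
    may differ.
    """
    pred = saliency >= threshold
    gt = ground_truth >= 0.5
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

# Example: a 'perfect' explanation scores 1.0 by construction.
image, gt = make_cell_sample(seed=0)
print(heatmap_iou(gt, gt))  # -> 1.0
```

Because the ground-truth heatmap is known by construction, any saliency method run on a model trained on such data can be scored directly, which is what makes this setup useful for standardized comparison.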


Related research

08/15/2020 · Explainability in Deep Reinforcement Learning
A large set of the explainable Artificial Intelligence (XAI) literature ...

06/24/2021 · Towards Fully Interpretable Deep Neural Networks: Are We There Yet?
Despite the remarkable performance, Deep Neural Networks (DNNs) behave a...

06/24/2021 · Evaluation of Saliency-based Explainability Method
A particular class of Explainable AI (XAI) methods provide saliency maps...

10/16/2019 · Explaining with Impact: A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
There has been a significant surge of interest recently around the conce...

10/16/2019 · Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
There has been a significant surge of interest recently around the conce...

03/21/2023 · Better Understanding Differences in Attribution Methods via Systematic Evaluations
Deep neural networks are very successful on many vision tasks, but hard ...

04/05/2023 · How good Neural Networks interpretation methods really are? A quantitative benchmark
Saliency Maps (SMs) have been extensively used to interpret deep learnin...
