Towards Ground Truth Evaluation of Visual Explanations

03/16/2020
by Ahmed Osman, et al.

Several methods have been proposed to explain the decisions of neural networks in the visual domain via saliency heatmaps (aka relevances/feature importance scores). Thus far, these methods were mainly validated on real-world images, using either pixel perturbation experiments or bounding box localization accuracies. In the present work, we propose instead to evaluate explanations in a restricted and controlled setup using a synthetic dataset of rendered 3D shapes. To this end, we generate a CLEVR-like visual question answering benchmark with around 40,000 questions, where the ground truth pixel coordinates of relevant objects are known, which allows us to validate explanations in a fair and transparent way. We further introduce two straightforward metrics to evaluate explanations in this setup, and compare their outcomes to standard pixel perturbation using a Relation Network model and three decomposition-based explanation methods: Gradient x Input, Integrated Gradients, and Layer-wise Relevance Propagation. Among the tested methods, Layer-wise Relevance Propagation performed best, followed by Integrated Gradients. More generally, we expect the release of our dataset and code to support the development and comparison of methods on a well-defined common ground.
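The abstract does not spell out the two proposed metrics, but a common way to score a saliency heatmap against ground-truth pixel coordinates is to measure how much of the positive relevance falls inside the ground-truth object mask. The sketch below illustrates that idea alongside a Gradient x Input attribution; the function names (gradient_x_input, relevance_mass_inside_mask) and the toy data are illustrative assumptions, not the paper's code or its actual metrics.

```python
import numpy as np

def gradient_x_input(grad, x):
    # Gradient x Input attribution: elementwise product of the input with
    # the gradient of the target output w.r.t. the input (same shape as x).
    return grad * x

def relevance_mass_inside_mask(heatmap, gt_mask):
    # Fraction of total positive relevance that falls inside the
    # ground-truth object mask (gt_mask is 1 on relevant pixels, 0 elsewhere).
    pos = np.clip(heatmap, 0.0, None)
    total = pos.sum()
    return float((pos * gt_mask).sum() / total) if total > 0 else 0.0

# Toy example: a 4x4 heatmap and a mask covering the top-left 2x2 block.
heatmap = np.array([[0.5, 0.4, 0.0, 0.0],
                    [0.3, 0.2, 0.0, 0.1],
                    [0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.1, 0.0, 0.0]])
gt_mask = np.zeros((4, 4))
gt_mask[:2, :2] = 1.0
print(relevance_mass_inside_mask(heatmap, gt_mask))  # 1.4 / 1.6 = 0.875
```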


Related research

- 03/04/2022, Evaluating Local Model-Agnostic Explanations of Learning to Rank Models with Decision Paths: Local explanations of learning-to-rank (LTR) models are thought to extra...
- 03/01/2023, Finding the right XAI method – A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science: Explainable artificial intelligence (XAI) methods shed light on the pred...
- 04/22/2020, Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations: The interest in complex deep neural networks for computer vision applica...
- 12/05/2018, Understanding Individual Decisions of CNNs via Contrastive Backpropagation: A number of backpropagation-based approaches such as DeConvNets, vanilla...
- 02/07/2023, Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation: Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of h...
- 06/08/2023, AMEE: A Robust Framework for Explanation Evaluation in Time Series Classification: This paper aims to provide a framework to quantitatively evaluate and ra...
- 01/19/2018, Evaluating neural network explanation methods using hybrid documents and morphological prediction: We propose two novel paradigms for evaluating neural network explanation...
