Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy

11/15/2022
by Matteo Rizzo et al.

The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to provide a better understanding of the decision-making process of black-box models. However, recent work in Natural Language Processing (NLP) has challenged the faithfulness of saliency, questioning whether attention weights adhere to the model's true decision-making process. We add to this discussion by evaluating, for the first time, the faithfulness of in-model saliency applied to a video processing task, namely temporal colour constancy. We perform the evaluation by adapting to our target task two faithfulness tests from the recent NLP literature, whose methodology we refine as part of our contributions. We show that attention fails to achieve faithfulness, whereas confidence, a particular type of in-model visual saliency, succeeds.
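The abstract does not spell out the two faithfulness tests, so the following is only a minimal, hypothetical sketch of what a permutation-style check, in the spirit of the NLP tests the authors adapt, could look like. The ToyAttentionRegressor, the feature sizes, the RGB output head, and the permutation_faithfulness_gap helper are illustrative assumptions, not the paper's actual architecture or evaluation protocol.

```python
# Hypothetical sketch: shuffle attention weights and check whether the prediction moves.
# If attention is faithful, permuting it should noticeably change the model's output.
from typing import Optional

import torch
import torch.nn as nn


class ToyAttentionRegressor(nn.Module):
    """Tiny attention-pooled regressor over a sequence of frame features (toy stand-in)."""

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # per-frame attention logits
        self.head = nn.Linear(feat_dim, 3)   # e.g. an RGB illuminant estimate

    def forward(self, frames: torch.Tensor, attn_override: Optional[torch.Tensor] = None):
        # frames: (batch, time, feat_dim)
        logits = self.score(frames).squeeze(-1)                        # (batch, time)
        attn = torch.softmax(logits, dim=-1) if attn_override is None else attn_override
        pooled = torch.einsum("bt,btf->bf", attn, frames)              # attention pooling
        return self.head(pooled), attn


def permutation_faithfulness_gap(model, frames, n_permutations: int = 20) -> float:
    """Return the mean L2 shift of predictions under random permutations of attention."""
    with torch.no_grad():
        base_pred, attn = model(frames)
        shifts = []
        for _ in range(n_permutations):
            perm = torch.argsort(torch.rand_like(attn), dim=-1)  # random permutation per item
            perm_pred, _ = model(frames, attn_override=torch.gather(attn, -1, perm))
            shifts.append((perm_pred - base_pred).norm(dim=-1).mean())
    return torch.stack(shifts).mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyAttentionRegressor()
    frames = torch.randn(8, 10, 16)  # 8 toy sequences of 10 frames each
    gap = permutation_faithfulness_gap(model, frames)
    print(f"mean prediction shift under permuted attention: {gap:.4f}")
```

Under this kind of test, a near-zero shift suggests the attention weights are not faithful to the decision process, echoing the "attention is not explanation" concern raised in the NLP literature.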

