Saliency Maps Generation for Automatic Text Summarization

07/12/2019
by David Tuckey et al.

Saliency map generation techniques are at the forefront of the explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the legitimacy of these "explanations". We argue that a quantitative test of the counterfactual case is needed to judge the truthfulness of saliency maps. We propose a protocol to check the validity of the importance attributed to the input and show that the resulting saliency maps sometimes capture the network's real use of the input features, and sometimes do not. We use this example to discuss how careful we need to be when accepting saliency maps as explanations.
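
The counterfactual protocol mentioned in the abstract lends itself to a simple sketch: mask the tokens the saliency map ranks as most relevant, regenerate the summary, and compare the resulting change against masking the same number of random tokens. The snippet below is a minimal illustration of that idea, not the authors' implementation; the `summarize` function, the token-level `saliency` scores, and the Jaccard-overlap similarity measure are all hypothetical placeholders.

```python
# Minimal sketch of a counterfactual check for saliency maps on summarization:
# mask the top-k salient tokens, regenerate the summary, and compare the drop
# in summary similarity against masking k random tokens. `summarize` and
# `saliency` are hypothetical placeholders, not the paper's implementation.
import random
from typing import Callable, List, Sequence


def mask_tokens(tokens: Sequence[str], positions: Sequence[int],
                mask: str = "<unk>") -> List[str]:
    """Replace the tokens at the given positions with a neutral mask symbol."""
    masked = list(tokens)
    for p in positions:
        masked[p] = mask
    return masked


def overlap(a: Sequence[str], b: Sequence[str]) -> float:
    """Crude summary-similarity proxy: Jaccard overlap of summary unigrams."""
    if not a and not b:
        return 1.0
    return len(set(a) & set(b)) / len(set(a) | set(b))


def counterfactual_check(tokens: Sequence[str],
                         saliency: Sequence[float],
                         summarize: Callable[[Sequence[str]], List[str]],
                         k: int = 10,
                         seed: int = 0) -> dict:
    """Compare masking the k most salient tokens against masking k random ones.

    If the saliency map reflects real feature use, masking its top-k tokens
    should degrade the summary more than masking random tokens."""
    baseline = summarize(tokens)
    top_k = sorted(range(len(tokens)),
                   key=lambda i: saliency[i], reverse=True)[:k]
    rand_k = random.Random(seed).sample(range(len(tokens)), k)

    salient_summary = summarize(mask_tokens(tokens, top_k))
    random_summary = summarize(mask_tokens(tokens, rand_k))

    return {
        "drop_when_masking_salient": 1.0 - overlap(baseline, salient_summary),
        "drop_when_masking_random": 1.0 - overlap(baseline, random_summary),
    }
```

Under this reading, a saliency map "captures the real use of the input features" when masking its top-ranked tokens degrades the summary markedly more than masking random ones; when the two drops are comparable, the attributed importance should be treated with caution.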

