Structured Attention Graphs for Understanding Deep Image Classifications

11/13/2020
by Vivswan Shitole, et al.

Attention maps are a popular way of explaining the decisions of convolutional networks for image classification. Typically, for each image of interest, a single attention map is produced, which assigns weights to pixels based on their importance to the classification. A single attention map, however, provides an incomplete understanding, since there are often many other maps that explain a classification equally well. In this paper, we introduce structured attention graphs (SAGs), which compactly represent sets of attention maps for an image by capturing how different combinations of image regions impact a classifier's confidence. We propose an approach to compute SAGs and a visualization for SAGs so that deeper insight can be gained into a classifier's decisions. We conduct a user study comparing the use of SAGs to traditional attention maps for answering counterfactual questions about image classifications. Our results show that users answer comparative counterfactual questions more accurately with SAGs than with the baseline attention maps.
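To make the core idea concrete, below is a minimal Python sketch of how a SAG-like structure could be built: nodes are subsets of image regions annotated with the classifier's confidence when only those regions are visible, and edges connect each subset to its one-region-smaller children, exposing how confidence degrades as regions are removed. The names `classify`, `mask_image`, and `regions` are hypothetical stand-ins, and the exhaustive enumeration here stands in for whatever search procedure the paper actually uses; this is an illustration of the data structure, not the authors' implementation.

import itertools
import numpy as np

def build_sag(image, regions, classify, target_class,
              threshold=0.9, max_size=3):
    """Sketch of building a structured attention graph (SAG).

    Nodes map a frozenset of region indices to the classifier's
    confidence on the image with only those regions kept visible.
    Edges link each subset to its subsets with one region removed.
    """
    nodes = {}   # frozenset of region indices -> confidence
    edges = []   # (parent_subset, child_subset) pairs

    # Score every small combination of regions (exhaustive here;
    # a real system would prune or search instead).
    for size in range(1, max_size + 1):
        for combo in itertools.combinations(range(len(regions)), size):
            subset = frozenset(combo)
            masked = mask_image(image, [regions[i] for i in subset])
            nodes[subset] = classify(masked)[target_class]

    # Roots: subsets sufficient to keep confidence above the threshold.
    roots = [s for s, c in nodes.items() if c >= threshold]

    # Connect each subset to its one-region-smaller children.
    for subset in nodes:
        for i in subset:
            child = subset - {i}
            if child in nodes:
                edges.append((subset, child))
    return roots, nodes, edges

def mask_image(image, keep_regions):
    """Hypothetical helper: zero out everything outside the kept
    regions. Each region is a boolean (H, W) mask; image is (H, W, C)."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for r in keep_regions:
        mask |= r
    return image * mask[..., None]

Even under this simplified scheme, the number of region subsets grows combinatorially, which is why the sketch caps subset size; any practical computation of SAGs has to restrict or search this space rather than enumerate it.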


Related research:

- 04/16/2020: Explainable Image Classification with Evidence Counterfactual. "The complexity of state-of-the-art modeling techniques for image classif..."
- 10/07/2022: CLEAR: Causal Explanations from Attention in Neural Recommenders. "We present CLEAR, a method for learning session-specific causal graphs, ..."
- 07/31/2020: A Novel Global Spatial Attention Mechanism in Convolutional Neural Network for Medical Image Classification. "Spatial attention has been introduced to convolutional neural networks (..."
- 09/13/2021: From Heatmaps to Structural Explanations of Image Classifiers. "This paper summarizes our endeavors in the past few years in terms of ex..."
- 09/22/2022: Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism. "In this paper two new learning-based eXplainable AI (XAI) methods for de..."
- 03/21/2023: Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models. "The performance of convolutional neural networks has continued to improv..."
