Semantically Interpretable Activation Maps: what-where-how explanations within CNNs

09/18/2019
by Diego Marcos, et al.

A major issue preventing the use of Convolutional Neural Networks (CNN) in end-user applications is the low level of transparency in their decision process. Previous work on CNN interpretability has mostly focused either on localizing the regions of the image that contribute to the result or on building an external model that generates plausible explanations. However, the former provides no semantic information and the latter does not guarantee the faithfulness of the explanation. We propose an intermediate representation composed of multiple Semantically Interpretable Activation Maps (SIAM) indicating the presence of predefined attributes at different locations of the image. These attribute maps are then linearly combined to produce the final output. This gives the user insight into what the model has seen, where it has seen it, and how this information leads to the final output, in a comprehensive and interpretable way. We test the method on the task of landscape scenicness (aesthetic value) estimation, using an intermediate representation of 33 attributes from the SUN Attributes database. The results confirm that SIAM makes it possible to understand which attributes in the image contribute to the final score and where they are located. Since it is based on learning from multiple tasks and datasets, SIAM improves the explainability of the prediction without additional annotation effort or computational overhead at inference time, while maintaining good performance on both the final and intermediate tasks.
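The what-where-how structure described in the abstract can be sketched in code. The following is a minimal illustration of the idea, not the authors' implementation: it assumes a PyTorch/torchvision ResNet-50 backbone, and the names SIAMSketch, attribute_maps, and scorer are hypothetical. A 1x1 convolution produces one activation map per predefined attribute (the "what" and "where"), and a single linear layer over the pooled attribute scores yields the final scenicness estimate (the "how"), so each attribute's contribution to the score can be read off directly.

```python
import torch
import torch.nn as nn
from torchvision import models


class SIAMSketch(nn.Module):
    """Sketch of the SIAM idea: a CNN backbone produces K semantically
    interpretable activation maps, one per attribute, which are pooled
    and linearly combined into the final score."""

    def __init__(self, num_attributes=33):
        super().__init__()
        backbone = models.resnet50(weights=None)  # any convolutional backbone would do
        # Keep only the convolutional layers so the spatial maps are preserved.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # 1x1 convolution: one activation map per predefined attribute.
        self.attribute_maps = nn.Conv2d(2048, num_attributes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Linear combination of the pooled attribute scores into the final output.
        self.scorer = nn.Linear(num_attributes, 1)

    def forward(self, x):
        feats = self.features(x)                  # B x 2048 x H x W
        maps = self.attribute_maps(feats)         # B x K x H x W: "what" and "where"
        attr_scores = self.pool(maps).flatten(1)  # B x K: per-attribute presence
        score = self.scorer(attr_scores)          # B x 1: final scenicness, "how"
        return score, maps, attr_scores


# Usage: inspect which attributes drive the prediction and where they fire.
model = SIAMSketch(num_attributes=33)
image = torch.randn(1, 3, 224, 224)
score, maps, attr_scores = model(image)
contributions = attr_scores * model.scorer.weight  # per-attribute contribution to the score
```

Because the final score is a plain linear combination of the pooled attribute activations, the explanation is read directly from the model's own weights rather than from a separate post-hoc explainer.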


Related research

Contextual Semantic Interpretability (09/18/2020)
Convolutional neural networks (CNN) are known to learn an image represen...

Learning Photography Aesthetics with Deep CNNs (07/13/2017)
Automatic photo aesthetic assessment is a challenging artificial intelli...

Why do These Match? Explaining the Behavior of Image Similarity Models (05/26/2019)
Explaining a deep learning model can help users understand its behavior ...

Explaining in Style: Training a GAN to explain a classifier in StyleSpace (04/27/2021)
Image classification models can depend on multiple different semantic at...

Improve the Interpretability of Attention: A Fast, Accurate, and Interpretable High-Resolution Attention Model (06/04/2021)
The prevalence of employing attention mechanisms has brought along conce...

On Symbiosis of Attribute Prediction and Semantic Segmentation (11/23/2019)
In this paper, we propose to employ semantic segmentation to improve per...

Effective and Interpretable Information Aggregation with Capacity Networks (07/25/2022)
How to aggregate information from multiple instances is a key question m...
