Contextual Semantic Interpretability

09/18/2020
by Diego Marcos, et al.

Convolutional neural networks (CNNs) are known to learn an image representation that captures concepts relevant to the task, but they do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be recombined to reach the final decision, providing both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to form groups of a few meaningful elements that participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.

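To make the described architecture concrete, below is a minimal PyTorch sketch of such a two-layer bottleneck. It is an illustration under assumptions, not the authors' exact model: the ResNet-50 backbone, the layer sizes (102 attributes, 8 groups), and the sigmoid context gate are hypothetical choices. What it does show are the key ideas from the abstract: the prediction must pass through explicit attribute scores, a sparse linear layer gathers attributes into groups, and the group contributions are re-weighted per image depending on context.

# Minimal sketch (assumed architecture, not the authors' exact model): a CNN
# backbone whose prediction must pass through explicit attribute scores, a
# sparse linear layer that gathers attributes into groups, and a context gate
# that re-weights the group contributions per image.
import torch
import torch.nn as nn
import torchvision.models as models

class ContextualSemanticBottleneck(nn.Module):
    def __init__(self, num_attributes=102, num_groups=8):  # hypothetical sizes
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the feature extractor
        self.backbone = backbone
        # Semantic bottleneck: all information flows through attribute scores.
        self.attribute_head = nn.Linear(feat_dim, num_attributes)
        # Grouping layer: attributes -> a few groups; kept sparse via L1 penalty.
        self.group_layer = nn.Linear(num_attributes, num_groups)
        # Context gate: image features decide how much each group contributes.
        self.context_gate = nn.Sequential(nn.Linear(feat_dim, num_groups), nn.Sigmoid())
        # Final scalar output, e.g. the scenicness score.
        self.output = nn.Linear(num_groups, 1)

    def forward(self, x):
        feats = self.backbone(x)
        attributes = torch.sigmoid(self.attribute_head(feats))   # interpretable scores
        groups = self.group_layer(attributes)
        gated = groups * self.context_gate(feats)                # context-dependent weighting
        score = self.output(gated)
        return score, attributes, gated

    def sparsity_penalty(self):
        # Encourages each group to rely on only a few attributes.
        return self.group_layer.weight.abs().mean()

In training, one would supervise the attribute head with SUN Attributes labels and add the sparsity penalty to the regression loss, e.g. loss = mse(score, target) + lambda_sparse * model.sparsity_penalty(); the returned attribute and group activations then serve as the per-image explanation.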