
Contextual Semantic Interpretability

09/18/2020
by Diego Marcos et al.

Convolutional neural networks (CNN) are known to learn an image representation that captures concepts relevant to the task, but they do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to be organized into groups of a few meaningful elements and to participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
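To make the described architecture concrete, the following is a minimal PyTorch sketch of such a contextual semantic bottleneck: backbone features are mapped to attribute scores, attributes are pooled into a few sparse groups, and a context-dependent gate decides how much each group contributes to the final scenicness score. The module names, dimensions, softmax gating, and the L1 sparsity penalty mentioned in the comments are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ContextualSemanticBottleneck(nn.Module):
    """Illustrative sketch of a two-layer semantic bottleneck:
    CNN features -> interpretable attributes -> sparse attribute groups,
    with group contributions modulated by a context-dependent gate.
    Names and sizes are assumptions for illustration only."""

    def __init__(self, feat_dim=2048, n_attributes=102, n_groups=16):
        super().__init__()
        # 1) Semantic bottleneck: predict interpretable attributes
        #    (e.g. the 102 SUN Attributes) from backbone features.
        self.attribute_head = nn.Linear(feat_dim, n_attributes)
        # 2) Group layer: combine attributes into a few groups; sparsity
        #    could be encouraged with an L1 penalty on this weight matrix.
        self.group_layer = nn.Linear(n_attributes, n_groups, bias=False)
        # 3) Context gate: per-image weights deciding how much each group
        #    contributes to the final prediction.
        self.context_gate = nn.Linear(feat_dim, n_groups)
        # 4) Linear read-out from gated group activations to a scalar score.
        self.score = nn.Linear(n_groups, 1)

    def forward(self, features):
        attributes = torch.sigmoid(self.attribute_head(features))   # attribute scores in [0, 1]
        groups = self.group_layer(attributes)                        # group activations
        gates = torch.softmax(self.context_gate(features), dim=-1)   # context-dependent weights
        return self.score(gates * groups), attributes, gates


# Usage with features from a (frozen) CNN backbone, batch of 4 images:
model = ContextualSemanticBottleneck()
feats = torch.randn(4, 2048)
scenicness, attributes, gates = model(feats)
```

Because the prediction is a gated linear combination of group activations, each output can be traced back to the attribute groups (and their context weights) that produced it, which is what makes the explanation explicit.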

Related Research

Semantically Interpretable Activation Maps: what-where-how explanations within CNNs (09/18/2019)
A main issue preventing the use of Convolutional Neural Networks (CNN) i...

Interpretability Beyond Classification Output: Semantic Bottleneck Networks (07/25/2019)
Today's deep learning systems deliver high performance based on end-to-e...

Learning Semantically Meaningful Features for Interpretable Classifications (01/11/2021)
Learning semantically meaningful features is important for Deep Neural N...

Learning Interpretable Concept Groups in CNNs (09/21/2021)
We propose a novel training methodology – Concept Group Learning (CGL) –...

Explaining in Style: Training a GAN to explain a classifier in StyleSpace (04/27/2021)
Image classification models can depend on multiple different semantic at...

Discovering beautiful attributes for aesthetic image analysis (12/16/2014)
Aesthetic image analysis is the study and assessment of the aesthetic pr...

Attentional Bottleneck: Towards an Interpretable Deep Driving Network (05/08/2020)
Deep neural networks are a key component of behavior prediction and moti...

Code Repositories

CSIB

Contextual Semantic Interpretability, ACCV 2020
