Understanding Deep Architectures by Interpretable Visual Summaries

01/27/2018
by Marco Carletti, et al.

A consistent body of research investigates the recurrent visual patterns that deep networks exploit for object classification, using diverse visualization techniques. Unfortunately, little effort has been spent on showing that these techniques lead researchers to univocal and exhaustive explanations. This paper goes in that direction, presenting a visualization framework that produces a group of clusters, or summaries, each formed by crisp image regions focusing on a particular part that the network exploits with high regularity to classify a given class. In most cases, these parts carry a semantic meaning, making the explanation simple and universal. For example, the method suggests that AlexNet, when classifying the ImageNet class "robin", is highly sensitive to the patterns of the head, the body, the legs, the wings and the tail, providing five summaries in which these parts are consistently highlighted. The approach consists of a sparse optimization step that produces sharp image masks whose perturbation causes a high loss in the classification. The regions composing the masks are then clustered by means of a proposal-flow-based similarity score, which associates visually similar patterns of diverse objects located in corresponding positions. The final clusters are visual summaries that are easy to interpret, as confirmed by the very first user study of this kind. The summaries can also be used to compare different architectures: for example, the superiority of GoogLeNet w.r.t. AlexNet is explained by our approach, since the former gives rise to more summaries, indicating its ability to capture a higher number of diverse semantic parts.
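The sparse optimization step can be illustrated in miniature. The following is a minimal sketch, not the authors' implementation: it replaces the deep network with a toy linear classifier f(x) = w·x (an assumption for the sake of a self-contained example) and uses gradient descent to find a mask m ∈ [0,1]^d whose deletion of input evidence, x ⊙ (1−m), drives the class score down, while an L1 penalty keeps the mask sparse and crisp. All function and variable names are illustrative.

```python
import numpy as np

def sparse_deletion_mask(x, w, lam=0.1, lr=0.1, steps=300):
    """Minimize  f(x * (1 - m)) + lam * ||m||_1  over m in [0, 1]^d,
    where f(z) = w . z is a toy linear classifier (an assumption;
    the paper perturbs the input of a deep network instead).

    Components whose evidence w_i * x_i exceeds lam end up masked
    (m_i -> 1); the rest stay unmasked (m_i -> 0), yielding a sparse,
    crisp mask over the most class-relevant input components."""
    m = np.full_like(x, 0.5)            # start from an uncommitted mask
    for _ in range(steps):
        grad = -w * x + lam             # d/dm [ w.(x*(1-m)) + lam*sum(m) ]
        m = np.clip(m - lr * grad, 0.0, 1.0)
    return m

# Toy example: components 0 and 1 carry strong evidence, the rest do not.
x = np.ones(5)
w = np.array([1.0, 0.8, 0.0, 0.0, 0.02])
m = sparse_deletion_mask(x, w)          # m is near 1 on components 0-1, near 0 elsewhere
```

In the paper, this kind of mask is computed against a deep network such as AlexNet, and the resulting image regions are then clustered across images into the visual summaries.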


