Interactive Naming for Explaining Deep Neural Networks: A Formative Study

12/18/2018
by Mandana Hamidi-Haines, et al.

We consider the problem of explaining the decisions of deep neural networks for image recognition in terms of human-recognizable visual concepts. In particular, given a test set of images, we aim to explain each classification in terms of a small number of image regions, or activation maps, which have been associated with semantic concepts by a human annotator. This allows for generating summary views of the typical reasons for classifications, which can help build trust in a classifier and/or identify example types for which the classifier may not be trusted. For this purpose, we developed a user interface for "interactive naming," which allows a human annotator to manually cluster significant activation maps in a test set into meaningful groups called "visual concepts". The main contribution of this paper is a systematic study of the visual concepts produced by five human annotators using the interactive naming interface. In particular, we consider the adequacy of the concepts for explaining the classification of test-set images, correspondence of the concepts to activations of individual neurons, and the inter-annotator agreement of visual concepts. We find that a large fraction of the activation maps have recognizable visual concepts, and that there is significant agreement between the different annotators about their denotations. Our work is an exploratory study of the interplay between machine learning and human recognition mediated by visualizations of the results of learning.
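
As a concrete illustration of the kind of activation maps annotators work with, the sketch below extracts the most strongly activated feature maps of a pretrained CNN for a single test image and upsamples them to image resolution. The model (torchvision ResNet-50), the hooked layer, and the ranking rule (mean spatial activation, top five channels) are illustrative assumptions, not the selection procedure used in the paper.

```python
# Minimal sketch: extract the top-k most strongly activated feature maps for
# one test image from a pretrained CNN. The model, layer, and ranking rule
# are illustrative assumptions, not the paper's exact procedure.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

captured = {}
def hook(_module, _inputs, output):
    captured["maps"] = output.detach()          # shape: (1, C, H, W)

# Hook the last convolutional stage to capture its activation maps.
model.layer4.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("test_image.jpg").convert("RGB")   # hypothetical path
x = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Rank channels by mean spatial activation and keep the top k maps; the
# upsampled maps are the kind of image regions an annotator could then
# cluster into named visual concepts.
maps = captured["maps"][0]                      # (C, H, W)
scores = maps.mean(dim=(1, 2))
top_k = scores.topk(5).indices
heatmaps = F.interpolate(maps[top_k].unsqueeze(0), size=(224, 224),
                         mode="bilinear", align_corners=False)[0]
print(f"predicted class {predicted_class}, top channels {top_k.tolist()}")
```

In the study, maps like these, visualized as heatmaps over the input image, would be presented to an annotator, who groups them into named visual concepts.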

Related research

02/02/2018 - Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations
Deep neural networks are complex and opaque. As they enter application i...

03/07/2022 - Explaining Classifiers by Constructing Familiar Concepts
Interpreting a large number of neurons in deep learning is difficult. Ou...

05/04/2020 - Explaining AI-based Decision Support Systems using Concept Localization Maps
Human-centric explainability of AI-based Decision Support Systems (DSS) ...

10/09/2020 - Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization
Clinical decision support using deep neural networks has become a topic ...

06/24/2021 - Meaningfully Explaining a Model's Mistakes
Understanding and explaining the mistakes made by trained models is crit...

08/29/2021 - NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
Existing research on making sense of deep neural networks often focuses ...

08/23/2023 - Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification
Interpretability is a crucial factor in building reliable models for var...
