Network Dissection: Quantifying Interpretability of Deep Visual Representations

04/19/2017
by David Bau, et al.

We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that the interpretability of units is equivalent to that of random linear combinations of units; we then apply our method to compare the latent representations of various networks trained to solve different supervised and self-supervised tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
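
The abstract summarizes the alignment scoring only at a high level. As a rough illustration, here is a minimal NumPy sketch, assuming the scoring scheme described in the paper: each unit's activation maps are thresholded at a high per-unit quantile (the top 0.5% of activations), and the resulting binary masks are scored against a concept's segmentation masks by intersection-over-union (IoU), with a unit counted as a detector when the IoU exceeds 0.04. The function name, array shapes, and toy data below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Score one unit against one concept over a probe dataset.

    activations:   (N, H, W) float array of one unit's activation maps,
                   assumed already upsampled to the mask resolution.
    concept_masks: (N, H, W) bool array with the concept's ground-truth
                   segmentation in each image.
    quantile:      activations above this per-unit quantile are "on"
                   (0.995 keeps the top 0.5%, as in the paper).
    """
    # One threshold T_k per unit, computed over the whole dataset.
    t_k = np.quantile(activations, quantile)
    unit_masks = activations > t_k
    # IoU_{k,c} = |M_k AND L_c| / |M_k OR L_c|, pooled over all images.
    intersection = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

# Toy usage with random data standing in for a real probe dataset.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 112, 112))
masks = rng.random((8, 112, 112)) > 0.9
iou = unit_concept_iou(acts, masks)
# The paper labels a unit a detector for a concept when IoU > 0.04.
print(f"IoU = {iou:.4f}, detector: {iou > 0.04}")
```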


Related research

11/15/2017
Interpreting Deep Visual Representations via Network Dissection
The success of recent deep convolutional neural networks (CNNs) depends ...

09/10/2020
Understanding the Role of Individual Units in a Deep Neural Network
Deep neural networks excel at finding hierarchical representations that ...

06/15/2023
Rosetta Neurons: Mining the Common Units in a Model Zoo
Do different neural networks, trained for various vision tasks, share so...

04/10/2022
Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention
Interpretability is an important property for visual models as it helps ...

02/18/2019
Discovery of Natural Language Concepts in Individual Units of CNNs
Although deep convolutional networks have achieved improved performance ...

03/22/2022
Clustering units in neural networks: upstream vs downstream information
It has been hypothesized that some form of "modular" structure in artifi...

03/19/2018
On the importance of single directions for generalization
Despite their ability to memorize large datasets, deep neural networks o...
