Explanatory Graphs for CNNs

12/18/2018, by Quanshi Zhang et al.

This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside the conv-layers of a pre-trained CNN. Each filter in a conv-layer of a CNN for object classification usually represents a mixture of object parts. We develop a simple yet effective method to disentangle object-part pattern components from each filter. We construct an explanatory graph to organize the mined part patterns, where each node represents a part pattern and each edge encodes co-activation and spatial relationships between patterns. More crucially, given a pre-trained CNN, the explanatory graph is learned without the need to annotate object parts. Experiments show that each graph node consistently represented the same object part across different images, which boosted the transferability of CNN features. We transferred part patterns in the explanatory graph to the task of part localization, and our method significantly outperformed other approaches.



Code repository: explanatoryGraph (Interpreting CNN Knowledge via an Explanatory Graph)

1 Introduction

Convolutional neural networks (CNNs) [15, 12, 9] have exhibited superior performance in various visual tasks, such as object classification and detection. In comparison, explaining the features in intermediate conv-layers of a CNN has presented continuous challenges for decades. When a CNN is trained for object classification, its conv-layers encode rich implicit patterns of object parts and textures. Therefore, this research aims to provide a global analysis of how visual knowledge is organized in a pre-trained CNN:

  • How many patterns can activate a certain convolutional filter of the CNN? For example, the filter may be triggered by a specific object-part pattern, a certain textural pattern, or both.

  • Which patterns are co-activated to describe an object part?

  • What is the spatial relationship between two co-activated patterns?

Given a CNN pre-trained for object classification, in this paper, we propose a method (i) to mine object-part patterns from intermediate conv-layers and (ii) to organize these patterns in an explanatory graph.

As shown in Fig. 1, the explanatory graph encodes the knowledge hierarchy hidden inside the CNN, as follows.

  • The explanatory graph has multiple layers, which correspond to different conv-layers of the CNN.

  • Each graph layer has many nodes. We use graph nodes in a layer to represent all candidate part patterns that can activate the feature map of the corresponding conv-layer.

  • Because a filter in the conv-layer may be triggered by multiple parts of the object, we disentangle different part patterns from the same filter and represent them as different graph nodes.

  • A graph edge connects two nodes in adjacent layers to encode the co-activation logic and spatial relationship between them.

  • We can regard the explanatory graph as a dictionary, which summarizes the part knowledge hidden inside hundreds of thousands of chaotic neural activations of a conv-layer into thousands of graph nodes.

  • During the inference process, given the feature maps of an input image, our method selects a small number of nodes from the explanatory graph and assigns them to certain neural activations in the feature maps. We consider these nodes to be activated, and they explain which part patterns are hidden behind the neural activations. Each graph node consistently corresponds to the same part over different input object images.

Note that the location of each pattern (node) is not fixed to a specific neural activation unit during the inference process. Instead, given different input images, a part pattern may appear at various locations on a filter's feature maps. For example, the ear pattern and the face pattern of a horse in Fig. 3 can appear at different locations in different images, but they are co-activated and keep certain spatial relationships.
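To make this structure concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how an explanatory graph could be held in code: each node records the conv-layer and filter it was disentangled from together with a prior location, and each edge stores the prior displacement to a related node in the upper layer.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PatternNode:
    layer: int                             # index of the corresponding conv-layer
    filter_id: int                         # filter from which the pattern was disentangled
    prior_position: Tuple[float, float]    # prior location, in image-plane coordinates

@dataclass
class PatternEdge:
    lower: PatternNode                     # node in the lower graph layer
    upper: PatternNode                     # related node in the upper graph layer
    prior_displacement: Tuple[float, float]  # expected offset from lower to upper

@dataclass
class ExplanatoryGraph:
    layers: List[List[PatternNode]] = field(default_factory=list)
    edges: List[PatternEdge] = field(default_factory=list)
```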

Fig. 1: An explanatory graph represents knowledge hierarchy hidden in conv-layers of a CNN. Each filter in a pre-trained CNN may be activated by different object parts. Our method disentangles part patterns from each filter in an unsupervised manner, thereby clarifying the knowledge representation.

• Disentangling object parts from a single filter is the core technique of building an explanatory graph. As shown in Fig. 1, a filter in a conv-layer may be activated by different object parts (e.g. the filter's feature map may be activated by both the head and the neck of a horse).

In this study, we hope to develop a simple yet effective method to automatically disentangle different part patterns from a single filter without using any annotations of object parts, which presents considerable challenges for state-of-the-art algorithms. In this way, the explanatory graph explains neural activations with clear meanings and ignores noisy activations and activations of textural patterns. Given a test image fed to the CNN, the explanatory graph can infer (i) which nodes (parts) are responsible for the neural activations of a filter and (ii) the locations of the corresponding parts on the feature map.

• Graph nodes with high transferability: The explanatory graph contains off-the-shelf patterns for object parts. It summarizes the chaotic feature maps of conv-layers into object parts, which can be considered a more concise and meaningful representation of CNN knowledge, just like a dictionary. The explanatory graph enables us to accurately transfer object-part patterns from conv-layers to other tasks. Because all filters in the CNN are learned from numerous training images, we can consider each graph node a detector that has been thoroughly trained to detect a part across thousands of images.

To prove the above assertions, we learn explanatory graphs for different CNNs (including the VGG-16, residual networks, and the encoder of a VAE-GAN) and analyze the graphs from various perspectives as follows.
Visualization & reconstruction: We visualize part patterns encoded by graph nodes using the following two approaches. First, for each graph node, we select object parts that most strongly activate the node for visualization. Second, we learn another decoder network to invert activation states of graph nodes to reconstruct image regions of the nodes.
Examining part interpretability of graph nodes: We quantitatively evaluate the part-level interpretability of graph nodes. Given an explanatory graph, we measure whether a node consistently represents the same part on different objects.
Examining location instability of graph nodes:

Besides the part interpretability, we also use a new metric, namely location instability, to measure the semantic clarity of each graph node. It is assumed that if a graph node consistently represents the same object part, then the distance between the inferred part and certain ground-truth landmarks of the object should not change much across different images. Thus, the evaluation metric uses the deviation of such relative distances over images to measure the instability of a part pattern.


Testing transferability: The transferability of graph nodes is tested in the scenario of few-shot part localization. We associate certain graph nodes with explicit part names based on the feature maps of very few images, in order to localize the target part. The superior localization performance demonstrates the good transferability of graph nodes.

Contributions of this paper are summarized as follows.

  • In this paper, we, for the first time, propose a simple yet effective method to extract and summarize the part knowledge hidden inside chaotic feature maps of intermediate conv-layers of a CNN and to organize the layerwise knowledge hierarchy using an explanatory graph. Experiments show that each graph node consistently represents the same object part across different input images.

  • As a generic method, we can learn explanatory graphs for different CNNs, e.g. VGGs, residual networks, and the encoder of a VAE-GAN.

  • Graph nodes (patterns) have good transferability, especially in the task of few-shot part localization. Although our graph nodes were learned without part annotations, our transfer-learning-based part localization still outperformed approaches that learned part representations using part annotations.

A preliminary version of this paper appeared in [40].

2 Related work

2.1 Semantics in CNNs

The interpretability and the discrimination power are two crucial aspects of a CNN [3]. In recent years, different methods have been developed to explore the semantics hidden inside a CNN.

Visualization & interpretability of CNN filters: Visualization of filters in a CNN is the most direct way of exploring the patterns hidden inside a neural unit. Many visualization methods have been used in the literature. Dosovitskiy et al. [5] proposed up-convolutional nets to invert feature maps of conv-layers to images. However, up-convolutional nets cannot mathematically ensure that the visualization result reflects actual neural representations. Comparatively, gradient-based visualization [39, 19, 27] shows the appearance that maximizes the score of a given unit, which is closer to the spirit of understanding CNN knowledge. Furthermore, Bau et al. [3] defined and analyzed the interpretability of each filter. In recent years, [20] provided a reliable tool to visualize filters in different conv-layers of a CNN.

Although these studies achieved clear visualization results, theoretically, gradient-based visualization methods visualize one of the local minima contained in a high-layer filter. That is, when a filter represents multiple patterns, these methods selectively illustrate one of them; otherwise, the visualization result would be chaotic. Similarly, [3] selectively analyzed the semantics among the highest 0.5% of activations of each filter. In contrast, our method provides a solution to explaining both strong and relatively weak activations of each filter, instead of exclusively extracting significant neural activations.

Active network diagnosis: Going beyond “passive” visualization, some methods “actively” diagnose a pre-trained CNN to obtain an in-depth understanding of CNN representations. Many statistical methods [31, 38, 1] have been proposed to analyze the characteristics of CNN features. [31] explored semantic meanings of convolutional filters. [38] evaluated the transferability of filters in intermediate conv-layers. [17, 1] computed feature distributions of different categories in the CNN feature space. The methods of [6, 24] propagated gradients of feature maps w.r.t. the CNN loss back to the image, in order to estimate the image regions that directly contribute to the network output. LIME [21] and SHAP [18] proposed general methods to extract the input units of a neural network that are used for a specific prediction.

Zhang et al. [44] have demonstrated that, in spite of good classification performance, a CNN may encode biased knowledge representations due to dataset bias. In such cases, the CNN usually uses unreliable contexts for classification. For example, a CNN may extract features from hair as a context to identify the smiling attribute.

Therefore, in order to ensure the correctness of feature representations, network-attack methods [30, 11, 31] diagnosed network representations by computing adversarial samples for a CNN. In particular, influence functions [11] were proposed to compute adversarial samples, provide plausible ways to create training samples that attack the learning of CNNs, fix the training set, and further debug the representations of a CNN. [13] discovered knowledge blind spots (unknown patterns) of a pre-trained CNN in a weakly-supervised manner. Some studies [36, 37, 35] mined the local, bottom-up, and top-down information components in a model to construct a hierarchical object representation. From this perspective, our method disentangles object-part patterns from a pre-trained CNN and builds a knowledge hierarchy to diagnose the knowledge inside the CNN.

Pattern retrieval:

Some studies retrieve units with specific meanings from CNNs for different applications. Like middle-level feature extraction [29], pattern retrieval mainly learns mid-level representations of CNN knowledge. Zhou et al. [48, 49] selected units from feature maps to describe “scenes”. In particular, [48] proposed a method to accurately compute the image-resolution receptive field of neural activations in a feature map. Theoretically, the actual receptive field of a neural activation is smaller than that computed using the filter size, and an accurate estimation of the receptive field is crucial to understanding a filter's representations. Simon et al. discovered objects from feature maps of unlabeled images [25] and selected a filter to describe each part in a supervised fashion [26]. However, most methods simply assumed that each filter mainly encodes a single visual concept, ignoring the case where a filter in a high conv-layer encodes a mixture of patterns. [41, 42, 43] extracted certain neurons from a filter's feature map to describe an object part in a weakly-supervised manner (e.g. learning from active question answering and human interactions).

Fig. 2: Schematic illustration of the explanatory graph. The graph encodes spatial and co-activation relationships between part patterns. High-layer patterns filter out noise and disentangle low-layer patterns. From another perspective, we can regard low-layer patterns as components of high-layer patterns.

In this study, the explanatory graph disentangles patterns of different parts in the CNN without the need for part annotations. Compared to raw feature maps, the patterns in graph nodes are more interpretable.

CNN semanticization: Compared to the diagnosis of CNN representations and the pattern retrieval, semanticization of CNN representations is closer to the spirit of building interpretable representations.

Hu et al. [10] designed logic rules for network outputs and used these rules to regularize neural networks and learn meaningful representations. However, this study has not obtained semantic representations in intermediate layers. [33] distilled the knowledge of a neural network into an additive model to explain the knowledge inside the network. [47] used a tree structure to summarize the inaccurate rationale of each CNN prediction into generic decision-making models for a number of samples. Capsule nets [23] and interpretable CNNs [46] used certain network structures and loss functions, respectively, to make the network automatically encode interpretable features in intermediate layers.

In comparison, we aim to explore the entire semantic hierarchy hidden inside conv-layers of a CNN. With clear semantic structures, the explanatory graph makes it easier to transfer CNN patterns to other part-based tasks.

2.2 Weakly-supervised knowledge transferring

Knowledge-transfer ideas have been widely used in deep learning. Typical research includes end-to-end fine-tuning and transferring CNN knowledge between different categories [38] or different datasets [7]. In contrast, a transparent representation of part knowledge creates a new possibility of transferring part knowledge to other applications. Therefore, we build an explanatory graph to represent the part patterns hidden inside a CNN, which enables the transfer of part patterns to other tasks. Experiments have demonstrated the effectiveness of our method in few-shot part localization.

3 Algorithm

A single filter is usually activated by different parts of the object (see Fig. 2). Let us assume that, given an input image, a filter is activated by several parts, i.e. there are multiple activation peaks on the filter's feature map. Some peaks represent common parts of the object, which are termed part patterns. Other activation peaks may correspond to background noise or textural patterns.

Our goal is to disentangle the activation peaks corresponding to part patterns from the chaotic feature maps of a filter. It is assumed that if an activation peak of a filter represents an object part, then the CNN usually also contains other filters that represent neighboring parts of the target part. That is, some activation peaks (patterns) of these filters must keep certain spatial relationships with the target part. Thus, the explanatory graph connects each pattern (node) in a low layer to some patterns in the neighboring upper layer.

We mine part patterns layer by layer. Given the patterns mined from the upper layer, we extract the activation peaks that keep stable spatial relationships with specific upper-layer patterns across different images, and treat them as part patterns in the current layer.
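As a rough illustration of the peak-extraction step described above, the sketch below picks local activation maxima from one feature-map channel as candidate peaks. The function name, threshold, and window size are our own choices; the paper models candidate peaks probabilistically rather than by hard peak picking.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_peaks(channel, threshold=0.1, window=3):
    """Return (row, col, value) of local activation maxima on one feature-map
    channel. Only an illustrative way to obtain candidate activation peaks;
    the thresholding scheme is an assumption, not the paper's procedure."""
    local_max = (channel == maximum_filter(channel, size=window))
    strong = channel > threshold * channel.max()
    peaks = np.argwhere(local_max & strong)
    return [(int(r), int(c), float(channel[r, c])) for r, c in peaks]
```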

Patterns in high layers usually represent large-scale object parts, while patterns in low layers mainly describe small and relatively simple shapes, which can be regarded as components of high-layer patterns. Patterns in high layers are usually discriminative, and the explanatory graph uses high-layer patterns to filter out noisy activations. Patterns in low layers are disentangled based on their spatial relationship with high-layer patterns.

3.1 Learning

We are given a CNN, which is pre-trained using its own set of training samples . Let denote the target explanatory graph. contains several layers corresponding to conv-layers in the CNN. Our method disentangles the -th filter of the -th conv-layer into part patterns. These part patterns are modeled as a set of nodes in the -th layer of , denoted by . is given as the entire node set for the -th layer. represents parameters of nodes in the -th layer, which mainly encode spatial relationships between these nodes and nodes in the -th layer.

Given an input image , the -th conv-layer of the CNN generates a feature map11footnotemark: 1, denoted by . Then, for each node , the explanatory graph infers whether or not ’s part pattern appears on the -th channel11footnotemark: 1 of , as well as the part location (if the pattern appears). We use to represent position inference results for all nodes in the -th layer.

Fig. 3: Schematic illustration of related patterns and . The related patterns keep similar spatial relationships among different images. Circle centers represent the prior pattern positions, e.g. and . Red arrows denote relative displacements between the inferred positions and prior positions, e.g. .

Top-down iterative learning of explanatory graphs: Given all training images, we expect that (i) all pattern nodes in the explanatory graph can be well fit to the feature maps of all images, and (ii) nodes in the lower layer always keep consistent with nodes in the upper layer for each input image. Therefore, the learning of an explanatory graph is conducted in a top-down manner as follows.

We first disentangle patterns from the top conv-layer of the CNN and construct the top graph layer. Then, we use inference results of the patterns/nodes on the top layer to help disentangle patterns from the neighboring lower conv-layer. In this way, we can ensure stable layerwise spatial relationships between patterns.

When we learn the -th layer, for each node, we need to learn the following two terms: (i) the node's parameters and (ii) the set of patterns in the upper layer that are connected to the node. The parameters encode the prior location of the node; thus, for each connected upper-layer node, they correspond to the prior displacement between the node and that upper node. The explanatory graph only uses this displacement to model the spatial relationship between nodes.

Just like an EM algorithm, we use the current explanatory graph to fit the feature maps of training images. Then, we use the matching results as feedback to modify the prior location and edge connections of each node in the -th layer, in order to make the explanatory graph better fit the feature maps. We repeat this process iteratively to obtain the optimal prior locations and edge connections.

In other words, our method automatically extracts pairs of related patterns and learns the optimal spatial relationships between them during the iterative learning process, which best fit feature maps of training images.

Therefore, the objective function of learning the -th layer is given as

(1)

Let us focus on the feature map of a single image. Without ambiguity, we ignore the image superscript to simplify notations in the following paragraphs. We can regard the feature map as a distribution of “neural activation entities.” The neural response of each unit can be considered the number of “activation entities” located at the unit's position in the corresponding channel. (To make unit positions in different conv-layers comparable with each other, e.g. in Eq. 4, we project the position of each unit onto the image plane; coordinates are defined on the image plane rather than on the feature-map plane.) The number of activation entities at a location is defined as the normalized response value of the unit scaled by a constant.

Just like a Gaussian mixture model, all the patterns disentangled from a filter comprise a mixture model, which explains the distribution of activation entities on the corresponding channel of the feature map. Each node is treated as a hidden variable, i.e. an alternative component in the mixture model, that describes activation entities.

(2)

where the mixing weight of each component is a constant prior probability, and the component term measures the compatibility of using a node to describe an activation entity at a given location. In particular, we add a dummy component to the mixture model for noisy activations, which cannot be explained by any part patterns. The compatibility between a node and an activation entity is based on the spatial relationship between the node and its connected nodes in the upper layer, which is approximated as

(3)
(4)

In the above equations, each node has a set of related nodes in the upper layer; this set of node connections is determined during the learning process. The overall compatibility is decomposed into the spatial compatibility between the node and each of its related nodes, whose position inference results have already been given. The remaining terms are constants (one for normalization, one to roughly ensure a normalized form), both of which can be eliminated during the learning process.

As shown in Fig. 3, an intuitive idea is that the relative displacement between a pattern and its related upper-layer pattern should not change much across different images. Then, the observed displacement will approximate the prior displacement if the node fits the activation at that location well. We assume that the spatial relationship between the two patterns follows the Gaussian distribution in Eqn. 4, where the prior localization of the lower pattern is defined by the upper pattern's inferred position and the prior displacement. The variation of this Gaussian can be estimated from data: it can either be set directly using a value that can be derived analytically, or computed as the variation of the observed displacements over different images.

  Inputs: the feature map of the current conv-layer and the inference results of patterns in the upper conv-layer.
  Outputs: the parameters (prior locations and displacements) and edge connections of the nodes in the current layer.
  Initialization: the prior location of each node is initialized to a random value.
  for each outer iteration do
     For all images, compute the inference results of the nodes in the current layer.
     for each node do
        Update the node's parameters via an EM algorithm.
        Select patterns from the upper layer as the node's connected patterns, based on a greedy strategy that maximizes the objective in Eqn. (1).
     end for
  end for
Algorithm 1 Learning the sub-graph in a given conv-layer

We learn the explanatory graph in a top-down manner, and the learning process is summarized in Algorithm 1. We first learn nodes in the top-layer of , and then learn for the neighboring lower layer. For the sub-graph in the -th layer, we iteratively estimate and for nodes in the sub-graph.

Note that each pattern in the top conv-layer is simply connected to a single node in a dummy layer above the top conv-layer, with a fixed prior displacement. Based on Eqns. (3) and (4), we then obtain its compatibility term.
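The following toy sketch illustrates, in our own simplified notation (not the paper's), the EM-style alternation of Algorithm 1 for a single node with one related upper-layer pattern: the E-step picks, on every image, the activation peak whose displacement to the upper pattern best matches the current prior, and the M-step re-estimates the prior displacement from the selected peaks.

```python
import numpy as np

def learn_layer(peak_lists, upper_positions, prior_disp, n_iters=5, sigma=1.0):
    """Illustrative EM-style loop for one node with one related upper pattern.

    peak_lists: {image_id: [(x, y), ...]} candidate peaks for the node's filter
    upper_positions: {image_id: (x, y)} inferred position of the related upper node
    prior_disp: initial prior displacement (x, y)
    """
    prior = np.asarray(prior_disp, dtype=float)
    for _ in range(n_iters):
        chosen = []
        for img_id, peaks in peak_lists.items():                   # E-step
            disps = np.asarray(upper_positions[img_id]) - np.asarray(peaks)
            scores = np.exp(-np.sum((disps - prior) ** 2, axis=1) / (2 * sigma ** 2))
            chosen.append(disps[int(np.argmax(scores))])
        prior = np.mean(chosen, axis=0)                             # M-step
    return prior
```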

Inference of pattern locations: Given the feature maps of an input image, we can assign nodes in the explanatory graph to different activation peaks on the feature maps, in order to infer the semantic meanings (parts) represented by these neural activations. The explanatory graph simply assigns each node to the feature-map unit with the highest assignment score, and the position of that unit is taken as the inferred location of the node. In particular, this assignment score is the term used in Eqn. (1).
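A minimal sketch of this inference step, assuming the assignment score can be approximated by a unit's response weighted by its spatial compatibility with the node's related upper-layer patterns (the exact score follows Eqns. (2)-(4); the names below are illustrative):

```python
import numpy as np

def infer_node_position(channel, upper_positions, prior_disps, sigma=1.0):
    """Assign a node to the feature-map unit with the highest approximate
    assignment score and return that unit's position and score."""
    h, w = channel.shape
    best_score, best_pos = -np.inf, None
    for r in range(h):
        for c in range(w):
            compat = 1.0
            for upper_pos, prior in zip(upper_positions, prior_disps):
                diff = (np.asarray(upper_pos) - np.asarray([r, c])) - np.asarray(prior)
                compat *= np.exp(-np.dot(diff, diff) / (2 * sigma ** 2))
            score = channel[r, c] * compat
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```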

4 Experiments

To demonstrate the broad applicability of our method, we learned explanatory graphs to interpret four types of CNNs, i.e. the VGG-16 [28], the 50-layer and 152-layer Residual Networks [9], and the encoder of the VAE-GAN [14]. These CNNs were learned using a total of 37 animal categories in three datasets: the ILSVRC 2013 DET Animal-Part dataset [41], the CUB200-2011 dataset [34], and the VOC Part dataset [4]. As discussed in [4, 41], animals usually contain non-rigid parts, which presents a key challenge for part localization. Thus, we selected the animal categories in the three datasets for testing.

We designed three experiments to evaluate the explanatory graph from different perspectives. In the first experiment, we visualized node patterns in the explanatory graph. The second experiment was designed to evaluate the interpretability of part patterns, i.e. checking whether or not a node pattern consistently represents the same object part among different images. We compared our patterns with three types of middle-level features and neural patterns. In the third experiment, we used our graph nodes for the task of few-shot part localization, in order to test the transferability of node patterns in the graph. We associated part patterns with explicit part names for part localization. We compared our part-localization performance with fourteen baselines.

4.1 Implementation details

Fig. 4: A four-layer explanatory graph. For clarity, we put all nodes of different filters in the same conv-layer on the same plane and only show 1% of the nodes with 10% of their edges, from two perspectives.
Fig. 5: Image patches corresponding to different nodes in explanatory graphs.
Fig. 6: Heatmaps of patterns. We use a heatmap to visualize the spatial distribution of the top-50% patterns with the highest inference scores in a given layer of the explanatory graph. We also compare our heatmaps with the grad-CAM [24] of the feature map. Unlike the grad-CAM, our heatmaps mainly focus on the foreground of the object and pay attention to all parts uniformly, rather than focusing only on the most discriminative parts.

We first trained/fine-tuned a CNN using object images of a category, which were cropped using object bounding boxes. Then, we set parameters , , , and to learn an explanatory graph for the CNN.
VGG-16:

The VGG-16 was first pre-trained using the 1.3M images in the ImageNet dataset [22]. We then fine-tuned all conv-layers of the VGG-16 using object images in a category. The loss for fine-tuning was for binary classification between the target category and background images. The VGG-16 has thirteen conv-layers and three fully connected layers. We selected the ninth, tenth, twelfth, and thirteenth conv-layers of the VGG-16 as four valid conv-layers, and accordingly, we built a four-layer graph. We extracted patterns from each filter of each valid conv-layer.
Residual Networks: Two residual networks, i.e. the 50-layer and 152-layer ones, were used in experiments. The fine-tuning process for each network was exactly the same as that for VGG-16. We built a three-layer graph based on each residual network by selecting the last conv-layer with a feature output, the last conv-layer with a feature map, and the last conv-layer with a feature map as valid conv-layers. We set , , and .
VAE-GAN: For each category, we used the cropped object images to train a VAE-GAN. We learned a three-layer graph based on all three conv-layers of the encoder of the VAE-GAN. We set , , and .

4.2 Experiment 1: pattern visualization

The global structure of an explanatory graph for a VGG-16 network is visualized in Fig. 4. We visualized detailed part patterns of graph nodes from the following three perspectives.

Fig. 7: Image synthesis results based on patterns activated on an image. The explanatory graph only encodes the major part patterns hidden in conv-layers, rather than compressing a CNN without information loss. The synthesis results demonstrate that the patterns are automatically learned to represent foreground appearance and to ignore background noise and trivial details of objects.

Top-ranked patches: For each image, we performed pattern inference on its feature maps. For a given node, we extracted a patch at its inferred position (the assigned feature-map unit was projected onto the image to compute this position) with a fixed scale to represent the node. Fig. 5 shows the image patches of a pattern with the highest inference scores.
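A small sketch of this visualization step, with an illustrative patch scale (the paper's exact scale is not reproduced here): project the assigned feature-map unit onto the image plane and crop a square patch around it.

```python
import numpy as np

def crop_patch(image, unit_pos, feature_map_size, patch_scale=0.3):
    """Crop a square patch around the image-plane projection of a feature-map
    unit. `patch_scale` is an assumed relative size, not the paper's setting."""
    h, w = image.shape[:2]
    fy, fx = unit_pos                                   # (row, col) on the feature map
    cy = int((fy + 0.5) / feature_map_size[0] * h)      # project to image plane
    cx = int((fx + 0.5) / feature_map_size[1] * w)
    half = int(0.5 * patch_scale * max(h, w))
    return image[max(0, cy - half):cy + half, max(0, cx - half):cx + half]
```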

Heatmaps of patterns: Given the inference results of patterns on a cropped object image, we drew heatmaps to show the spatial distribution of the inferred patterns, one heatmap per graph layer. Each pattern was visualized as a Gaussian distribution on the heatmap, centered at its inferred position and weighted by its inference score. Fig. 6 shows heatmaps of the top-50% patterns with the highest inference scores.
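A minimal sketch of the heatmap rendering, assuming each inferred pattern contributes an isotropic Gaussian weighted by its inference score (the bandwidth below is an illustrative choice, not the paper's):

```python
import numpy as np

def pattern_heatmap(image_size, pattern_positions, pattern_scores, sigma=10.0):
    """Accumulate a score-weighted Gaussian per inferred pattern."""
    h, w = image_size
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for (cy, cx), score in zip(pattern_positions, pattern_scores):
        heat += score * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return heat
```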

Pattern-based image synthesis: We used the up-convolutional network [5] to visualize the part patterns of graph nodes. Given an object image, we used the explanatory graph for pattern inference, i.e. assigning each pattern to a certain neural unit as its position inference. We considered the top-10% patterns with the highest inference scores as valid ones. We filtered out all neural responses of units that were not assigned to valid patterns from the feature maps (setting these responses to zero). We selected the filtered feature map corresponding to the second graph layer and used the up-convolutional network to invert the filtered feature map back to the input image. Fig. 7 shows the image-synthesis results, which can be regarded as a visualization of the inferred patterns.

Fig. 8: Purity of part semantics (top-left table). We compared the patterns corresponding to nodes in the explanatory graph with the patterns of raw filters. We show the raw feature maps of filters (left), the highest activation peaks on the feature maps of filters (middle), and the image regions corresponding to each node in the explanatory graph (right). Based on such visualization results, human users annotated the semantic purity of each node/filter.

4.3 Experiment 2: semantic interpretability of patterns

In this experiment, we evaluated whether or not each node pattern consistently represented the same object part through different images. Four explanatory graphs were built for a VGG-16 network, two residual networks, and a VAE-GAN. These networks were learned using the CUB200-2011 dataset [34]. We used the following two metrics to measure the interpretability of node patterns.

Part interpretability of patterns: We mainly extracted patterns from high conv-layers because, as discussed in [3], high conv-layers contain large-scale part patterns. The evaluation metric was inspired by Zhou et al. [48]. For the pattern of a given node, we used the node to make inferences on all images. We regarded the inference results with the highest inference scores among all images as valid representations of the pattern, where the number of valid results was chosen so that their scores accounted for about 30% of the total inference energy. We then asked human raters to count how many of these top-ranked inference results described the same object part, in order to compute the purity of part semantics of the pattern.
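The sketch below shows one way to choose the number of valid inference results under the 30%-of-inference-energy criterion described above; the exact selection rule in the paper may differ in detail.

```python
import numpy as np

def top_k_by_energy(scores, energy_ratio=0.3):
    """Return the number of highest (non-negative) inference scores whose sum
    covers roughly `energy_ratio` of the total inference energy."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    cum = np.cumsum(s)
    k = int(np.searchsorted(cum, energy_ratio * cum[-1])) + 1
    return k
```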

The table in Fig. 8 (top-left) shows the semantic purity of the patterns in the second layer of the graph. Let the second graph layer correspond to a certain conv-layer of the CNN. The raw filter maps baseline used all neural activations in the feature map of a filter to describe a part. The raw filter peaks baseline considered the highest peak on a filter's feature map as the part detection. Like our method, the two baselines also visualized the top-ranked part inferences (the feature maps' neural activations took 30% of the activation energy over all images). We back-propagated the center of the receptive field of each neural activation to the image plane and drew the image region corresponding to each neural activation. Fig. 8 compares the image regions corresponding to each graph node with the image regions corresponding to the feature maps of each filter. Our graph nodes represented explicit object parts, but raw filters encoded mixed semantics.

Fig. 9: Notation for the computation of location instability.

Because the baselines simply averaged the semantic purity over all filters, we also computed the average semantic purity using the top-ranked nodes with the highest inference scores, in order to enable a fair comparison.

ResNet-50 ResNet-152 VGG-16 VAE-GAN
Raw filter [48] 0.1328 0.1346 0.1398 0.1944
Ours 0.0848 0.0858 0.0638 0.1066
[29] 0.1341
[26] 0.2291
TABLE I: Location instability of patterns.
Fig. 10: Schematic illustration of an And-Or graph for semantic object parts. The AOG encodes a four-layer hierarchy for each semantic part, i.e. the semantic part (OR node), part templates (AND node), latent part patterns (OR nodes, those from the explanatory graph), and neural units (terminal nodes). In the AOG, the OR node of semantic part contains a number of alternative appearance candidates as children. Each OR node of a latent part pattern encodes a list of neural units as alternative deformation candidates. Each AND node (e.g. a part template) uses a number of latent part patterns to describe its compositional regions.

Location instability of inference positions: We defined the location instability of each pattern as another evaluation metric of pattern interpretability. We assumed that if a pattern is always triggered by the same object part across different images, then the distance between the pattern's inferred position and a ground-truth landmark of the object part should not change much across images.

As shown in Fig. 9, given a testing image, we measured the distances between the inferred position of the pattern and the ground-truth landmark positions of the head, back, and tail parts, respectively. These distances were normalized by the diagonal length of the input image. The node's location instability was then computed from the variation of these distances over different images.
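A sketch of the location-instability computation as we read it: normalized distances to each landmark are collected over images, and the per-landmark deviations are averaged (the exact aggregation follows the paper's definition; names below are illustrative).

```python
import numpy as np

def location_instability(inferred_positions, landmarks, image_diagonals):
    """inferred_positions: {image_id: (x, y)}
    landmarks: {image_id: {'head': (x, y), 'back': (x, y), 'tail': (x, y)}}
    image_diagonals: {image_id: float}"""
    names = next(iter(landmarks.values())).keys()
    deviations = []
    for name in names:
        dists = [
            np.linalg.norm(np.asarray(inferred_positions[i]) - np.asarray(landmarks[i][name]))
            / image_diagonals[i]
            for i in inferred_positions
        ]
        deviations.append(np.std(dists))   # variation of the normalized distance
    return float(np.mean(deviations))
```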

We compared the location instability of the explanatory graph with three baselines. The first baseline treated each filter in a CNN as a detector of a certain pattern; thus, given the feature map of a filter (after the ReLU operation), we used the method of [48] to localize the unit with the highest response value as the pattern position. The other two baselines were typical methods of extracting middle-level features from images [29] and extracting patterns from CNNs [26], respectively. For each method, we chose the top-500 patterns, i.e. the 500 nodes with the top scores in the explanatory graph, the 500 filters with the strongest activations in the CNN, and the top-500 middle-level features. For each pattern, we selected the position inferences on the top-20 images with the highest scores to compute the location instability. Table I compares the location instability of the different methods. Nodes in the explanatory graph had significantly lower location instability than the patterns of the baselines.

4.4 Experiment 3: few-shot part localization

Method  obj.-box fine-tune  Normalized distance
SS-DPM-Part [2] N 0.3469
PL-DPM-Part [16] N 0.3412
Part-Graph [4] N 0.4889
CNN-PDD [26] N 0.2333
CNN-PDD-ft [26] Y 0.3269
Ours Y 0.0862
fc7+linearSVM Y 0.3120
fc7+sp+linearSVM Y 0.3120
Fast-RCNN (1 ft) [8] N 0.4517
Fast-RCNN (2 fts) [8] Y 0.4131
TABLE II: Normalized distance of part localization on the CUB200-2011 dataset [34]. The second column indicates whether the baseline used all object-box annotations in the category to fine-tune a CNN.
obj.-box fine-tune bird cat cow dog horse sheep Avg.
SS-DPM-Part [2] N 0.356 0.270 0.264 0.242 0.262 0.286 0.280
PL-DPM-Part [16] N 0.294 0.328 0.282 0.312 0.321 0.840 0.396
Part-Graph [4] N 0.360 0.208 0.263 0.205 0.386 0.500 0.320
CNN-PDD [26] N 0.301 0.246 0.220 0.248 0.292 0.254 0.260
CNN-PDD-ft [26] Y 0.358 0.268 0.220 0.200 0.302 0.269 0.269
Ours Y 0.162 0.130 0.258 0.137 0.181 0.192 0.177
fc7+linearSVM Y 0.247 0.174 0.251 0.217 0.261 0.317 0.244
fc7+sp+linearSVM Y 0.247 0.174 0.249 0.217 0.261 0.317 0.244
Fast-RCNN (1 ft) [8] N 0.324 0.324 0.325 0.272 0.347 0.314 0.318
Fast-RCNN (2 fts) [8] Y 0.350 0.295 0.255 0.293 0.367 0.260 0.303
TABLE III: Normalized distance of part localization on the VOC Part dataset [4]. The second column indicates whether the baseline used all object-box annotations in the category to fine-tune a CNN.
obj.-box fine-tune bird cat cow dog horse sheep Avg.
SS-DPM-Part [2] N 0.0 1.3 1.6 1.9 1.1 3.3 1.5
PL-DPM-Part [16] N 0.5 1.1 4.4 0.4 0.0 0.0 1.1
Part-Graph [4] N 2.9 22.6 12.1 11.0 3.2 0.0 8.6
Ours Y 20.2 34.9 8.2 33.8 10.0 14.5 20.3
fc7+linearSVM Y 8.0 27.6 7.1 10.4 16.1 6.2 12.6
fc7+sp+linearSVM Y 8.0 27.6 7.1 10.4 16.1 6.2 12.6
fc7+RBF-SVM Y 5.3 26.0 7.7 8.9 14.7 8.3 11.8
fc7+sp+RBF-SVM Y 5.0 26.3 7.1 8.8 15.1 8.7 11.8
fc7+NN Y 1.9 21.0 3.8 4.7 3.6 5.0 6.7
fc7+sp+NN Y 1.9 21.0 3.8 4.7 3.6 5.0 6.7
Fast-RCNN (1 ft) [8] N 2.1 2.2 2.2 1.9 1.4 7.0 2.8
Fast-RCNN (2 fts) [8] Y 7.7 24.0 18.7 18.0 5.0 19.4 15.5
TABLE IV: Accuracy of part localization evaluated by “” on the Pascal VOC Part dataset [4]. The second column indicates whether the baseline used all object annotations in the category to pre-finetune a CNN before learning the part.
obj.-box fine-tune gold. bird frog turt. liza. koala lobs. dog fox cat lion
SS-DPM-Part N 0.297 0.280 0.257 0.255 0.317 0.222 0.207 0.239 0.305 0.308 0.238
PL-DPM-Part N 0.273 0.256 0.271 0.321 0.327 0.242 0.194 0.238 0.619 0.215 0.239
Part-Graph N 0.363 0.316 0.241 0.322 0.419 0.205 0.218 0.218 0.343 0.242 0.162
CNN-PDD N 0.316 0.289 0.229 0.260 0.335 0.163 0.190 0.220 0.212 0.196 0.174
CNN-PDD-ft Y 0.302 0.236 0.261 0.231 0.350 0.168 0.170 0.177 0.264 0.270 0.206
Ours Y 0.090 0.091 0.095 0.167 0.124 0.084 0.155 0.147 0.081 0.129 0.074
fc7+linearSVM Y 0.150 0.318 0.186 0.150 0.257 0.156 0.196 0.136 0.101 0.138 0.132
fc7+sp+linearSVM Y 0.150 0.318 0.186 0.150 0.254 0.156 0.196 0.136 0.101 0.138 0.132
Fast-RCNN (1 ft) N 0.261 0.365 0.265 0.310 0.353 0.365 0.289 0.363 0.255 0.319 0.251
Fast-RCNN (2 fts) Y 0.340 0.351 0.388 0.327 0.411 0.119 0.330 0.368 0.206 0.170 0.144
tiger bear rabb. hams. squi. horse zebra swine hippo catt. sheep
SS-DPM-Part N 0.144 0.260 0.272 0.178 0.261 0.246 0.206 0.240 0.234 0.246 0.205
PL-DPM-Part N 0.136 0.323 0.228 0.186 0.281 0.322 0.267 0.297 0.273 0.271 0.413
Part-Graph N 0.127 0.224 0.188 0.131 0.208 0.296 0.315 0.306 0.378 0.333 0.230
CNN-PDD N 0.160 0.223 0.266 0.156 0.291 0.261 0.266 0.189 0.192 0.201 0.244
CNN-PDD-ft Y 0.256 0.178 0.167 0.286 0.237 0.310 0.321 0.216 0.257 0.220 0.179
Ours Y 0.102 0.121 0.087 0.097 0.095 0.189 0.212 0.212 0.151 0.185 0.124
fc7+linearSVM Y 0.163 0.122 0.139 0.110 0.262 0.205 0.258 0.201 0.140 0.256 0.236
fc7+sp+linearSVM Y 0.163 0.122 0.139 0.110 0.262 0.205 0.258 0.201 0.140 0.256 0.236
Fast-RCNN (1 ft) N 0.260 0.317 0.255 0.255 0.169 0.374 0.322 0.285 0.265 0.320 0.277
Fast-RCNN (2 fts) Y 0.160 0.230 0.230 0.178 0.205 0.346 0.303 0.212 0.223 0.228 0.195
ante. camel otter arma. monk. elep. red pa. gia.pa. Avg.
SS-DPM-Part N 0.224 0.277 0.253 0.283 0.206 0.219 0.256 0.129 0.242
PL-DPM-Part N 0.337 0.261 0.286 0.295 0.187 0.264 0.204 0.505 0.284
Part-Graph N 0.216 0.317 0.227 0.341 0.159 0.294 0.276 0.094 0.257
CNN-PDD N 0.208 0.193 0.174 0.299 0.236 0.214 0.222 0.179 0.225
CNN-PDD-ft Y 0.229 0.253 0.198 0.308 0.273 0.189 0.208 0.275 0.240
Ours Y 0.093 0.120 0.102 0.188 0.086 0.174 0.104 0.073 0.125
fc7+linearSVM Y 0.164 0.190 0.140 0.252 0.256 0.176 0.215 0.116 0.184
fc7+sp+linearSVM Y 0.164 0.190 0.140 0.250 0.256 0.176 0.215 0.116 0.184
Fast-RCNN (1 ft) N 0.255 0.351 0.340 0.324 0.334 0.256 0.336 0.274 0.299
Fast-RCNN (2 fts) Y 0.175 0.247 0.280 0.319 0.193 0.125 0.213 0.160 0.246
TABLE V: Normalized distance of part localization on the ILSVRC 2013 DET Animal-Part dataset [41]. The second column indicates whether the baseline used all object-box annotations in the category to fine-tune a CNN.
obj.-box finetune gold. bird frog turt. liza. koala lobs. dog fox cat lion tiger bear rabb. hams. squi.
SS-DPM-Part [2] N 1.5 0.0 1.2 2.6 0.7 8.8 1.4 5.2 0.0 10.9 13.4 20.4 7.0 0.5 6.5 0.5
PL-DPM-Part [16] N 0.0 1.0 0.0 0.6 0.0 3.3 0.0 3.3 0.0 23.8 8.8 3.6 0.0 1.6 22.3 0.0
Part-Graph [4] N 2.0 5.5 5.9 6.5 7.4 12.1 3.5 9.0 1.9 18.7 40.7 56.1 15.0 27.3 37.7 21.4
fc7+linearSVM Y 20.0 2.0 13.5 20.8 7.4 30.2 1.4 27.5 55.9 39.4 43.3 27.0 46.5 44.3 60.5 8.8
fc7+RBF-SVM Y 4.5 0.0 2.4 24.7 5.9 34.0 0.7 15.6 29.9 42.5 53.1 39.3 19.0 44.8 41.4 0.9
fc7+NN Y 1.0 0.0 1.2 7.1 2.2 28.4 1.4 5.2 19.4 20.2 52.1 39.8 5.0 17.5 32.6 0.5
fc7+sp+linearSVM Y 20.0 2.0 13.5 20.8 7.4 30.2 1.4 27.5 55.9 39.4 43.3 27.0 46.5 44.3 60.5 8.8
fc7+sp+RBF-SVM Y 4.5 0.0 1.8 24.7 4.4 34.4 0.7 14.7 29.9 41.5 53.1 38.8 19.0 44.3 41.9 0.9
fc7+sp+NN Y 1.0 0.0 1.2 7.1 2.2 28.4 1.4 5.2 19.4 20.2 52.1 39.8 5.0 17.5 32.6 0.5
Fast-RCNN (1 ft) [8] N 5.0 0.5 1.8 2.6 3.7 3.3 0 0.5 28.9 11.4 22.2 11.7 2.5 20.2 27.9 36.3
Fast-RCNN (2 fts) [8] Y 4.5 5.0 2.4 4.5 2.2 68.8 1.4 9.0 46.0 50.8 61.3 65.8 29.0 30.1 56.3 40.9
Ours Y 33.0 40.3 48.8 18.2 21.4 61.9 3.5 30.3 62.1 26.4 61.9 49.5 36.0 65.6 64.7 25.6
horse zebra swine hippo catt. sheep ante. camel otter arma. monk. elep. red pa. gia.pa. Avg.
SS-DPM-Part [2] N 9.5 1.1 0.6 1.1 7.0 14.7 12.4 0.9 0.5 4.5 12.4 11.8 2.2 49.1 7.0
PL-DPM-Part [16] N 5.8 0.0 0.6 0.5 0.5 0.0 0.0 0.0 0.0 0.0 9.1 2.6 28.1 0.0 3.9
Part-Graph [4] N 10.0 13.0 4.9 4.3 7.0 19.0 23.0 5.6 18.2 6.6 18.3 2.6 16.2 58.6 15.9
fc7+linearSVM Y 16.3 10.7 22.0 31.9 4.9 20.2 26.3 23.7 35.3 11.6 12.4 36.8 22.8 48.6 25.7
fc7+RBF-SVM Y 7.9 27.1 7.3 14.4 2.7 14.1 25.3 16.3 37.4 13.6 10.8 22.4 26.8 54.5 21.3
fc7+NN Y 2.1 22.6 1.2 1.1 2.2 6.1 2.3 8.8 40.6 10.6 7.0 5.3 21.1 55.9 14.0
fc7+sp+linearSVM Y 16.3 10.7 22.0 31.9 4.9 20.2 26.3 23.7 35.3 12.1 12.4 36.8 22.4 48.6 25.7
fc7+sp+RBF-SVM Y 7.9 27.1 7.3 14.4 2.7 14.1 19.4 16.3 37.4 13.6 9.1 22.4 27.6 55.0 21.0
fc7+sp+NN Y 2.1 22.6 1.2 1.1 2.2 6.1 2.3 8.8 40.6 10.6 7.0 5.3 21.1 55.9 14.0
Fast-RCNN (1 ft) [8] N 3.2 6.8 11.0 11.2 1.6 7.4 23.0 1.9 2.1 2.5 3.8 11.8 14.5 19.5 10.0
Fast-RCNN (2 fts) [8] Y 6.3 15.3 39.0 34.6 36.2 43.6 46.5 20.5 26.7 13.1 36.6 56.6 47.8 57.3 31.9
Ours Y 37.9 35.6 15.2 41.0 27.6 39.9 53.5 15.8 20.9 28.3 55.4 32.9 51.8 67.3 39.1
TABLE VI: Accuracy of part localization evaluated by “” on the ILSVRC 2013 DET Animal-Part dataset [41]. The second column indicates whether the baseline used all object annotations in the category to pre-finetune a CNN before learning the part.

4.4.1 Hybrid And-Or graph for semantic parts

The explanatory graph makes it plausible to transfer middle-layer patterns from CNNs to semantic object parts. In order to test the transferability of the patterns in the explanatory graph, we introduce a further extension of the explanatory graph, i.e. a hybrid And-Or graph (AOG) that associates part patterns in the explanatory graph with explicit part names. The structure of the AOG is inspired by [45], and the learning of the AOG was originally proposed in [41]. We briefly introduce the basic inference logic and settings of the AOG as follows.

As shown in Fig. 10, the AOG encodes a four-layer hierarchy for each semantic part, i.e. the semantic part (OR node), part templates (AND node), latent patterns (OR nodes, those from the explanatory graph), and neural units (terminal nodes).

Layer Name Node type Notation
1 semantic part OR node
2 part template AND node
3 latent pattern OR node
4 neural unit Terminal node

where latent patterns correspond to nodes from the explanatory graph.

In the AOG, each OR node (e.g. a semantic part or a latent pattern) contains a list of alternative appearance (or deformation) candidates. Each AND node (e.g. a part template) uses a number of latent patterns to describe its compositional regions.

  • The OR node of a semantic part contains a total of part templates to represent alternative appearance or pose candidates of the part.

  • Each part template (AND node) retrieves patterns from the explanatory graph as children. These patterns describe the compositional regions of the part.

  • Each latent pattern (OR node) has all units in its corresponding filter's feature map as children, which represent its deformation candidates on a given image.

Technical details: Based on the AOG, we use the extracted patterns to infer semantic parts in a bottom-up manner. We first compute inference scores of different units at the bottom layer w.r.t. different patterns, and then we propagate inference scores up to the layers of part templates and the semantic part for part localization.

The top OR node of the semantic part contains a number of part templates to represent alternative appearance or pose candidates of the part. We manually define the composition of the part templates. During the part-inference process, given an image, the semantic part selects its best child as the true part template:

(5)

where denotes the inference score of .

Then, each part template uses a number of latent patterns to describe sub-regions of the part. In the scenario of one-shot learning, we only annotate one part sample belonging to the part template. Then, we retrieve patterns related to the annotated part from all nodes in the explanatory graph. Given the inference score and inferred position of each latent pattern on the annotated image, we retrieve the latent patterns with the highest scores, weighted by their spatial closeness to the annotated part position (up to a constant variation), as children of the part template.

When we have extracted a set of latent patterns for a part template, given a new image, we can use inference results of the latent patterns to localize the part template:

(6)

where a constant displacement relates the inferred position of each latent pattern to the position of the part template.

Each latent pattern has the units of a feature-map channel as children, which represent its deformation candidates on the image. The score of each unit follows the pattern-inference formulation described above, and the OR node of the latent pattern selects the unit with the maximum score as its deformation configuration:

(7)

Please see [41] for details of the AOG.
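To make the bottom-up inference concrete, here is a toy sketch in our own notation (not the AOG implementation of [41]): each latent pattern picks its best unit, each part template aggregates its patterns' votes for a part position, and the semantic part keeps the highest-scoring template.

```python
import numpy as np

def localize_part(feature_maps, templates):
    """feature_maps: {filter_id: 2-D response map}
    templates: list of part templates, each a list of
               (filter_id, displacement_to_part_center, weight) tuples."""
    best_score, best_pos = -np.inf, None
    for template in templates:
        votes, score = [], 0.0
        for filter_id, disp, weight in template:
            fmap = feature_maps[filter_id]
            unit = np.unravel_index(np.argmax(fmap), fmap.shape)   # best deformation unit
            votes.append(np.asarray(unit) + np.asarray(disp))      # vote for part center
            score += weight * float(fmap[unit])
        pos = np.mean(votes, axis=0)
        if score > best_score:                                      # best part template
            best_score, best_pos = score, tuple(pos)
    return best_pos, best_score
```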

4.4.2 Experimental settings of three-shot learning

Given a fine-tuned VGG-16 network, we learned an explanatory graph and built the AOG upon it, following the few-shot learning scenario in [41]. For each category, we set three templates for the head part and used three part-box annotations for the three templates. Note that we used object images without part annotations to learn the explanatory graph, and we used the three part annotations provided by [41] for each part to build the AOG. All of these object-box annotations and part annotations were equally provided to all baselines to enable fair comparisons (besides part annotations, all baselines also used the object annotations contained in the datasets for learning). We set the parameters to learn AOGs for categories in the ILSVRC Animal-Part and CUB200 datasets and for VOC Part categories. Then, we used the AOGs to localize semantic parts on objects.

Baselines: We compared the AOGs with a total of fourteen baselines for part localization. The baselines included (i) approaches for object detection (i.e. directly detecting target parts from objects), (ii) graphical/part models for part localization, and (iii) methods that select CNN patterns to describe object parts.

The first baseline was the standard Fast-RCNN [8], namely Fast-RCNN (1 ft), which directly fine-tuned a VGG-16 network based on part annotations. The second baseline, namely Fast-RCNN (2 fts), first used massive object-box annotations in the target category to fine-tune the VGG-16 network with an object-detection loss, and then, given part annotations, further fine-tuned the VGG-16 to detect object parts. We used [26] as the third baseline, namely CNN-PDD. CNN-PDD selected certain filters of a CNN to localize the target part; the CNN was pre-trained using the ImageNet dataset [22]. Just like Fast-RCNN (2 fts), we extended [26] as the fourth baseline, CNN-PDD-ft, which fine-tuned a VGG-16 network using object-box annotations before applying the technique of [26]. The fifth and sixth baselines were DPM-related methods, i.e. the strongly supervised DPM (SS-DPM-Part) [2] and the technique in [16] (PL-DPM-Part), respectively. The seventh baseline, namely Part-Graph, used a graphical model for part localization [4]. For weakly supervised learning, “simple” methods are usually insensitive to model over-fitting. Thus, we designed six further baselines as follows. First, we used object-box annotations in a category to fine-tune the VGG-16 network. Then, given a few well-cropped object images, we used selective search [32] to collect image patches and used the VGG-16 network to extract fc7 features from these patches. The baselines fc7+linearSVM, fc7+RBF-SVM, and fc7+NN used a linear SVM, an RBF-SVM, and the nearest-neighbor method (selecting the patch closest to the annotated part), respectively, to detect the target part. The other three baselines, fc7+sp+linearSVM, fc7+sp+RBF-SVM, and fc7+sp+NN, combined both the fc7 feature and the spatial position of each image patch as features for part detection. The last competing method was the weakly supervised mining of part patterns from CNNs [41], namely supervised-AOG. Unlike our method (which is unsupervised), supervised-AOG used part annotations to extract part patterns.

Dataset              ILSVRC DET Animal-Part   VOC Part   CUB200-2011
Supervised-AOG                 0.1344           0.1767      0.0915
Ours (unsupervised)            0.1250           0.1765      0.0862
TABLE VII: Normalized distance of part localization. We compared supervised and unsupervised mining of part patterns.

Comparisons: We divided all baselines into three groups. The first group, namely not-learn parts, included traditional methods that do not use deep features, such as SS-DPM-Part, PL-DPM-Part, and Part-Graph. These methods did not learn deep features. (Representation learning in these methods only used object-box annotations, which is independent of part annotations; a few part annotations were used only to select off-the-shelf pre-trained features.) The second group, termed super-learn parts, contained Fast-RCNN (1 ft), Fast-RCNN (2 fts), CNN-PDD, CNN-PDD-ft, supervised-AOG, fc7+linearSVM, and fc7+sp+linearSVM. These methods learned deep features using part annotations, e.g. the Fast-RCNN methods used part annotations to learn features, and supervised-AOG used part annotations to select filters from CNNs to localize parts. The third group (unsuper-learn parts) included CNN-PDD, CNN-PDD-ft, and our method. These methods learned deep features using object-level annotations, rather than part annotations.

Fig. 11: Localization results based on AOGs that are learned using three annotations of the head part.

Fig. 11 visualizes the localization results based on AOGs, which were learned using three annotations of the head part of each category. We used the normalized distance (used in [41, 26]) and the traditional intersection-over-union (IoU) criterion to evaluate the localization performance. Tables II, III, IV, V, and VI show part-localization results on the CUB200-2011 dataset [34], the VOC Part dataset [4], and the ILSVRC 2013 DET Animal-Part dataset [41]. AOGs based on our graph nodes outperformed all baselines in few-shot learning. Note that our AOGs simply localized the center of an object part without sophisticatedly modeling the scale of the part. Thus, detection-based methods, which also estimated the part scale, performed better in very few cases. Table VII compares the unsupervised and supervised learning of neural patterns. In this experiment, our method outperformed all baselines, even including approaches that learned part features using part annotations.
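For reference, a sketch of the normalized-distance metric as we understand it from the description above: the Euclidean distance between the predicted and ground-truth part centers, normalized by the diagonal length of the (cropped object) image; see [41, 26] for the exact protocol.

```python
import numpy as np

def normalized_distance(pred_center, gt_center, diagonal):
    """Assumed form of the localization metric; normalization by the image
    diagonal follows the description of the instability metric above."""
    return float(np.linalg.norm(np.asarray(pred_center) - np.asarray(gt_center)) / diagonal)
```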

5 Conclusion and discussions

In this paper, we have developed a simple yet effective method to learn an explanatory graph that reveals the knowledge hierarchy inside the conv-layers of a pre-trained CNN. The explanatory graph can be regarded as a concise and meaningful summarization of CNN knowledge in intermediate layers, which filters out noisy activations, disentangles part patterns from each filter, and models co-activation relationships and spatial relationships between part patterns. Experiments showed that our patterns had significantly higher stability than baselines. More crucially, our method can be applied to different types of networks, including the VGG-16, residual networks, and the VAE-GAN, to explain their conv-layers.

The transparent representation of the explanatory graph boosts the transferability of CNN features. Part-localization experiments well demonstrated the good transferability of CNN patterns in graph nodes. Our method even outperformed the supervised learning of part representations. Nevertheless, the explanatory graph is just a rough representation of CNN knowledge. It is still difficult to well disentangle textural patterns from filters of the CNN.

Acknowledgments

This work is supported by ONR MURI project N00014-16-1-2007, DARPA XAI Award N66001-17-2-4029, and NSF IIS 1423305.

References

  • [1] M. Aubry and B. C. Russell. Understanding deep features with computer-generated imagery. In ICCV, 2015.
  • [2] H. Azizpour and I. Laptev. Object detection using strongly-supervised deformable part models. In ECCV, 2012.
  • [3] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep visual representations. In CVPR, 2017.
  • [4] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, 2014.
  • [5] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In CVPR, 2016.
  • [6] R. C. Fong and A. Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In ICCV, 2017.
  • [7] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation in backpropagation. In ICML, 2015.
  • [8] R. Girshick. Fast r-cnn. In ICCV, 2015.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [10] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. P. Xing. Harnessing deep neural networks with logic rules. In ACL, 2016.
  • [11] P. Koh and P. Liang. Understanding black-box predictions via influence functions. In ICML, 2017.
  • [12] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [13] H. Lakkaraju, E. Kamar, R. Caruana, and E. Horvitz. Identifying unknown unknowns in the open world: Representations and policies for guided exploration. In AAAI, 2017.
  • [14] A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.
  • [15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 1998.
  • [16] B. Li, W. Hu, T. Wu, and S.-C. Zhu. Modeling occlusion by discriminative and-or structures. In ICCV, 2013.
  • [17] Y. Lu. Unsupervised learning on neural network outputs with application in zero-shot learning. In IJCAI, 2016.
  • [18] S. M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. In NIPS, 2017.
  • [19] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In CVPR, 2015.
  • [20] C. Olah, A. Mordvintsev, and L. Schubert. Feature visualization. Distill, 2017. https://distill.pub/2017/feature-visualization.
  • [21] M. T. Ribeiro, S. Singh, and C. Guestrin. “Why should I trust you?” Explaining the predictions of any classifier. In KDD, 2016.
  • [22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. In IJCV, 115(3):211–252, 2015.
  • [23] S. Sabour, N. Frosst, and G. E. Hinton. Dynamic routing between capsules. In NIPS, 2017.
  • [24] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, 2017.
  • [25] M. Simon and E. Rodner. Neural activation constellations: Unsupervised part model discovery with convolutional networks. In ICCV, 2015.
  • [26] M. Simon, E. Rodner, and J. Denzler. Part detector discovery in deep convolutional neural networks. In ACCV, 2014.
  • [27] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In arXiv:1312.6034, 2013.
  • [28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [29] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012.
  • [30] J. Su, D. V. Vargas, and S. Kouichi. One pixel attack for fooling deep neural networks. In arXiv:1710.08864, 2017.
  • [31] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
  • [32] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders. Selective search for object recognition. In IJCV, 104(2):154–171, 2013.
  • [33] J. Vaughan, A. Sudjianto, E. Brahimi, J. Chen, and V. N. Nair. Explainable neural networks based on additive index models. in arXiv:1806.01933, 2018.
  • [34] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The caltech-ucsd birds-200-2011 dataset. Technical Report CNS-TR-2011-001, In California Institute of Technology, 2011.
  • [35] T. Wu and S.-C. Zhu. A numerical study of the bottom-up and top-down inference processes in and-or graphs. International Journal of Computer Vision, 93(2):226–252, 2011.
  • [36] T.-F. Wu, G.-S. Xia, and S.-C. Zhu. Compositional boosting for computing hierarchical image structures. In CVPR, 2007.
  • [37] X. Yang, T. Wu, and S.-C. Zhu. Evaluating information contributions of bottom-up and top-down processes. ICCV, 2009.
  • [38] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
  • [39] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
  • [40] Q. Zhang, R. Cao, F. Shi, Y. Wu, and S.-C. Zhu. Interpreting cnn knowledge via an explanatory graph. In AAAI, 2018.
  • [41] Q. Zhang, R. Cao, Y. N. Wu, and S.-C. Zhu. Growing interpretable graphs on convnets via multi-shot learning. In AAAI, 2017.
  • [42] Q. Zhang, R. Cao, Y. N. Wu, and S.-C. Zhu. Mining object parts from cnns via active question-answering. In CVPR, 2017.
  • [43] Q. Zhang, R. Cao, S. Zhang, M. Edmonds, Y. N. Wu, and S.-C. Zhu. Interactively transferring cnn patterns for part localization. In arXiv:1708.01783, 2017.
  • [44] Q. Zhang, W. Wang, and S.-C. Zhu. Examining cnn representations with respect to dataset bias. In AAAI, 2018.
  • [45] Q. Zhang, Y. N. Wu, and S.-C. Zhu. A cost-sensitive visual question-answer framework for mining a deep and-or object semantics from web images. In arXiv:1708.03911, 2017.
  • [46] Q. Zhang, Y. N. Wu, and S.-C. Zhu. Interpretable convolutional neural networks. In CVPR, 2018.
  • [47] Q. Zhang, Y. Yang, Y. N. Wu, and S.-C. Zhu. Interpreting cnns via decision trees. In arXiv:1802.00121, 2018.
  • [48] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns. In ICLR, 2015.
  • [49] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.