Convolutional neural networks (CNNs) [15, 12, 7] have achieved superior performance in many visual tasks, such as object classification and detection. As discussed in Bau et al. [2], besides the discrimination power, model interpretability is another crucial issue for neural networks. However, interpretability has always been an Achilles' heel of CNNs, and it has presented considerable challenges for decades.
In this paper, we focus on a new problem, i.e. without any additional human supervision, can we modify a CNN to obtain interpretable knowledge representations in its conv-layers? We expect the CNN to have a certain introspection of its representations during the end-to-end learning process, so that the CNN can regularize its representations to ensure high interpretability. Our learning for high interpretability is different from the conventional off-line visualization [34, 17, 24, 4, 5, 21] and diagnosis [2, 10, 14, 18] of pre-trained CNN representations.
Bau et al. [2] defined six kinds of semantics in CNNs, i.e. objects, parts, scenes, textures, materials, and colors. In fact, we can roughly consider the first two semantics as object-part patterns with specific shapes, and summarize the last four semantics as texture patterns without clear contours. Moreover, filters in low conv-layers usually describe simple textures, whereas filters in high conv-layers are more likely to represent object parts.
Therefore, in this study, we aim to train each filter in a high conv-layer to represent an object part. Fig. 1 shows the difference between a traditional CNN and our interpretable CNN. In a traditional CNN, a high-layer filter may describe a mixture of patterns, i.e. the filter may be activated by both the head part and the leg part of a cat. Such complex representations in high conv-layers significantly decrease the network interpretability. In contrast, the filter in our interpretable CNN is activated by a certain part. In this way, we can explicitly identify which object parts are memorized in the CNN for classification without ambiguity. The goal of this study can be summarized as follows.
We propose to slightly revise a CNN to improve its interpretability; the revision can be broadly applied to CNNs with different structures.
We do not need any annotations of object parts or textures for supervision. Instead, our method automatically pushes the representation of each filter towards an object part.
The interpretable CNN does not change the loss function on the top layer and uses the same training samples as the original CNN.
As exploratory research, the design for interpretability may slightly decrease the discrimination power, but we hope to limit such a decrease within a small range.
Methods: Given a high conv-layer in a CNN, we propose a simple yet effective loss for each filter in the conv-layer to push the filter towards the representation of an object part. As shown in Fig. 2, we add a loss for the output feature map of each filter. The loss encourages a low entropy of inter-category activations and a low entropy of the spatial distribution of neural activations. I.e. each filter must encode a distinct object part that is exclusively contained by a single object category, and the filter must be activated by a single part of the object, rather than repetitively appearing on different object regions. For example, the left eye and the right eye may be represented using two different part filters, because the contexts of the two eyes are symmetric, but not the same. Here, we assume that shapes that repeat across various regions are more likely to describe low-level textures (e.g. colors and edges) than high-level parts.
The value of network interpretability: The clear semantics in high conv-layers is of great importance when we need human beings to trust a network’s prediction. In spite of the high accuracy of neural networks, human beings usually cannot fully trust a network, unless it can explain its logic for decisions, i.e. what patterns are memorized for prediction. Given an image, current studies for network diagnosis [5, 21, 18] localize image regions that contribute most to network predictions at the pixel level. In this study, we expect the CNN to explain its logic at the object-part level. Given an interpretable CNN, we can explicitly show the distribution of object parts that are memorized by the CNN for object classification.
Contributions: In this paper, we focus on a new task, i.e. end-to-end learning a CNN whose representations in high conv-layers are interpretable. We propose a simple yet effective method to modify different types of CNNs into interpretable CNNs without any additional annotations of object parts or textures for supervision. Experiments show that our approach has significantly improved the object-part interpretability of CNNs.
2 Related work
The interpretability and the discrimination power are two important properties of a model [2]. In recent years, different methods have been developed to explore the semantics hidden inside a CNN. Many statistical methods [28, 33, 1] have been proposed to analyze CNN features.
Network visualization: Visualization of filters in a CNN is the most direct way of exploring the patterns hidden inside a neural unit. [34, 17, 24] showed the appearance that maximized the score of a given unit. Up-convolutional nets [4] were used to invert CNN feature maps to images.
Some studies go beyond passive visualization and actively retrieve certain units from CNNs for different applications. Like the extraction of mid-level features [26] from images, pattern retrieval mainly learns mid-level representations from conv-layers. Zhou et al. [38, 39] selected units from feature maps to describe "scenes". Simon et al. discovered objects from feature maps of unlabeled images [22], and selected a certain filter to describe each semantic part in a supervised fashion [23]. [36] extracted certain neural units from a filter's feature map to describe an object part in a weakly-supervised manner. [6] used a gradient-based method to interpret visual question-answering models. Studies of [11, 31, 29, 16] selected neural units with specific meanings from CNNs for various applications.
Many methods have been developed to diagnose representations of a black-box model. The LIME method proposed by Ribeiro et al. [18], influence functions [10], and gradient-based visualization methods [5, 21] extracted image regions that were responsible for each network output, in order to interpret network representations. These methods require people to manually check the image regions accountable for the label prediction on each testing image. [9] extracted relationships between representations of various categories from a CNN. Lakkaraju et al. [14] and Zhang et al. [37] explored unknown knowledge of CNNs via active annotations and active question-answering. In contrast, given an interpretable CNN, people can directly identify the object parts (filters) that are used for decisions during the inference procedure.
Learning a better representation: Unlike the diagnosis and/or visualization of pre-trained CNNs, some approaches have been developed to learn more meaningful representations. [19] required people to label dimensions of the input that were related to each output, in order to learn a better model. Hu et al. [8] designed logic rules for network outputs, and used these rules to regularize the learning process. Stone et al. [27] learned CNN representations with better object compositionality, but they did not obtain explicit part-level or texture-level semantics. Sabour et al. [20] proposed a capsule model, which used a dynamic routing mechanism to parse the entire object into a parsing tree of capsules, where each capsule may encode a specific meaning. In this study, we invent a generic loss to regularize the representation of a filter to improve its interpretability. We can analyze the interpretable CNN from the perspective of the information bottleneck [32] as follows. 1) Our interpretable filters selectively model the most distinct parts of each category to minimize the conditional entropy of the final classification given the feature maps of a conv-layer. 2) Each filter represents a single part of an object, which minimizes the mutual information between the input image and middle-layer feature maps (i.e. "forgetting" as much irrelevant information as possible).
3 Algorithm

Given a target conv-layer of a CNN, we expect each filter in the conv-layer to be activated by a certain object part of a certain category, and to keep inactivated on images of other categories. Let $\mathbf{I}$ denote a set of training images, where $\mathbf{I}_c \subset \mathbf{I}$ represents the subset that belongs to category $c$ ($c = 1, 2, \ldots, C$). Theoretically, we can use different types of losses to learn CNNs for multi-class classification, single-class classification (i.e. $c = 1$ for images of a category and $c = 2$ for random images), and other tasks.
Fig. 2 shows the structure of our interpretable conv-layer. In the following paragraphs, we focus on the learning of a single filter $f$ in the target conv-layer. We add a loss to the feature map $x$ of filter $f$ after the ReLU operation. The feature map $x$ is an $n \times n$ matrix, $x_{ij} \geq 0$. Because $f$'s corresponding object part may appear at different locations in different images, we design $n^2$ templates for $f$, $\{T_{\mu_1}, T_{\mu_2}, \ldots, T_{\mu_{n^2}}\}$. As shown in Fig. 3, each template $T_{\mu_i}$ is also an $n \times n$ matrix, and it describes the ideal distribution of activations for the feature map $x$ when the target part mainly triggers the $i$-th unit in $x$.
During the forward propagation, given each input image $I$, the CNN selects a specific template $T_{\hat{\mu}}$ from the $n^2$ template candidates as a mask to filter out noisy activations from $x$. I.e. we compute $\hat{\mu} = \operatorname{argmax}_{[i,j]} x_{ij}$ and $x^{\textrm{masked}} = \max\{x \circ T_{\hat{\mu}}, 0\}$, where $\circ$ denotes the Hadamard (element-wise) product. $\hat{\mu} = [i, j]$, $1 \leq i, j \leq n$, denotes the unit (or location) in $x$ potentially corresponding to the part.
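The mask operation can be written in a few lines. Below is a minimal PyTorch sketch of the forward masking for a single filter; the tensor layout and function name are our own illustration, not a released implementation:

```python
import torch

def masked_output(x: torch.Tensor, templates: torch.Tensor) -> torch.Tensor:
    """Forward masking of one filter's feature map (a sketch).

    x:         (n, n) post-ReLU feature map of a single filter.
    templates: (n*n, n, n) positive templates; templates[k] is T_mu for the
               k-th unit (built as in Fig. 3).
    """
    mu = torch.argmax(x.flatten())                # \hat{mu}: most activated unit
    return torch.clamp(x * templates[mu], min=0)  # max{x o T_mu, 0}
```

The template selection via argmax is not differentiated through; gradients flow through the element-wise product, which is what makes the mask operation compatible with back-propagation.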
The mask operation supports the gradient back-propagation for end-to-end learning. Note that the CNN may select different templates for different input images. Fig. 4 visualizes the masks chosen for different images, as well as the original and masked feature maps.
During the back-propagation process, our loss pushes filter $f$ to represent a specific object part of the category $c$ and keep silent on images of other categories. Please see Section 3.1 for the determination of the category $c$ for filter $f$. Let $\mathbf{X} = \{x \,|\, x = f(I), I \in \mathbf{I}\}$ denote the feature maps of $f$ after the ReLU operation, which are computed on different training images. Given an input image $I$, if $I \in \mathbf{I}_c$, we expect the feature map $x$ to be exclusively activated at the target part's location; otherwise, the feature map remains inactivated. In other words, if $I \in \mathbf{I}_c$, the feature map $x$ is expected to match the assigned template $T_{\hat{\mu}}$; if $I \notin \mathbf{I}_c$, we design a negative template $T^-$ and hope the feature map matches $T^-$. Note that during the forward propagation, our method omits the negative template, and all feature maps, including those of other categories, select positive templates as masks.
Thus, each feature map $x$ is supposed to be well fit to one of the $n^2 + 1$ template candidates $\mathbf{T} = \{T^-, T_{\mu_1}, \ldots, T_{\mu_{n^2}}\}$. We formulate the loss for $f$ as the minus mutual information between $\mathbf{X}$ and $\mathbf{T}$:

$$\mathbf{Loss}_f = -MI(\mathbf{X}; \mathbf{T}) = -\sum_{T} p(T) \sum_{x} p(x|T)\, \log \frac{p(x|T)}{p(x)} \qquad (1)$$
The prior probability of a template is given as $p(T_\mu) = \frac{\alpha}{n^2}$ and $p(T^-) = 1 - \alpha$, where $\alpha$ is a constant prior likelihood. The fitness between a feature map $x$ and a template $T$ is measured as the conditional likelihood $p(x|T)$:

$$p(x|T) = \frac{1}{Z_T} \exp\big(tr(x \cdot T)\big) \qquad (2)$$

where $Z_T = \sum_{x \in \mathbf{X}} \exp\big(tr(x \cdot T)\big)$. $x \cdot T$ indicates the multiplication between $x$ and $T$; $tr(\cdot)$ indicates the trace of a matrix, and $tr(x \cdot T) = \sum_{ij} x_{ij} t_{ij}$. $p(x) = \sum_T p(T)\, p(x|T)$.
Part templates: As shown in Fig. 3, a negative template is given as $T^- = (t^-_{ij})$, $t^-_{ij} = -\tau < 0$, where $\tau$ is a positive constant. A positive template corresponding to $\mu$ is given as $T_\mu = (t^+_{ij})$, $t^+_{ij} = \tau \cdot \max\big(1 - \beta \frac{\|[i,j] - \mu\|_1}{n},\, -1\big)$, where $\|\cdot\|_1$ denotes the L-1 norm distance; $\beta$ is a constant parameter.
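To make the templates and Eqns. (1)-(2) concrete, here is a minimal NumPy sketch under our own naming; the toy feature maps are random stand-ins:

```python
import numpy as np

def build_templates(n, tau, beta):
    """Positive templates T_mu and the negative template T^- (see Fig. 3)."""
    coords = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing="ij"), axis=-1)
    positives = np.empty((n * n, n, n))
    for k, mu in enumerate(coords.reshape(-1, 2)):
        l1 = np.abs(coords - mu).sum(axis=-1)                # ||[i,j] - mu||_1
        positives[k] = tau * np.maximum(1.0 - beta * l1 / n, -1.0)
    return positives, -tau * np.ones((n, n))                 # T^-: all entries -tau

def filter_loss(X, positives, negative, alpha):
    """Loss_f = -MI(X; T) over a set X of feature maps, per Eqns. (1)-(2)."""
    n2 = positives.shape[0]
    T = np.concatenate([positives, negative[None]], axis=0)  # n^2 + 1 templates
    pT = np.append(np.full(n2, alpha / n2), 1.0 - alpha)     # p(T_mu), p(T^-)
    tr = np.einsum("xij,tij->xt", X, T)                      # tr(x . T) for all pairs
    pxT = np.exp(tr)
    pxT /= pxT.sum(axis=0, keepdims=True)                    # p(x|T) = exp(tr)/Z_T
    px = pxT @ pT                                            # p(x) = sum_T p(T) p(x|T)
    return -np.sum(pT[None, :] * pxT * np.log(pxT / px[:, None]))

n = 6
pos, neg = build_templates(n, tau=0.5 / n**2, beta=4.0)
X = np.maximum(np.random.default_rng(0).normal(size=(32, n, n)), 0.0)
print(filter_loss(X, pos, neg, alpha=n**2 / (1.0 + n**2)))
```

Note that $Z_T$ is normalized over the given set of feature maps, so in practice it has to be estimated online, as discussed below.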
3.1 Learning

We train the interpretable CNN in an end-to-end manner. During the forward-propagation process, each filter in the CNN passes its information in a bottom-up manner, just like in traditional CNNs. During the back-propagation process, each filter $f$ in an interpretable conv-layer receives gradients w.r.t. its feature map $x$ from both the final task loss $\mathbf{L}(\hat{y}, y^*)$ and the local filter loss $\mathbf{Loss}_f$, as follows:

$$\frac{\partial \mathbf{Loss}}{\partial x_{ij}} = \frac{\partial \mathbf{L}(\hat{y}, y^*)}{\partial x_{ij}} + \lambda\, \frac{\partial \mathbf{Loss}_f}{\partial x_{ij}} \qquad (3)$$

where $\lambda$ is a weight.
We compute gradients of $\mathbf{Loss}_f$ w.r.t. each element $x_{ij}$ of feature map $x$ as follows (please see the proof in the Appendix):

$$\frac{\partial \mathbf{Loss}_f}{\partial x_{ij}} \approx -\frac{p(\hat{T})\, \hat{t}_{ij}}{Z_{\hat{T}}}\, e^{tr(x \cdot \hat{T})} \Big\{ tr(x \cdot \hat{T}) - \log\big[Z_{\hat{T}}\, p(x)\big] \Big\} \qquad (4)$$

where $\hat{T}$ is the target template for feature map $x$, and $\hat{t}_{ij}$ is its $(i,j)$-th element. If the given image $I$ belongs to the target category of filter $f$, then $\hat{T} = T_{\hat{\mu}}$, where $\hat{\mu} = \operatorname{argmax}_{[i,j]} x_{ij}$. If image $I$ belongs to other categories, then $\hat{T} = T^-$. Considering $\forall T \in \mathbf{T},\, e^{tr(x \cdot \hat{T})} \gg e^{tr(x \cdot T)}$ after the initial learning episodes, we make the above approximation to simplify the computation. Because $Z_T$ is computed using numerous feature maps, we can roughly treat $Z_T$ as a constant when computing gradients in the above equation. We gradually update the value of $Z_T$ during the training process (we can use a subset of feature maps to approximate the value of $Z_T$, and continue to update $Z_T$ when we receive more feature maps during training). Similarly, we can also approximate $p(x)$ without huge computation, using a subset of feature maps, e.g. $p(x) = \sum_T p(T)\, p(x|T) \approx p(\hat{T})\, p(x|\hat{T})$.
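Under the same assumptions (online estimates of $p(\hat{T})$, $Z_{\hat{T}}$, and $p(x)$ treated as constants), the approximated gradient of Eqn. (4) is a one-liner; this is a sketch, not released code:

```python
import numpy as np

def approx_filter_loss_grad(x, T_hat, p_T_hat, Z_T_hat, p_x):
    """Approximate dLoss_f/dx_ij of Eqn. (4).

    x, T_hat: (n, n) feature map and its target template; p_T_hat, Z_T_hat,
    and p_x are scalars estimated from recent feature maps.
    """
    tr = np.sum(x * T_hat)                        # tr(x . T_hat)
    coef = -p_T_hat * np.exp(tr) / Z_T_hat
    return coef * T_hat * (tr - np.log(Z_T_hat) - np.log(p_x))
```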
Determining the target category for each filter: We need to assign each filter $f$ with a target category $\hat{c}$ to approximate gradients in Eqn. (4). We simply assign filter $f$ with the category $\hat{c}$ whose images activate $f$ most, i.e. $\hat{c} = \operatorname{argmax}_c \operatorname{mean}_{x:\, I \in \mathbf{I}_c} \sum_{i,j} x_{ij}$.
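This assignment is a plain argmax over per-category mean activations; a sketch with an assumed data layout:

```python
import numpy as np

def assign_target_category(maps_per_category):
    """maps_per_category[c]: (N_c, n, n) post-ReLU maps of the filter on I_c."""
    return int(np.argmax([m.sum(axis=(1, 2)).mean() for m in maps_per_category]))
```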
4 Understanding of the loss
In fact, the loss in Eqn. (1) can be re-written as (please see the proof in the Appendix)

$$\mathbf{Loss}_f = -H(\mathbf{T}) + H(\mathbf{T}'|\mathbf{X}) + \sum_{x} p(\mathbf{T}^+, x)\, H(\mathbf{T}^+|X = x) \qquad (5)$$

In the above equation, the first term $H(\mathbf{T}) = -\sum_{T} p(T) \log p(T)$ is a constant, which denotes the prior entropy of part templates.
Low inter-category entropy: The second term $H(\mathbf{T}'|\mathbf{X})$ is computed as

$$H(\mathbf{T}'|\mathbf{X}) = -\sum_{x} p(x) \sum_{T \in \{T^-, \mathbf{T}^+\}} p(T|x)\, \log p(T|x)$$

where $\mathbf{T}' = \{T^-, \mathbf{T}^+\}$, $\mathbf{T}^+ = \{T_{\mu_1}, T_{\mu_2}, \ldots, T_{\mu_{n^2}}\}$, and $p(\mathbf{T}^+|x) = \sum_{\mu} p(T_\mu|x)$. This term encourages a low conditional entropy of inter-category activations, i.e. a well-learned filter $f$ needs to be exclusively activated by a certain category $c$ and keep silent on other categories. We can use the feature map $x$ of $f$ to identify whether the input image belongs to category $c$ or not, i.e. $x$ fitting to either $\mathbf{T}^+$ or $T^-$, without great uncertainty. Here, we define the set of all positive templates $\mathbf{T}^+$ as a single label to represent category $c$. We use the negative template $T^-$ to denote other categories.
Low spatial entropy: The third term in Eqn. (5) is given as

$$H(\mathbf{T}^+|X = x) = -\sum_{\mu} \tilde{p}(T_\mu|x)\, \log \tilde{p}(T_\mu|x)$$

where $\tilde{p}(T_\mu|x) = \frac{p(T_\mu|x)}{p(\mathbf{T}^+|x)}$. This term encourages a low conditional entropy of the spatial distribution of $x$'s activations. I.e. given an image $I \in \mathbf{I}_c$, a well-learned filter should only be activated by a single region $\hat{\mu}$ of the feature map $x$, instead of repetitively appearing at different locations.
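The decomposition in Eqn. (5) can be checked numerically. The sketch below uses random stand-ins for $tr(x \cdot T)$ and treats template index 0 as $T^-$; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 10                                   # N feature maps, M = n^2 + 1 templates
pT = np.full(M, 1.0 / M)                        # p(T); uniform here for simplicity
pxT = np.exp(rng.normal(size=(N, M)))
pxT /= pxT.sum(axis=0, keepdims=True)           # p(x|T), normalized over X
px = pxT @ pT                                   # p(x)

loss = -np.sum(pT[None, :] * pxT * np.log(pxT / px[:, None]))  # -MI(X;T), Eqn. (1)

pTx = pT[None, :] * pxT / px[:, None]           # posterior p(T|x)
H_T = -np.sum(pT * np.log(pT))                  # prior entropy of templates
p_neg, p_pos = pTx[:, 0], pTx[:, 1:].sum(axis=1)      # p(T^-|x), p(T^+|x)
H_inter = np.sum(px * -(p_neg * np.log(p_neg) + p_pos * np.log(p_pos)))
q = pTx[:, 1:] / p_pos[:, None]                 # normalized p(T_mu|x) within T^+
H_spatial = np.sum(px * p_pos * -(q * np.log(q)).sum(axis=1))

assert np.isclose(loss, -H_T + H_inter + H_spatial)   # Eqn. (5) holds
```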
5 Experiments

In experiments, to demonstrate the broad applicability, we applied our method to CNNs with four types of structures. We used object images in three different benchmark datasets to learn interpretable CNNs for single-category classification and multi-category classification. We visualized feature maps of filters in interpretable conv-layers to illustrate the semantic meanings of these filters. We used two types of metrics, i.e. the object-part interpretability and the location stability, to evaluate the clarity of the part semantics of a convolutional filter. Experiments showed that filters in our interpretable CNNs were much more semantically meaningful than those in ordinary CNNs.
Three benchmark datasets: Because we needed ground-truth annotations of object landmarks (parts) to evaluate the semantic clarity of each filter, we chose three benchmark datasets with landmark/part annotations for training and testing, including the ILSVRC 2013 DET Animal-Part dataset [36], the CUB200-2011 dataset [30], and the Pascal VOC Part dataset [3]. To avoid ambiguity, a landmark here refers to the central position of a semantic part (a part with an explicit name, e.g. a head or a tail); in contrast, the part corresponding to a filter does not have an explicit name. As discussed in [3, 36], non-rigid parts of animal categories usually present great challenges for part localization. Thus, we followed [3, 36] to select the 37 animal categories in the three datasets for evaluation.
All the three datasets provide ground-truth bounding boxes of entire objects. For landmark annotations, the ILSVRC 2013 DET Animal-Part dataset [36] contains ground-truth bounding boxes of the heads and legs of 30 animal categories. The CUB200-2011 dataset [30] contains a total of 11.8K bird images of 200 species, and the dataset provides the center positions of 15 bird landmarks. The Pascal VOC Part dataset [3] contains ground-truth part segmentations of 107 object landmarks in six animal categories.
Four types of CNNs: To demonstrate the broad applicability of our method, we modified four typical CNNs, i.e. the AlexNet [12], the VGG-M, the VGG-S, and the VGG-16 [25], into interpretable CNNs. Considering that skip connections in residual networks [7] usually make a single feature map encode patterns of different filters, in this study, we did not test the performance on residual networks, to simplify the story. Given a certain CNN structure, we modified all filters in the top conv-layer of the original network into interpretable ones. Then, we inserted a new conv-layer with $M$ filters above the original top conv-layer, where $M$ is the channel number of the input of the new conv-layer. We also set the filters in the new conv-layer as interpretable ones. Each filter was a $3 \times 3 \times M$ tensor with a bias term. We added zero padding to input feature maps to ensure that output feature maps were of the same size as the input.
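In frameworks such as PyTorch, the inserted layer itself is an ordinary conv-layer; a sketch of the extra block (the channel number $M$ and the surrounding network are assumptions):

```python
import torch.nn as nn

M = 512  # e.g. the channel number atop VGG-16's last conv-layer (an assumption)
interpretable_block = nn.Sequential(
    nn.Conv2d(M, M, kernel_size=3, padding=1, bias=True),  # M filters, 3x3xM each
    nn.ReLU(inplace=True),  # the template mask and the filter loss act on this output
)
```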
Implementation details: We set the parameters as $\tau = \frac{0.5}{n^2}$, $\alpha = \frac{n^2}{1 + n^2}$, and $\beta = 4$. We updated the weight $\lambda$ of each filter loss w.r.t. the magnitudes of neural activations in an online manner, i.e. $\lambda$ was set proportional to $\operatorname{mean}_{x \in \mathbf{X}} \max_{i,j} x_{ij}$. We initialized the parameters of the fully-connected (FC) layers and the new conv-layer, and loaded the parameters of the other conv-layers from a traditional CNN that was pre-trained using the 1.2M ImageNet images in [12, 25]. We then fine-tuned the interpretable CNN using training images in the dataset. To enable a fair comparison, traditional CNNs were also fine-tuned by initializing FC-layer parameters and loading conv-layer parameters.
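The online weighting can be sketched as follows; `scale` is a hypothetical constant, since its exact value is not restated here:

```python
import numpy as np

def update_lambda(X, scale):
    """lambda proportional to the mean peak activation of recent maps X: (N, n, n)."""
    return scale * X.max(axis=(1, 2)).mean()
```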
Single-category classification: We learned four types of interpretable CNNs based on the AlexNet, VGG-M, VGG-S, and VGG-16 structures to classify each category in the ILSVRC 2013 DET Animal-Part dataset [36], the CUB200-2011 dataset [30], and the Pascal VOC Part dataset [3]. Besides, we also learned ordinary AlexNet, VGG-M, VGG-S, and VGG-16 networks using the same training data for comparison. We used the logistic log loss for single-category classification. Following the experimental settings in [36, 37, 35], we cropped objects of the target category based on their bounding boxes as positive samples with ground-truth labels $y^* = +1$. We regarded images of other categories as negative samples with ground-truth labels $y^* = -1$.
Multi-category classification: We used the six animal categories in the Pascal VOC Part dataset [3] and the thirty categories in the ILSVRC 2013 DET Animal-Part dataset [36], respectively, to learn CNNs for multi-category classification. We learned interpretable CNNs based on the VGG-M, VGG-S, and VGG-16 structures. We tried two types of losses, i.e. the softmax log loss and the logistic log loss, for multi-class classification. (For the logistic log loss, we considered the output for each category to be independent of the outputs for other categories, so the CNN made multiple independent single-class classifications for each image; Table 7 reports the average accuracy of the multiple classification outputs of an image.)
5.2 Quantitative evaluation of part interpretability
As discussed in [2], filters in low conv-layers usually represent simple patterns or object details (e.g. edges, simple textures, and colors), whereas filters in high conv-layers are more likely to represent complex, large-scale parts. Therefore, in experiments, we evaluated the clarity of the part semantics of the top conv-layer of a CNN. We used the following two metrics for evaluation.
5.2.1 Evaluation metric: part interpretability
We followed the metric proposed by Bau et al. [2] to measure the object-part interpretability of filters. We briefly introduce this evaluation metric as follows. For each filter $f$, we computed its feature maps $\mathbf{X}$ after the ReLU/mask operations on different input images. Then, the distribution of activation scores over all positions of all feature maps was computed. [2] set an activation threshold $T_f$ such that $p(x_{ij} > T_f) = 0.005$, so as to select the top activations from all spatial locations of all feature maps as the valid map regions corresponding to $f$'s semantics. Then, [2] scaled up the low-resolution valid map regions to the image resolution, thereby obtaining the receptive field (RF) of valid activations on each image. The RF on image $I$, denoted by $S^I_f$, described the part region of $f$. (Note that [2] accurately computes the RF when the filter represents an object part, and we used RFs computed by [2] for filter visualization in Fig. 5. However, when a filter in an ordinary CNN does not have consistent contours, it is difficult for [2] to align different images to compute an average RF. Thus, for ordinary CNNs, we simply used a round RF for each valid activation. We overlapped all activated RFs in a feature map to compute the final RF, as mentioned in [2]. For a fair comparison, in this section, we uniformly applied these RFs to both interpretable CNNs and ordinary CNNs.)
The compatibility between each filter $f$ and the $k$-th part on image $I$ was reported as an intersection-over-union score $IoU^I_{f,k} = \frac{\|S^I_f \cap S^I_{\textrm{part},k}\|}{\|S^I_f \cup S^I_{\textrm{part},k}\|}$, where $S^I_{\textrm{part},k}$ denotes the ground-truth mask of the $k$-th part on image $I$. Given an image $I$, we associated filter $f$ with the $k$-th part if $IoU^I_{f,k} > 0.2$. Note that the criterion of $IoU^I_{f,k} > 0.2$ for part association is much stricter than the criterion $IoU^I_{f,k} > 0.04$ that was used in [2]. It is because, compared to other CNN semantics discussed in [2] (such as colors and textures), object-part semantics requires a stricter criterion. We computed the probability of the $k$-th part being associated with the filter $f$ as $P_{f,k} = \operatorname{mean}_{I:\, I \in \mathbf{I}_c} \mathbf{1}\big(IoU^I_{f,k} > 0.2\big)$. Note that one filter might be associated with multiple object parts in an image. Among all parts, we reported the highest probability of part association as the interpretability of filter $f$, i.e. $P_f = \max_k P_{f,k}$.
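A sketch of this metric, with assumed array layouts (the RF computation itself follows [2] and is omitted):

```python
import numpy as np

def part_interpretability(acts, part_masks, quantile=0.005, iou_thresh=0.2):
    """Part interpretability of one filter.

    acts:       (N, H, W) image-resolution valid-activation maps (RFs) per image.
    part_masks: (N, K, H, W) boolean ground-truth masks of K parts.
    """
    T_f = np.quantile(acts, 1.0 - quantile)       # threshold: p(x_ij > T_f) = 0.005
    S_f = acts > T_f                              # valid map regions per image
    inter = (S_f[:, None] & part_masks).sum(axis=(2, 3))
    union = (S_f[:, None] | part_masks).sum(axis=(2, 3))
    iou = inter / np.maximum(union, 1)            # IoU of filter vs. part, (N, K)
    return (iou > iou_thresh).mean(axis=0).max()  # P_f = max_k P_{f,k}
```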
For single-category classification, we used testing images of the target category for evaluation. In the Pascal VOC Part dataset , we used four parts for the bird category. We merged ground-truth regions of the head, beak, and l/r-eyes as the head part, merged regions of the torso, neck, and l/r-wings as the torso part, merged regions of l/r-legs/feet as the leg part, and used tail regions as the fourth part. We used five parts for the cat category. We merged regions of the head, l/r-eyes, l/r-ears, and nose as the head part, merged regions of the torso and neck as the torso part, merged regions of frontal l/r-legs/paws as the frontal legs, merged regions of back l/r-legs/paws as the back legs, and used the tail as the fifth part. We used four parts for the cow category, which were defined in a similar way to the cat category. We added l/r-horns to the head part and omitted the tail part. We applied five parts of the dog category in the same way as the cat category. We applied four parts of both the horse and sheep categories in the same way as the cow category. We computed the average part interpretability over all filters for evaluation.
For multi-category classification, we first assigned each filter $f$ with a target category $\hat{c}$, i.e. the category that activated the filter most (Section 3.1). Then, we computed the object-part interpretability using images of category $\hat{c}$, as introduced above.
[Table 2: Average part interpretability of filters in CNNs for multi-category classification, under the logistic log loss and the softmax log loss.]
5.2.2 Evaluation metric: location stability
The second metric measures the stability of part locations, which was proposed in [35]. Given a feature map $x$ of filter $f$, we regarded the unit $\hat{\mu}$ with the highest activation as the location inference of $f$. We assumed that if $f$ consistently represented the same object part across different objects, then the distances between the inferred part location and some object landmarks should not change much among different objects. For example, if $f$ represented the shoulder, then the distance between the shoulder and the head should keep stable across different objects.
Therefore, [35] computed the deviation of the distance between the inferred position $\hat{\mu}$ and a specific ground-truth landmark among different images, and used the average deviation w.r.t. various landmarks to evaluate the location stability of $f$. A smaller deviation indicates a higher location stability. Let $d_{I,k} = \frac{\|\mathbf{p}_k - \mathbf{p}(\hat{\mu})\|}{\sqrt{w^2 + h^2}}$ denote the normalized distance between the inferred part and the $k$-th landmark $\mathbf{p}_k$ on image $I$, where $\mathbf{p}(\hat{\mu})$ denotes the center of the unit $\hat{\mu}$'s RF when we backward propagated the RF to the image plane, and $\sqrt{w^2 + h^2}$ denotes the diagonal length of the input image. We computed $D_{f,k} = \sqrt{\operatorname{var}_I(d_{I,k})}$ as the relative location deviation of filter $f$ w.r.t. the $k$-th landmark, where $\operatorname{var}_I(d_{I,k})$ is referred to as the variation of the distance $d_{I,k}$. Because each landmark could not appear in all testing images, for each filter $f$, we only used the inference results with the top-100 highest activation scores on images containing the $k$-th landmark to compute $D_{f,k}$. Thus, we used the average of the relative location deviations of all the filters in a conv-layer w.r.t. all landmarks, i.e. $\operatorname{mean}_f \frac{1}{K} \sum_{k=1}^{K} D_{f,k}$, to measure the location instability of the conv-layer, where $K$ denotes the number of landmarks.
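A sketch of the deviation computation under assumed layouts; the top-100 selection and per-landmark image filtering are omitted for brevity:

```python
import numpy as np

def location_instability(peaks, landmarks, diag):
    """mean_k D_{f,k} for one filter.

    peaks:     (N, 2) inferred part positions (RF centers of the most activated
               units) per image.
    landmarks: (N, K, 2) ground-truth landmark positions; diag: (N,) image diagonals.
    """
    d = np.linalg.norm(landmarks - peaks[:, None, :], axis=-1) / diag[:, None]
    return np.sqrt(d.var(axis=0)).mean()          # mean_k sqrt(var_I(d_{I,k}))
```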
More specifically, the object landmarks for each category were selected as follows. For the ILSVRC 2013 DET Animal-Part dataset [36], we used the head and frontal legs of each category as landmarks for evaluation. For the Pascal VOC Part dataset [3], we selected the head, neck, and torso of each category as the landmarks. For the CUB200-2011 dataset [30], we used the ground-truth positions of the head, the back, and the tail of birds as landmarks, because these landmarks appeared in testing images most frequently.
[Tables 3-6: Average location instability of filters. Tables 3-5 report CNNs for single-category classification; Table 6 reports CNNs for multi-category classification on the ILSVRC 2013 DET Animal-Part and Pascal VOC Part datasets, under the logistic log loss and the softmax log loss.]
For multi-category classification, we needed to determine two terms for each filter $f$, i.e. 1) the category $\hat{c}$ that $f$ mainly represented, and 2) the relative location deviation w.r.t. the landmarks of $f$'s target category. Because filters in ordinary CNNs did not exclusively represent a single category, we simply assigned filter $f$ with the category $\hat{c}$ whose landmarks achieved the lowest location deviation, to simplify the computation. I.e. we used the average location deviation $\operatorname{mean}_f \min_c \operatorname{mean}_{k \in \textrm{Part}_c} D_{f,k}$ to evaluate the location stability, where $\textrm{Part}_c$ denotes the set of part indexes belonging to category $c$.
5.2.3 Experimental results and analysis
Tables 1 and 2 compare the part interpretability of CNNs for single-category classification and that of CNNs for multi-category classification, respectively. Tables 3, 4, and 5 list the average relative location deviations of CNNs for single-category classification. Table 6 compares the average relative location deviations of CNNs for multi-category classification. Our interpretable CNNs exhibited much higher interpretability and much better location stability than ordinary CNNs in almost all comparisons. Table 7 compares the classification accuracy of different CNNs. Ordinary CNNs performed better in single-category classification, whereas, for multi-category classification, interpretable CNNs exhibited superior performance to ordinary CNNs. The good performance in multi-category classification may be because the clarification of filter semantics in early epochs reduced the difficulty of filter learning in later epochs.
5.3 Visualization of filters
We followed the method proposed by Zhou et al. [38] to compute the RF of the neural activations of an interpretable filter, which was scaled up to the image resolution. Fig. 5 shows the RFs of filters in the top conv-layers of CNNs trained for single-category classification. Filters in interpretable CNNs were mainly activated by a certain object part, whereas filters in ordinary CNNs usually did not have explicit semantic meanings. Fig. 6 shows heat maps for the distributions of object parts that were encoded in interpretable filters. Interpretable filters usually selectively modeled distinct object parts of a category and ignored other parts.
[Table 7: Classification accuracy of ordinary and interpretable CNNs on the ILSVRC 2013 DET Animal-Part, Pascal VOC Part, and CUB200-2011 datasets, under the logistic log loss and the softmax log loss.]
6 Conclusion and discussions
In this paper, we have proposed a general method to modify traditional CNNs to enhance their interpretability. As discussed in [2], besides the discrimination power, interpretability is another crucial property of a network. We designed a loss to push a filter in high conv-layers towards the representation of an object part, without any additional annotations for supervision. Experiments have shown that our interpretable CNNs encoded more semantically meaningful knowledge in high conv-layers than traditional CNNs.
In future work, we will design new filters to describe discriminative textures of a category and new filters for object parts that are shared by multiple categories, in order to achieve a higher model flexibility.
References

[1] M. Aubry and B. C. Russell. Understanding deep features with computer-generated imagery. In ICCV, 2015.
[2] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep visual representations. In CVPR, 2017.
[3] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, 2014.
[4] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In CVPR, 2016.
[5] R. C. Fong and A. Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In arXiv:1704.03296v1, 2017.
[6] Y. Goyal, A. Mohapatra, D. Parikh, and D. Batra. Towards transparent AI systems: Interpreting visual question answering models. In arXiv:1608.08974v2, 2016.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[8] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. P. Xing. Harnessing deep neural networks with logic rules. In arXiv:1603.06318v2, 2016.
[9] V. K. Ithapu. Decoding the deep: Exploring class hierarchies of deep representations using multiresolution matrix factorization. In CVPR Workshop on Explainable Computer Vision and Job Candidate Screening Competition, 2017.
[10] P. Koh and P. Liang. Understanding black-box predictions via influence functions. In ICML, 2017.
[11] S. Kolouri, C. E. Martin, and H. Hoffmann. Explaining distributed neural activations via unsupervised learning. In CVPR Workshop on Explainable Computer Vision and Job Candidate Screening Competition, 2017.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[13] D. Kumar, A. Wong, and G. W. Taylor. Explaining the unexplained: A class-enhanced attentive response (CLEAR) approach to understanding deep neural networks. In CVPR Workshop on Explainable Computer Vision and Job Candidate Screening Competition, 2017.
[14] H. Lakkaraju, E. Kamar, R. Caruana, and E. Horvitz. Identifying unknown unknowns in the open world: Representations and policies for guided exploration. In AAAI, 2017.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 1998.
[16] B. J. Lengerich, S. Konam, E. P. Xing, S. Rosenthal, and M. Veloso. Visual explanations for convolutional neural networks via input resampling. In ICML Workshop on Visualization for Deep Learning, 2017.
[17] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In CVPR, 2015.
[18] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In KDD, 2016.
[19] A. S. Ross, M. C. Hughes, and F. Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. In arXiv:1703.03717v1, 2017.
[20] S. Sabour, N. Frosst, and G. E. Hinton. Dynamic routing between capsules. In NIPS, 2017.
[21] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In arXiv:1610.02391v3, 2017.
[22] M. Simon and E. Rodner. Neural activation constellations: Unsupervised part model discovery with convolutional networks. In ICCV, 2015.
[23] M. Simon, E. Rodner, and J. Denzler. Part detector discovery in deep convolutional neural networks. In ACCV, 2014.
[24] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In arXiv:1312.6034, 2013.
[25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[26] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012.
[27] A. Stone, H. Wang, Y. Liu, D. S. Phoenix, and D. George. Teaching compositionality to CNNs. In CVPR, 2017.
[28] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In arXiv:1312.6199v4, 2014.
[29] C. Ventura, D. Masip, and A. Lapedriza. Interpreting CNN models for apparent personality trait regression. In CVPR Workshop on Explainable Computer Vision and Job Candidate Screening Competition, 2017.
[30] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.
[31] A. S. Wicaksana and C. C. S. Liem. Human-explainable features for job candidate screening prediction. In CVPR Workshop on Explainable Computer Vision and Job Candidate Screening Competition, 2017.
[32] N. Wolchover. New theory cracks open the black box of deep learning. In Quanta Magazine, 2017.
[33] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
[34] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
[35] Q. Zhang, R. Cao, F. Shi, Y. Wu, and S.-C. Zhu. Interpreting CNN knowledge using an explanatory graph. In arXiv:1708.01785, 2017.
[36] Q. Zhang, R. Cao, Y. N. Wu, and S.-C. Zhu. Growing interpretable graphs on convnets via multi-shot learning. In AAAI, 2017.
[37] Q. Zhang, R. Cao, Y. N. Wu, and S.-C. Zhu. Mining part concepts from CNNs via active question-answering. In CVPR, 2017.
[38] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene CNNs. In ICLR, 2015.
[39] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.
Appendix: Proof of equations