Understanding Intra-Class Knowledge Inside CNN

07/09/2015 · by Donglai Wei, et al.

Convolutional Neural Networks (CNNs) have been successful in image recognition tasks, and recent works shed light on how a CNN separates different classes with the learned inter-class knowledge through visualization. In this work, we instead visualize the intra-class knowledge inside the CNN to better understand how an object class is represented in the fully-connected layers. To invert the intra-class knowledge into more interpretable images, we propose a non-parametric patch prior upon previous CNN visualization models. With it, we show how different "styles" of templates for an object class are organized by the CNN in terms of location and content, and represented in a hierarchical and ensemble way. Moreover, such intra-class knowledge can be used in many interesting applications, e.g. style-based image retrieval and style-based object completion.







1 Introduction

Deep convolutional neural networks (CNNs) [6] achieve state-of-the-art performance on recognition tasks. Recent works [12, 1, 11] have focused on understanding the inter-class discriminative power of CNNs. In particular, [13] shows that individual neurons in different convolutional layers correspond to texture patterns at various levels of abstraction, and that object detectors can even be found in the last feature extraction layer.

However, little is known about how a CNN represents an object class or how it captures intra-class variation. For example, in the object classes “orange” and “pool table”, there are drastically different “styles” of object instances that the CNN recognizes correctly (Fig. 1). This problem poses two main challenges. One is to visualize the knowledge numerically instead of directly retrieving natural images, which can be biased towards the image database in use. The other is that such intra-class knowledge is captured collectively by a group of neurons, a “neural pathway”, rather than by the single neurons studied in previous works.

In this work, we make progress on both challenges by (1) introducing a patch prior that improves parametric CNN visualization models, and (2) analyzing how spatial and style intra-class knowledge are encoded inside the CNN in a hierarchical and ensemble way. With this learned knowledge, we can retrieve or complete images in a novel way. Our techniques apply to a range of feedforward architectures; here we focus on the CNN of [5] trained on the large-scale ImageNet challenge dataset [4], with 5 convolutional layers followed by 3 fully-connected layers.

Figure 1: Examples of intra-class variation. We show two different styles of the object classes “orange” and “pool table” with the retrieved images and our new visualization method.

2 Related Work

Below, we survey works on understanding fully-connected layers and intra-class knowledge discovery.

2.1 Fully-Connected Layers in CNN

Understanding Several recent works shed light on the fully-connected (fc) layers. (1) Dropout techniques: [5] considers the dropout technique as an approximation to learning an ensemble of models, and [2] proves its equivalence to a regularization. (2) Binary codes: [1] discovers that the binary masks of fc-layer features are good enough for classification. (3) Pool5 features: these contain object-part information that is both spatial and semantic. (4) Image retrieval: fc features are used as a semantic space for retrieval.

Visualization Unlike features in convolutional layers, from which we can recover most of the original image with parametric [12, 8] or non-parametric methods, features from fully-connected layers are hard to invert. As shown in [8], the location and style information of the object parts is lost. Another work [10] inverts the class-specific feature from the fc layer, which is zero except at the target class; the output image from numerical optimization is a composite of various object templates. Both works follow the same model framework (compared in Sec. 3.1), which can be solved efficiently with gradient descent.

2.2 Intra-class Knowledge Discovery

Understanding image collections is a relatively unexplored task, although there is growing interest in this area. Several methods attempt to represent the continuous variation in an image class using sub-spaces or manifolds. Unlike these works, we investigate discrete, nameable transformations, like crinkling, rather than working in a hard-to-interpret parameter space. Photo collections have also been mined for storylines as well as spatial and temporal trends, and systems have been proposed for more general knowledge discovery from big visual data. [9] focuses on physical state transformations and, in addition to discovering states, also studies state pairs that define a transformation.

In Sec. 3, we analyze the problems of current parametric CNN visualization models and propose a data-driven patch prior to generate images with natural color distributions. In Sec. 4, we decompose the fully-connected layers into four different components, which are shown to capture location-specific and content-specific intra-class variation and to represent such knowledge in a hierarchical and ensemble way. In Sec. 5, we first provide both quantitative and qualitative results for our new visualization methods. We then apply the learned intra-class knowledge inside the CNN to organize an unlabelled image collection and to fill in image masks with objects of various styles.

Figure 2: Illustration of the color problem in [10]. The results of (a) pool5 feature inversion and (b) class visualization have unnatural global color distributions. In (c), we show six output images with different global color distributions but similar pool5 features, differing by less than 1% from each other.

3 CNN Visualization with Patch Prior

Below, we propose a data-driven patch prior to improve parametric CNN visualization models [8, 10], and we show improvement in both cases (Fig. 3b).

3.1 Parametric Visualization Model

We first consider the task of feature inversion [8]. Given a CNN feature (e.g. pool5) of a natural image, the goal is to invert it back to an image close to the original. [8] finds an optimal image that minimizes the sum of a data energy (the feature reconstruction error) and a regularization energy for the estimate:

x^* = \arg\min_x \|\Phi_l(x) - \Phi_0\|^2 + \lambda R(x),   (1)

where \Phi_l(x) is the CNN feature from layer l and \Phi_0 is the target feature for inversion. Another CNN visualization task, class visualization [10], follows a similar formulation, where the goal is to generate an image given a class label c:

x^* = \arg\max_x \langle \Phi_{fc8}(x), 1_c \rangle - \lambda R(x),   (2)

where 1_c is the binary vector with only the c-th element set to one.

For the regularization term R(x), the L2-norm of the image [10] and the pairwise gradient [8] have been used. Unlike low-level vision reconstruction (e.g. denoising), the data energy from the CNN is less sensitive to low-frequency image content, which leads to multiple global optima with unnatural color distributions. Given the input image (Fig. 2a), we show a collection of pop-art style images whose pool5 features differ by less than 1% from the input (Fig. 2c). These images are generated with [8], initialized from the input image with shuffled RGB channels. In practice, [10, 8] initialize the optimization from the mean image with or without white noise, and the gradient descent algorithm converges to one of the global optima, whose color distribution can be far from natural (Fig. 3).
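The ambiguity above can be reproduced in a toy setting: when the data energy cannot see low-frequency content, gradient descent converges to whichever global optimum the initialization selects. The sketch below is illustrative only, with a hypothetical linear "feature extractor" whose rows sum to zero (so the image mean sits in its nullspace), not the paper's CNN:

```python
import numpy as np

# Toy sketch of the color ambiguity: the "feature extractor" A is a
# linear map whose rows sum to zero, so it cannot see the image mean
# (a stand-in for the CNN's insensitivity to low-frequency content).
rng = np.random.default_rng(0)
n = 16                                   # tiny "image" with n pixels
A = rng.standard_normal((8, n))
A -= A.mean(axis=1, keepdims=True)       # rows now sum to 0

x_true = rng.standard_normal(n)
f0 = A @ x_true                          # target feature to invert

def invert(x0, steps=8000, lr=5e-3):
    """Gradient descent on the data energy ||A x - f0||^2."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * 2 * A.T @ (A @ x - f0)
    return x

x_a = invert(rng.standard_normal(n))         # init near zero mean
x_b = invert(rng.standard_normal(n) + 5.0)   # init with a "color cast"

# Both reconstructions are (near-)global optima of the data energy ...
print(np.linalg.norm(A @ x_a - f0), np.linalg.norm(A @ x_b - f0))
# ... but they disagree in the unobserved mean component.
print(x_a.mean(), x_b.mean())
```

The gradient step never changes the component of x that A ignores, so the "color" of the result is inherited from the initialization, mirroring the behavior described above.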


Figure 3: Illustration of the local optima of pool5 feature inversion. We show six output images with different global color distributions but similar pool5 features, differing by less than 1% from each other.

3.2 Data-driven Patch Prior

To regularize the color distribution for CNN visualization, we build an external database of natural patches and minimize the distance of patches from the output image to those in the database. As the patches produced by the CNN visualization models above lack low-frequency components, we calculate the distance between patches after global normalization w.r.t. the mean and standard deviation of the whole image. Combined with the previous regularization models, our final image regularization is

R(x) = \lambda_0 R_0(x) + \lambda_p \sum_i \|\hat{p}_i(x) - \hat{q}_i\|^2,   (3)

where \lambda_0, \lambda_p are weight parameters for each term, i is the patch index, \hat{p}_i(x) are the densely sampled normalized patches of the output, and \hat{q}_i are their nearest normalized patches from a natural patch database. In practice, we iteratively alternate between the continuous optimization for x given the matched patches \hat{q}_i and the discrete optimization for \hat{q}_i with patch match given the previous estimate of x.
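The global normalization is what makes the matching robust: because patches are compared after removing the whole image's mean and std, an estimate with a completely wrong color cast still retrieves the right natural patches. A minimal self-contained sketch with toy stand-in "images" (not the paper's implementation):

```python
import numpy as np

# Sketch of the normalized nearest-neighbor patch matching step.
# Patches are normalized by the *whole image's* mean and std, so an
# estimate with a globally wrong color distribution still matches
# the correct natural patches.
rng = np.random.default_rng(1)

def extract_patches(img, size=4, stride=4):
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def normalize(patches, img):
    # global normalization w.r.t. the whole image
    return (patches - img.mean()) / (img.std() + 1e-8)

natural = rng.standard_normal((16, 16))       # stand-in natural image
database = normalize(extract_patches(natural), natural)

# Same content, but a strong global color cast (scale + shift).
estimate = 3.0 * natural + 10.0
queries = normalize(extract_patches(estimate), estimate)

# Nearest database patch for every query patch.
d2 = ((queries[:, None, :] - database[None, :, :]) ** 2).sum(-1)
match = d2.argmin(axis=1)
print(match)   # each patch finds its own counterpart despite the cast
```

Since the cast here is affine, normalization removes it exactly; for CNN outputs the match is only approximate, which is why the alternating optimization above is iterated.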

To illustrate the effectiveness of the patch prior, we compute the dense patch correspondence from the patch database to a pool5 feature inversion result of [8] (Fig. 3a), and visualize the warped image that regularizes the output image in Eqn. 3. We compare the patch matching quality with and without normalization. As expected, the normalized patches have a better chance of retrieving natural patches, and the warped result is reasonable despite the unnatural color distribution of the initial estimate.

Below, we describe how to build an effective patch database. The object class visualization task has no ground-truth color distribution, so we can directly sample patches from validation images of the same class. For feature inversion, however, such an approach can be costly due to the intra-class variation of each object class, where images from the same class may not match well. As discovered in [7], conv-layer features can be used to retrieve image patches with similar appearance, though their semantics can be totally different. Thus, we build a database of 1M pairs of 1x1x256 pool5 features and the center 67x67x3 patch of the original patch support (195x195x3). Given the pool5 feature to invert, we build our patch database with this retrieval method, and we show that the averaged patches (10-NN at each pool5 location) recover the color distribution of the input image well (Fig. 3a).

4 Discover CNN Intra-class Knowledge

Figure 4: Illustration of the organization of Sec. 4. We decompose the fully-connected layers into four components and each subsection explains how the intra-class knowledge is captured and represented.

For the class visualization task [10], notice that the back-propagated gradient from the CNN (data energy) in Eqn. 2 is a series of matrix multiplications:

\frac{\partial S_c}{\partial x_{pool5}} = W_6^\top M_6 W_7^\top M_7 w_c^\top,

where w_c is the c-th row of W_8 and M_6, M_7 are the diagonal relu masks computed on fc6, fc7 during the feedforward stage.

Given the learned weights (W_6, W_7, W_8), we can turn units on/off in the masks to sample different structures of the fc layers (“neural pathways”) by multiplying different sub-matrices of the learned weights. Another view is that the class-specific information is stored in w_c, and it can be decoded by different structures through the relu masks M_6, M_7. [10] uses all the weights, which leads to a composite template of object parts of different styles in all places (Fig. 2b).
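The chain-of-matrix-products view of the gradient can be verified on a tiny relu stack. Toy sizes and hypothetical weights W6, W7, W8 for the three fc layers; the check compares the masked matrix-product gradient against finite differences:

```python
import numpy as np

# Toy check of the "neural pathway" gradient: for a relu MLP with
# score_c(x5) = (W8 @ relu(W7 @ relu(W6 @ x5)))[c], the gradient
# w.r.t. x5 is W6^T M6 W7^T M7 w_c, with M6, M7 the relu masks.
rng = np.random.default_rng(2)
d5, d6, d7, C = 10, 8, 8, 5
W6 = rng.standard_normal((d6, d5))
W7 = rng.standard_normal((d7, d6))
W8 = rng.standard_normal((C, d7))
x5 = rng.standard_normal(d5)
c = 3

def forward(x):
    f6 = np.maximum(W6 @ x, 0)
    f7 = np.maximum(W7 @ f6, 0)
    return W8 @ f7, (W6 @ x > 0), (W7 @ f6 > 0)

score, m6, m7 = forward(x5)
# Backprop as a chain of matrix products with the relu masks applied;
# zeroing entries of m6/m7 would select a sub-network ("pathway").
grad = W6.T @ (m6 * (W7.T @ (m7 * W8[c])))

# Finite-difference check (the network is piecewise linear, so this
# is near-exact away from relu kinks).
eps = 1e-6
fd = np.array([(forward(x5 + eps * np.eye(d5)[i])[0][c] - score[c]) / eps
               for i in range(d5)])
print(np.abs(grad - fd).max())
```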

Below, by controlling the masks, we show that the CNN captures two kinds of intra-class knowledge (location and style), encoded with an ensemble and hierarchical representation (Fig. 4).

4.1 Location-variation ()

The output of the last convolutional layer is pool5, whose units have been shown to be effective as object detectors [13]. Pool5 features (and their relu masks) have a 6x6 spatial dimension, and we can visualize an object class within a certain receptive field (RF) by opening only a subset of the spatial locations while optimizing Eqn. 2.
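Concretely, restricting the optimization to a receptive field amounts to zeroing the pool5 mask outside a chosen subset of the 6x6 grid. A minimal sketch on a hypothetical activation tensor (256 channels as in the network used here):

```python
import numpy as np

# Opening only a spatial subset of the 6x6 pool5 mask restricts the
# data energy (and hence the visualization) to those receptive fields.
rng = np.random.default_rng(6)
pool5 = rng.random((256, 6, 6))          # channels x 6 x 6 grid

spatial = np.zeros((6, 6), dtype=bool)
spatial[:2, :] = True                    # e.g. open only the top rows

masked = pool5 * spatial                 # broadcast over channels
print(int(spatial.sum()), "of 36 locations open")
```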

In Fig. 5, we show that the “terrier” class does not have much variation across RFs, as it learns the dog head uniformly. The “monastery” class, on the other hand, displays heterogeneity, as it learns domes at the top of the image, windows in the middle, and doors at the bottom.

Figure 5: Illustration of location-based variation. Different object classes learn different spatial priors.

4.2 Content-variation ()

The fc feature has been used as an image semantic space and has been reported to be indicative for image retrieval.

Semantic Space as a Convex Cone in fc7 Notice that fc8 is a linear combination of fc7. Thus, in the fc7 feature space, if two feature vectors v_1 and v_2 have the same predicted top-1 class, then any feature vector \alpha v_1 + \beta v_2 with \alpha, \beta \geq 0 (a linear cone) will have the same top-1 prediction. Thus, given the training examples of a class, we can cluster their fc7 features (e.g. with non-negative matrix factorization, NMF). In Fig. 6, we show the clustering results on the training examples, which capture different poses or content of the object, which we call the “styles” of the object.
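The cone property is a direct consequence of linearity and can be verified numerically. Toy classifier weights below; note the property holds exactly only when the final-layer bias is ignored, as it is here:

```python
import numpy as np

# If two feature vectors v1, v2 share the same top-1 class under a
# linear classifier W8 (no bias), any non-negative combination of
# them does too: scores(a*v1 + b*v2) = a*scores(v1) + b*scores(v2).
rng = np.random.default_rng(3)
W8 = rng.standard_normal((10, 20))       # 10 classes, 20-d features (toy)

v1 = rng.standard_normal(20)
c = int((W8 @ v1).argmax())
v2 = rng.standard_normal(20)
while int((W8 @ v2).argmax()) != c:      # resample until classes agree
    v2 = rng.standard_normal(20)

for a, b in [(1.0, 1.0), (0.3, 2.0), (5.0, 0.1)]:
    assert int((W8 @ (a * v1 + b * v2)).argmax()) == c
print("cone property holds for class", c)
```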

fc Topic Visualization Given a learned fc topic above, we can apply its relu mask while optimizing Eqn. 2.

Figure 6: Visualization of various topics learned by CNN with retrieved images and our visualization. These templates capture intra-class variation of an object class: (a) scale, (b) angle, (c) color, (d) status and (e) content.

4.3 Ensemble Encoding ()

During training, the dropout trick makes the CNN an ensemble model by randomly setting 50% of the fc features to 0, which is equivalent to turning off half of the relu mask. Below, we try to understand what each single model learns by reconstructing images according to Eqn. 2. We randomly sample 2 pairs of masks corresponding to different styles of the objects and reconstruct the image with 2 different random initializations (Fig. 7). Interestingly, different models capture different styles of the object, while the variation across random initializations has a smaller effect on the style.
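The sampling step can be sketched in a few lines. The single-layer "member" below is a hypothetical stand-in for one dropout sub-model, not the paper's network; each random 50% binary mask selects one ensemble member whose template could then be visualized:

```python
import numpy as np

# Each random 50% dropout mask over the fc units selects one ensemble
# member; fixing a sampled mask during inversion (cf. Eqn. 2) asks
# what that single member has learned.
rng = np.random.default_rng(4)
d6 = 12
W6 = rng.standard_normal((d6, 6))
x = rng.standard_normal(6)

def member_fc(mask):
    # one ensemble member: units outside the mask are forced off
    return mask * np.maximum(W6 @ x, 0)

masks = [rng.random(d6) < 0.5 for _ in range(3)]
outputs = [member_fc(m) for m in masks]
for m, o in zip(masks, outputs):
    assert np.all(o[~m] == 0)            # dropped units stay silent
print([int(m.sum()) for m in masks])     # units kept by each member
```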

Figure 7: Visualization of ensemble encoding. We show that different dropout models capture different aspects of a class in terms of (a) pose, (b) species, (c) spatial layout, and (d) scale.

4.4 Hierarchical Encoding ()

Given an image, we can define its binary code by its relu masks. [1] discovers that these binary codes achieve similar classification results to their corresponding features. Similar to the dropout model visualization, we invert the hash code by masking the weight matrices with these binary codes, namely constraining the CNN to generate images only from these binary masks. We define three different binary hash code representations for an image with increasing amounts of constraint, and during optimization we replace the corresponding relu masks in Eqn. 2 with each of them respectively.
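The fact this inversion relies on is small enough to state in code (toy weights): once the binary relu code of an input is recorded, replacing relu with multiplication by the fixed code reproduces the activations exactly, so constraining generation to a code pins down the active sub-network:

```python
import numpy as np

# Recording the binary relu code of an input and replacing relu by
# the fixed mask reproduces the activations -- the basis for
# inverting a binary hash code.
rng = np.random.default_rng(5)
W6 = rng.standard_normal((8, 10))
x5 = rng.standard_normal(10)

z6 = W6 @ x5
f6 = np.maximum(z6, 0)                   # relu activations
code6 = (z6 > 0)                         # binary relu code of x5

assert np.allclose(code6 * z6, f6)       # fixed mask == relu here
print(code6.astype(int))
```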

Figure 8: Illustration of hierarchical encoding.

5 Experiments

5.1 CNN Visualization Comparison

For CNN feature inversion, we provide a qualitative comparison with the previous state of the art [8]; our results look more natural thanks to the patch-prior regularization (Fig. 9a). For quantitative results, we collect 100 images from different classes in the validation set of ImageNet. We use the same parameters for both [8] and our method, where the only difference is our patch-prior regularization. In addition, we empirically found that whitening the image as a pre-processing step helps to improve the image regularization without much trade-off in the feature reconstruction error. As the error metric, we use the relative distance between the input image and the reconstructed image. We compare our algorithm with two versions of [8]: initialized from white noise ([8]+rand) or from the same patch database as ours ([8]+[7]). As shown in Table 1, ours achieves a significant improvement. Notice that, with the whitening pre-processing and the recommended parameters of [8], most runs have a small feature reconstruction error, and we here focus on the run whose estimate is closest to the ground truth.
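For concreteness, the relative-distance metric can be written as a small helper; the exact normalization used in the paper is an assumption here (L2 distance divided by the input's L2 norm):

```python
import numpy as np

# Relative L2 distance between an input image x and a reconstruction
# x_hat (assumed form of the paper's error metric).
def relative_error(x, x_hat):
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)

x = np.ones((4, 4))
print(relative_error(x, 1.1 * x))        # uniform 10% error -> 0.1
```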

Figure 9: Qualitative comparison for CNN visualization. We compare (a) CNN feature (pool5) inversion with [8] and (b) CNN class visualization with [10].

For the class visualization task, as there is no ground truth for sampling images from a given class, we provide more qualitative results for different kinds of objects (animal, plant, and man-made) in Fig. 9b. Compared to [10], our visualization results are closer to natural images and easier to interpret.

Method   [8]+rand   [8]+[7]   Ours
Error    0.51       0.45      0.32
Table 1: Quantitative comparison of pool5 feature inversion methods. Conditioned on the feature reconstruction error being less than a threshold, we compare the distance of the estimated image from the original input. Our method outperforms the previous state of the art [8] under two different initializations.

5.2 Image Completion with Learned Styles

Given a masked image, we here show qualitative results on object insertion and modification, to explore the potential of such object-level knowledge for low-level vision through the CNN's top-down semantic understanding of the image.

Object insertion from context Given a scene image (Fig. 10a), [3] can only fill in grass texture due to its lack of top-down image understanding. A CNN, on the other hand, can predict relevant object class labels due to their co-occurrence in training images. For this example, the top-1 prediction for the grassland image is “hay”. Our goal here is to inpaint hay objects with different styles.

We first follow Sec. 4.2 to learn the styles of hay objects from the ImageNet validation data. We visualize each topic with a natural image retrieved by it in the top row (Fig. 10a); the topics correspond to different scales of hay. Given an fc style topic, we can insert objects into the image by a procedure similar to our fc topic visualization, where only pixels inside the mask are updated with the gradient from Eqn. 2. In the second row, we see different styles of hay blended with the grassland (Fig. 10a).

Object Modification Besides predicting an object class from context, a CNN can locate the key parts of an object by finding regions of pixels with high-magnitude gradients [10]. Given an input image of a Persian cat (Fig. 10b), we use simple thresholding and hole filling to find the support of its key part, the head. Instead of filling the mask with fur as PatchMatch does, the CNN predicts the masked image as “Angora” based on the fur information from the body. Following a similar procedure as above, we first find three styles of Angoras, corresponding to different sub-species with different physical features (e.g. head color), visualized with retrieved images in the third row. Our object modification result is shown in the bottom row, which changes the original Persian cat in an interesting way. Notice that the whole object modification pipeline is automatic; we only need to specify the style of the Angora, as the mask is generated from the key object part located by the CNN.

Figure 10: Image insertion and modification results. Given an image, the CNN can not only use its semantic understanding to predict the object to insert or the object part to change, but also fill the mask with a specified fc style using a technique similar to fc style visualization.

6 Conclusion

In this work, we analyze how a CNN models the intra-class variation of each object class in its fully-connected layers through an improved visualization technique. We find that the CNN not only captures location-variation and style-variation, but also encodes them in a hierarchical and ensemble way.


  • [1] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In Computer Vision–ECCV 2014, pages 329–344. Springer, 2014.
  • [2] P. Baldi and P. J. Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pages 2814–2822, 2013.
  • [3] C. Barnes, E. Shechtman, A. Finkelstein, and D. Goldman. Patchmatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics-TOG, 28(3):24, 2009.
  • [4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
  • [5] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [7] J. L. Long, N. Zhang, and T. Darrell. Do convnets learn correspondence? In Advances in Neural Information Processing Systems, pages 1601–1609, 2014.
  • [8] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. CVPR, 2015.
  • [9] P. Isola, J. J. Lim, and E. H. Adelson. Discovering states and transformations in image collections. CVPR, 2015.
  • [10] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. ICLR, 2014.
  • [11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • [12] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833. Springer, 2014.
  • [13] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns. ICLR, 2014.