Deep Spatial Pyramid: The Devil is Once Again in the Details

In this paper we show that by carefully making good choices for various detailed but important factors in a visual recognition framework using deep learning features, one can achieve a simple, efficient, yet highly accurate image classification system. We first list 5 important factors, based on both existing research and ideas proposed in this paper. These important detailed factors include: 1) ℓ_2 matrix normalization is more effective than no normalization or ℓ_2 vector normalization, 2) the proposed natural deep spatial pyramid is very effective, and 3) a very small K in Fisher Vectors surprisingly achieves higher accuracy than the normally used large K values. Along with other choices (convolutional activations and multiple scales), the proposed DSP framework is not only intuitive and efficient, but also achieves excellent classification accuracy on many benchmark datasets. For example, DSP's accuracy on SUN397 is 59.78%, significantly higher than the previous state-of-the-art (53.86%).



1 Introduction

Feature representation is among the most important topics (if not the most important one) in current state-of-the-art visual recognition tasks. Over the past decade, handcrafted features (e.g., SIFT and HOG) were very popular, and they were often encoded into a high dimensional vector by the Bag-of-Visual-Words (BOVW) framework [18]. The BOVW representation was further improved by the Vector of Locally Aggregated Descriptors (VLAD) [10] and the Fisher Vector (FV) [14], which add higher order statistics. However, such handcrafted features are significantly outperformed by the recent deep features extracted from convolutional neural networks (CNNs).

In spite of the impressive results achieved by deep features, many factors and details can have a huge impact on the recognition accuracy of a deep feature representation. One such factor is how the deep net is trained. Zhou et al. [30] evaluated deep features from the same network architecture learned from different training sets (i.e., ImageNet and Places data), and achieved high classification performance on scene recognition tasks with the Places-CNN feature. Chatfield et al. [1] studied other factors, including the architecture of the deep net and data augmentation, etc.

After a deep net has been successfully trained, more decisions await: how shall we use the deep features for image recognition? Studies have been carried out very recently, and some important details have been worked out. However, a systematic study of “what factors are out there?” and “what choices should be made?” is missing. In this paper, we present our study of these questions. Specifically, suppose we are given a pre-trained deep CNN model,

  • What are important factors in utilizing this model? Based on existing studies in the literature and our new proposals, we make a list of five important factors.

  • What decisions are the best concerning these factors? We carefully evaluate different choices and present our answers to this question. Some choices (e.g., the choice of K in FV) are quite different from previous practices in the community.

  • What effects do these factors have? We show that they are key to high recognition accuracy. By combining the best choices for the 5 factors we raised, we propose Deep Spatial Pyramid (DSP), a framework that properly utilizes deep CNN features. DSP has the following properties:

    • High accuracy. DSP improves upon the best reported accuracy on many benchmark datasets in our evaluation. For example, it raises the accuracy on SUN397 from 53.86% [30] to 59.78%, and on Caltech-101 from 93.42% [9] to 95.11%. Note that these previous state-of-the-art results are also based on CNN features.

    • High efficiency and flexibility. DSP achieves high processing speed, with roughly 150 ms to process an image. DSP also processes images of any aspect ratio or resolution.

    • Small storage cost. The final DSP representation is memory-efficient, with around 12k dimensions. This length is much shorter than existing combinations of CNN features with FV / VLAD, and is advantageous in large-scale problems.

We will first present the framework, preliminaries, and the list of important factors in Sec. 2. The study of the best decisions for these factors is presented in Sec. 3. The study of K, however, is special enough to merit its own section (Sec. 4). DSP is evaluated as a whole system in Sec. 5, where it is compared with state-of-the-art visual recognition methods. Sec. 6 concludes this paper.

2 The framework and important factors

Our study follows the framework illustrated in Fig. 1. In the first step, we feed an input image of arbitrary resolution into a pre-trained CNN model to extract deep activations. Then, a visual dictionary with K dictionary items is trained on the deep descriptors from training images. The third step overlays a spatial pyramid partition on the deep activations of an image, splitting them into m blocks across the pyramid levels. Each spatial block is represented as a vector by the improved Fisher Vector, so the m blocks correspond to m FVs. In the fourth and final step, we concatenate the m FVs to form a 2Kdm-dimensional feature vector as the final image-level representation.

Figure 1: The image classification framework. DSP feeds an arbitrary resolution input image into a pre-trained CNN model to extract deep activations. A GMM visual dictionary is trained based on the deep descriptors from training images. Then, a spatial pyramid partitions the deep activations of an image into m blocks across the pyramid levels. In this way, the activations in each block are represented as a single vector by the improved Fisher Vector. Finally, we concatenate the m single vectors to form a 2Kdm-dimensional feature vector as the final image representation.

Our framework does not consider how the pre-trained CNN is obtained or how an image is classified after its representation is obtained. These can be viewed as preliminary factors, and we follow the commonly used decisions for them in the literature.

In practice, some CNN models (e.g., those of Krizhevsky et al. [11] and Zeiler and Fergus [28]) are popularly used as deep feature extractors in image related tasks. Recently, however, even deeper neural networks have been shown to further improve CNN performance, characterized by deeper and wider architectures and smaller convolutional filters when compared to traditional CNNs such as [11, 28]. Examples of deeper nets include GoogLeNet [19] and VGG Net-D [17]. Our work is based on the network architecture released by [17] (i.e., VGG Net-D). This network consists of 13 layers of 3×3 convolutional kernels, with 5 max-pooling layers interspersed, and is concluded by 3 fully connected layers. The width of this network starts from 64 in the first layer and increases by a factor of 2 after each max-pooling layer, until it reaches 512. For classification, we use a linear SVM classifier.

In the rest of this paper, we follow the notations in [6]. We use the term “feature map” to indicate the convolutional result (after applying max-pooling) of one filter, the term “activations” to indicate the feature maps of all filters in a convolutional layer, and the term “descriptor” to indicate the d-dimensional component vector of the activations. “pool” refers to the activations of the max-pooled last convolutional layer, and “fc” refers to the activations of the last fully connected layer.

With these preliminaries and notations, we now discuss the important factors inside this framework.

  • Which activations to use? Deep features for an image can be extracted from either the convolutional layers or the fully connected layers of a pre-trained CNN. The original practice is to use the last fully connected layer directly for classification [11]. More recently, activations from the convolutional layers have demonstrated their value [28, 13, 2, 25]. Which one shall we adopt?

  • How to normalize the deep features before feeding them into a classifier or the next level of processing? It is not yet a common practice to normalize CNN activations. What are the viable choices and which one is the best?

  • How many components in the FV representation? The GMM model in FV consists of K Gaussian components. It is known that in general a large K (e.g., 256) leads to high accuracy for fully connected activations [7, 27], dense SIFT [14] and action features [23]. However, a large K leads to a very long (hundreds of thousands of dimensions) representation. Is a large K really necessary?

  • Shall we capture spatial information (and how)? A general CNN requires a fixed input image size. He et al. [9] proposed SPP-Net, which removes the fixed-size constraint with a Spatial Pyramid Pooling (SPP) layer: the deep activations of the last convolutional layer are pooled to generate fixed length outputs, which are then fed into the fully connected layers. Is there a simpler and more natural way to capture spatial information?

  • Shall we use information from multiple scales? Yoo et al. [27] replace the fully connected layers with equivalent convolutional layers to obtain a large number of dense deep descriptors. Then, all the activations are merged into a single vector by Multi-scale Pyramid Pooling (MPP), which utilizes the activations of CNNs at multiple scales. MPP, however, is computationally expensive. Is there an efficient way to capture information from multiple scales?

These factors may seem too detailed to be important. However, existing methods adopted very different decisions to these questions, and these differences may well explain their performance differences. We summarize these differences in Table 1.

 Methods  DF  Resolution  Norm    PCA  K        SP  Ms
 SPP-net  C   fixed       -       -    -        ✓   -
 MOP      F   fixed       ℓ2      ✓    100      -   ✓
 MPP      C   fixed       ℓ2      ✓    256      -   ✓
 D-CNN    C   any         -       -    64       -   -
 DSP      C   any         matrix  -    1,2,3,4  ✓   ✓
Table 1: Summary of decisions in related methods

In Table 1, “DF” refers to deep features, where “F” and “C” represent the fully connected and convolutional layers, respectively. “Norm” refers to how the deep activations are normalized; “K” indicates the number of visual words or Gaussian components; “SP” refers to spatial pyramid; “Ms” refers to multiple scales. In addition, “-” means that a method does not involve the corresponding factor. Some methods also use PCA to reduce the dimensionality of deep activations.

From Table 1, it is clear that the proposed DSP is flexible (accepting images of any size), efficient (fully convolutional with a very small K), and makes full use of the image (spatial pyramid and multiple scales). We will explain how these decisions and choices are made in the next section.

3 Factors, choices and decisions

We study the 5 factors in this section, in Secs. 3.1–3.4, respectively. The effect of K, however, is studied separately in Sec. 4.

3.1 Convolutional vs. fully connected layer

Convolutional neural networks consist of alternately stacked convolutional layers and pooling layers, followed by one or more fully connected layers. The convolutional layers generate feature maps by applying linear convolutional filters followed by nonlinear activation functions such as rectified linear units; the feature maps are then max-pooled within local neighborhoods. Finally, the activations of the last convolutional layer are fed into the fully connected layers, followed by a soft-max classifier.

The feature maps of top convolutional layers are known to contain mid- and high-level information, e.g., object parts or complete objects [29]. In Fig. 2, we visualize feature maps of an input image generated by the last convolutional layer. In this figure, the strongest responses of the 194th and 207th feature maps correspond to the person and the motorcycle in the input image, respectively. Thus, one major difference between convolutional and fully connected layer activations is that the former are directly embedded with rich semantic information about image patches, while the latter are not necessarily so.

Figure 2: Visualization of the feature maps. (2a) is an image from the PASCAL VOC2007 dataset, (2b) and (2c) are different feature maps of the input image.

Furthermore, the fully connected layers require a fixed input image size (e.g., 224×224). On the contrary, convolutional layers accept input images of arbitrary resolution or aspect ratio. The pool activations can be formulated as an order-3 tensor of size h × w × d, which includes h × w cells, and each cell contains one d-dimensional deep descriptor. For example, we get 7×7×512 activations if the input image size is 224×224. Convolutional layer deep descriptors have been successfully used in [13, 2, 25].

These deep descriptors contain more spatial information than the activations of the fully connected layers: e.g., the top-left cell's d-dimensional deep descriptor is generated using only the top-left part of the input image, ignoring other pixels. In addition, the fully connected layers have a large computational cost, because they contain roughly 90% of all the parameters of the whole CNN model.

Thus, in DSP we use a fully convolutional network by removing the fully connected layers.
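The mapping from an arbitrary-size input to a grid of deep descriptors can be sketched as follows. This is an illustrative NumPy sketch, not the paper's code: it assumes VGG Net-D's five 2×2 max-pooling layers (an overall downsampling factor of 32 with same-size convolutions) and a last convolutional layer of 512 filters; the random tensor stands in for real activations.

```python
import numpy as np

def pool_grid_shape(height, width, stride=32, depth=512):
    """Spatial grid of the max-pooled last conv layer for a VGG-like net.

    Five 2x2 max-pooling layers downsample each spatial dimension
    by 2**5 = 32 (assuming convolutions preserve spatial size).
    """
    return (height // stride, width // stride, depth)

def to_descriptor_matrix(activations):
    """Flatten an h x w x d activation tensor into an (h*w) x d matrix X,
    one d-dimensional deep descriptor per spatial cell."""
    h, w, d = activations.shape
    return activations.reshape(h * w, d)

# A 224x224 input yields a 7x7x512 tensor, i.e. 49 deep descriptors.
h, w, d = pool_grid_shape(224, 224)
X = to_descriptor_matrix(np.random.rand(h, w, d))
```

Because only integer division by the downsampling factor is involved, any input resolution or aspect ratio produces a valid (if differently sized) descriptor grid.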

3.2 Normalization and pooling of deep descriptors

Let X (an n × d matrix) be the set of d-dimensional deep descriptors extracted from an image via a pre-trained CNN model. X is usually processed by dimensionality reduction methods such as PCA before being pooled into a single vector using VLAD or FV [7, 27]. PCA is usually applied to SIFT features or fully connected layer activations, since it is empirically shown to improve the overall recognition performance. However, our experiments show that PCA significantly hurts recognition when applied to fully convolutional activations. Thus, it is not applied to fully convolutional deep descriptors in this paper.

In addition, the deep descriptors inside X are not normalized in current processing pipelines for deep visual descriptors [2]. We first try to normalize X with ℓ_2 vector normalization (i.e., each descriptor is divided by its own ℓ_2 norm), which leads to better results than no normalization on most datasets, except Stanford40, as shown in Table 2.

We also propose a novel ℓ_2 matrix normalization (i.e., X ← X/‖X‖_2), where ‖X‖_2 is the matrix spectral norm, i.e., the largest singular value of X. This normalization has the benefit that it normalizes X using information from the entire image. It is a bit surprising to observe that it is more effective than the commonly used ℓ_2 vector normalization, sometimes by a large margin. An intuitive interpretation is that the matrix normalization uses global information, making it more robust to changes such as illumination and scale.
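The two normalizations differ only in the scale they divide by, as this minimal NumPy sketch shows (the random matrix is a placeholder for real deep descriptors):

```python
import numpy as np

def l2_vector_normalize(X):
    """Normalize each d-dimensional descriptor (row of X) independently
    by its own l2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, 1e-12)

def l2_matrix_normalize(X):
    """Divide the whole n x d descriptor matrix by its spectral norm
    (largest singular value), so all descriptors share one global scale."""
    return X / np.linalg.norm(X, ord=2)

X = np.random.rand(49, 512)   # e.g. 7x7 grid of 512-dim descriptors
Xm = l2_matrix_normalize(X)
```

Note that `np.linalg.norm(X, ord=2)` on a 2-D array returns the largest singular value, so after matrix normalization the descriptors keep their relative magnitudes, unlike vector normalization which forces every row to unit length.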

In order to evaluate the effect of these normalizations and of PCA on classification performance, we use 4 datasets. We use the original resolution of input images without cropping or warping, and pool activations using FV with K = 4 (i.e., the GMM has 4 Gaussian components). The experimental results are reported in Table 2. The ℓ_2 matrix normalization before using FV is found to be important for better performance.

 Norm          Caltech101  Stanford40  Scene15  Indoor67
 No            90.63       74.84       90.75    71.20
 ℓ2 vector     92.02       73.41       90.92    74.03
 ℓ2 matrix     92.56       78.43       90.99    74.55
 PCA + matrix  91.95       75.69       90.22    71.79
Table 2: Results of the different normalization methods

The size of the pool activations is not fixed, because input images have arbitrary sizes. However, classifiers (e.g., SVM or soft-max) require fixed length vectors. Thus, all the deep descriptors of an image must be pooled to form a single vector. We use the Fisher Vector (FV) to encode the deep descriptors.

We denote the parameters of the GMM with K components by λ = {(w_k, μ_k, Σ_k) : k = 1, …, K}, where w_k, μ_k and Σ_k are the mixture weight, mean vector and covariance matrix of the k-th Gaussian component, respectively. The covariance matrices are diagonal, and σ_k^2 are the variance vectors. Let γ_i(k) be the soft-assignment weight of descriptor x_i with respect to the k-th Gaussian; the FV gradients corresponding to μ_k and σ_k are as follows [14]:

    f_{μ_k} = (1 / (n √w_k)) Σ_{i=1}^{n} γ_i(k) (x_i − μ_k) / σ_k,
    f_{σ_k} = (1 / (n √(2 w_k))) Σ_{i=1}^{n} γ_i(k) [ (x_i − μ_k)^2 / σ_k^2 − 1 ].

Note that f_{μ_k} and f_{σ_k} are both d-dimensional vectors. The final Fisher Vector is the concatenation of the gradients f_{μ_k} and f_{σ_k} for all K Gaussian components. Thus, the FV represents the set of deep descriptors with a 2Kd-dimensional vector. In addition, the Fisher Vector is improved by power-normalization with a factor of 0.5, followed by ℓ_2 vector normalization [14].

We will further study how to choose a proper K for FV in Sec. 4.

3.3 Deep spatial pyramid

The proposed method is named DSP (Deep Spatial Pyramid), since adding spatial pyramid information is the key part of DSP. Adding spatial information through a spatial pyramid [12] has been shown to significantly improve image recognition performance when dense SIFT features are used. How can we efficiently and effectively utilize spatial information with fully convolutional activations?

The SPP-net method [9] adds a spatial pyramid pooling layer to deep nets, which has improved recognition performance. However, since we are using FV to pool activations from a fully convolutional network, a more intuitive and natural way exists.

As previously discussed, one single cell (deep descriptor) in the last convolutional layer corresponds to one local image patch in the input image, and the set of all convolutional layer cells forms a regular grid of image patches in the input image. This is a direct analogy to the dense SIFT feature extraction framework: instead of a regular grid of SIFT vectors extracted from small local image patches, a grid of deep descriptors is extracted from larger image patches by a CNN.

Thus, we can easily form a natural deep spatial pyramid by partitioning an image into sub-regions and computing local features inside each sub-region. In practice, we just need to spatially partition the cells of the activations of the last convolutional layer, and then pool the deep descriptors in each region separately using FV. The operation of DSP is illustrated in Fig. 3.

Figure 3: Illustration of the level 1 and 0 deep spatial pyramid.

Level 0 simply aggregates all cells using FV. Level 1, however, splits the cells into 5 regions according to their spatial locations: the 4 quadrants and 1 centerpiece. Then, 5 FVs are generated from the activations inside each spatial region. Note that the level 1 spatial pyramid we use is different from the classic one in [12]: we follow Wu and Rehg [22] in using an additional spatial region in the center of the image. A DSP using two levels thus concatenates all 6 FVs from level 0 and level 1 to form the final image representation.
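The level-0 plus level-1 partition can be sketched as boolean masks over the h × w grid of cells; one FV is then pooled per mask. This is an illustrative sketch: the half-size centered block is our assumption about the extent of the centerpiece region, which the paper does not specify here.

```python
import numpy as np

def dsp_regions(h, w):
    """Six boolean masks over the h x w cell grid: the whole grid
    (level 0), plus the 4 quadrants and a centered block (level 1)."""
    full = np.ones((h, w), dtype=bool)
    hh, hw = h // 2, w // 2
    quads = []
    for rs, re in ((0, hh), (hh, h)):
        for cs, ce in ((0, hw), (hw, w)):
            m = np.zeros((h, w), dtype=bool)
            m[rs:re, cs:ce] = True
            quads.append(m)
    center = np.zeros((h, w), dtype=bool)
    center[h // 4: h // 4 + hh, w // 4: w // 4 + hw] = True  # assumed half-size center
    return [full] + quads + [center]

regions = dsp_regions(7, 7)   # one FV per region -> 6 FVs to concatenate
```

Because the partition is applied to the activation grid rather than to image pixels, no re-running of the CNN is needed per region.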

This proposed DSP method is summarized in Algorithm 1.

1:  Input:
2:   An input image I
3:   A pre-trained CNN model
4:  Procedure:
5:   Extract deep descriptors X from I using the pre-trained model
6:   Perform matrix normalization X ← X/‖X‖_2
7:   (Estimate a GMM using the training set)
8:   Generate a spatial pyramid for X
9:   for all spatial regions do
10:   Pool the deep descriptors inside the region into an FV
11:   Apply power-normalization to the FV
12:   Apply ℓ_2 normalization to the FV
13:  end for
14:  Concatenate the FVs of all regions to form the final spatial pyramid representation f
15:  Output: f.
Algorithm 1 The DSP pipeline

3.4 Multi-scale DSP

In order to capture variations of the activations caused by variations of the objects in an image, we generate a multiple scale pyramid, extracted from differently rescaled versions of the original input image. We feed the images of all scales into a pre-trained CNN model and extract deep activations. At each scale s, the corresponding rescaled image is encoded by DSP into a vector f^(s). The vectors of all S scales are then merged into a single vector by average pooling:

    f = (1/S) Σ_{s=1}^{S} f^(s),     (3)

where f^(s) is the DSP representation extracted from scale level s. Finally, ℓ_2 normalization is applied to f. Note that each vector f^(s) is already ℓ_2 normalized, as shown in Algorithm 1.

The multi-scale DSP is related to the MPP proposed by Yoo et al. [27]. A key difference between our method and MPP is that Ms-DSP encodes spatial information while MPP does not.
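The average pooling over scales reduces to a few lines of NumPy; here the per-scale DSP vectors are random stand-ins, and the 12,288-dimensional length matches the two-level, K = 2, d = 512 configuration discussed later:

```python
import numpy as np

def merge_scales(per_scale_vectors):
    """Average-pool the l2-normalized DSP vectors from S scales,
    then l2-normalize the result (cf. Eq. 3)."""
    F = np.stack([f / np.linalg.norm(f) for f in per_scale_vectors])
    f = F.mean(axis=0)
    return f / np.linalg.norm(f)

rng = np.random.default_rng(0)
f_ms = merge_scales([rng.random(12288) for _ in range(5)])  # five scales
```

Average pooling keeps the merged vector the same length as a single-scale DSP vector, so the classifier is unchanged whether one or many scales are used.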

4 A small K is better for FV in DSP

Figure 4: Plots of the GMM prior values w_k in DSP. For each of the seven datasets used in our experiments, we set the number of Gaussian components K to 64 or 256. (a) and (b) are plots for the Caltech-101 dataset, with K being 64 and 256, respectively. The meaning of the other plots can be deduced from their captions similarly. Note that the plots for Scene15 are not similar to the other plots: DSP achieves satisfactory classification accuracy on Scene15 only when K is larger than 4, a trend that is consistent with the plots shown in (g) and (h).
(a) Caltech-101 and Scene15
(b) Stanford40 and Indoor67
Figure 5: Classification performance of DSP and Ms-DSP with different numbers of Gaussians

In this section, we discuss one key characteristic of DSP, i.e., the number K of the GMM's components.

Our experiments show that in DSP, when the number of GMM components is small (e.g., from 1 to 4), it achieves satisfactory classification performance. In fact, when different K values are tried, the highest recognition accuracy is usually achieved by setting K to 1 or 2!

This phenomenon is not consistent with common practice in image classification using local descriptors with FV encoding. When deep learning features are used together with FV, a large K value is also common. For example, Yoo et al. [27] set K to 256 when training their visual vocabulary. More previous examples of large K values can be found in Table 1. Having a small K value is very beneficial in terms of CPU and storage costs; but why does DSP require only a small K?

We believe the answer is that DSP uses a small number of deep descriptors per image, i.e., n is a small integer. We usually extract no more than 100 512-dimensional deep descriptors from the last convolutional layer of one image, while [27] represented one image with 4,410 vectors of 4,096-dimensional dense CNN activations. If the value of K is set to a large number (e.g., 128 or 256), the resulting FV representation will be problematic.

First, if a large K is used in DSP, there will not be enough deep descriptors to estimate an accurate GMM model, because each training image contributes only a few deep descriptors. An inaccurate GMM model will seriously hurt classification performance. Second, many FV components will contain only zeros, because there are more Gaussian components than CNN descriptors. We conjecture that this will cause FV to lose accuracy.

We also study this phenomenon empirically. In Fig. 4, we plot the distribution of the GMM components' priors (i.e., w_k) in DSP. There are 14 plots for the 7 datasets used in our experiments. Two plots are shown for each dataset, corresponding to different numbers of GMM components, i.e., K = 64 and K = 256. The horizontal axis indexes the Gaussian components, and the vertical axis shows the value of w_k for each component.

It is obvious that, for most datasets, one or two w_k values are much larger than the rest. For example, in the SUN397 dataset, the two tall bars indicate that two w_k values are above 0.3, and their sum is around 0.7. In other words, only 2 Gaussian components are responsible for more than 70% of the variations of the distribution. The remaining 30% might be related to noisy or background image patches. Thus, K = 2 might be the best choice in this particular case. In most datasets we observe the same phenomenon: one or two Gaussian components dominate the entire distribution. This observation might explain why DSP needs only a small number of Gaussian components. Since a small K value also leads to a much lower computational cost, DSP is efficient enough to handle large scale image classification tasks.

We further evaluate the impact of K in DSP and multiple scale DSP (Ms-DSP). Fig. 5 shows the classification results as a function of the number of Gaussians (i.e., K) of the GMM, where K is increased by factors of 2. A smaller K always obtains better classification performance for both DSP and Ms-DSP. As K increases, DSP and Ms-DSP exhibit a drop in discriminative ability: the DSP or Ms-DSP feature vector may become too sparse when K is large, which is detrimental to classification. When K = 2, a DSP representation has only 2 × 2 × 512 × 6 = 12,288 dimensions. The entire DSP pipeline (from reading in an image to emitting a prediction) requires on average 0.15 seconds per image.

For a fixed K, Ms-DSP always significantly outperforms DSP. This is not surprising since, for a given K, Ms-DSP captures additional information from rescaled images, which DSP does not have access to.

5 Experiments

The purpose of this section is to evaluate the performance of DSP as a complete pipeline. We report results on three object recognition datasets, Caltech-101 [5], Caltech-256 [8] and PASCAL VOC 2007 [3]; three scene recognition datasets, Scene15 [12], MIT Indoor67 [15] and SUN397 [16]; and one action recognition dataset, Stanford40 [26]. Except for PASCAL VOC 2007 and MIT Indoor67, which have fixed training and test splits, all experiments are repeated on three randomly sampled train/test splits and we report the average.

5.1 Datasets

Caltech-101 [5] contains 9K labeled images of 101 object categories and a background category. We follow the procedure of [5] and randomly select 30 images per category for training, testing on up to 50 images per class in every split. Caltech-256 [8], with 31K images and 257 classes, is an extension of Caltech-101. Following [8], each split contains 60 training images per class and the rest are used for testing. For PASCAL VOC 2007, which contains 20 object classes, we use its standard protocol: we measure the average precision (AP) and report the mean AP (mAP) over the 20 categories.

Scene15 is composed of 15 different kinds of scenes, where each category has 200 to 400 images. We randomly select 100 images per class for training and the rest for testing, following [30]. MIT Indoor67 [15] is a challenging indoor dataset compared with outdoor scene recognition. It has 15,620 images in 67 indoor scene categories. The standard split [15] for this dataset consists of 80 training and 20 test images per category. SUN397 [24] is the largest dataset for scene recognition. It contains 397 categories and each category has at least 100 images. The training and test splits are fixed and publicly available from [24]; each split has 50 training and 50 test images per category. We use the first three of the 10 public splits in our experiments.

Stanford40 [26] contains 40 diverse daily human actions, with 180 to 300 images per category. In each split, we randomly select 100 images per class for training and use the remaining images for testing.

In our experiments, the average accuracy rate is used to evaluate classification performance on Caltech-101, Caltech-256, MIT Indoor67, Scene15, SUN397, and Stanford40. For PASCAL VOC 2007, we employ mean average precision (mAP) to evaluate our proposed method and other approaches.

5.2 Experiment details

In our DSP, VGG Net-D [17] is employed as the pre-trained CNN model to extract deep activations. For simplicity, the pre-trained CNN model weights are kept fixed without fine-tuning. Note that we employ VGG Net-D without its fully connected layers in our experiments, so it can accept input images of arbitrary size. Input images do not need to be resized to a fixed aspect ratio. However, for running efficiency, an image is resized such that the smallest edge of the input image is not smaller than 224 and the largest edge is not larger than 1120. In addition, each image is preprocessed by subtracting the per-pixel mean (of the ImageNet images, provided along with the CNN model).
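The resizing rule above (shortest edge at least 224, longest at most 1120, aspect ratio preserved) can be sketched as follows. This is our reading of the rule, not the paper's code; in particular, letting the upper bound win when an extreme aspect ratio makes both constraints unsatisfiable is an assumption.

```python
def resize_dims(h, w, lo=224, hi=1120):
    """Scale (h, w) uniformly so that min(h, w) >= lo and
    max(h, w) <= hi. If the aspect ratio makes both constraints
    impossible to satisfy, the upper bound wins (assumed)."""
    scale = 1.0
    if min(h, w) < lo:
        scale = lo / min(h, w)          # enlarge so the short edge reaches lo
    if max(h, w) * scale > hi:
        scale = hi / max(h, w)          # shrink so the long edge fits hi
    return round(h * scale), round(w * scale)
```

For example, a 112×224 image is doubled to 224×448, while a 2000×1000 image is shrunk to 1120×560; images already inside both bounds are left untouched.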

We use K = 2 in FV in this section. An image is represented by the concatenation of the FVs from all the sub-blocks of a two-level deep spatial pyramid. For the multi-scale setting, the original input image is rescaled to five scales, and the FVs of all five scales are merged into a single vector by average pooling, as in Eq. 3.

One-versus-rest linear SVM is used for classification. Following [30], all classifiers use the same parameters for fair comparisons. Our experiments use the following open source libraries: VLFeat [20], MatConvNet [21] and LIBLINEAR [4].

5.3 Main results

 Methods   Description  Caltech-101  Caltech-256  VOC 2007  Scene15     SUN397      MIT Indoor67  Stanford40
 SoA       [9]          93.42±0.50   -            82.44     -           -           -             -
           [7]          -            -            -         -           51.98       68.88         -
           [27]         -            -            82.13     -           -           77.56         -
           [30]         84.79±0.66   65.06±0.25   -         91.59±0.48  53.86±0.21  70.80         55.28±0.64
           [1]          88.35±0.56   77.61±0.12   82.4      -           -           -             -
           [17]         92.7±0.5(*)  86.2±0.3(*)  89.7      -           -           -             -
 Baseline  Fc           90.55±0.31   82.02±0.12   84.61     89.88±0.76  53.90±0.45  69.78         71.53±0.34
           Pool+FV      90.03±0.75   79.48±0.53   88.12     89.00±0.42  51.39±0.51  71.57         73.96±0.52
 Our       DSP          94.66±0.26   84.22±0.11   88.60     91.13±0.77  57.27±0.34  76.34         79.75±0.34
           Ms-DSP       95.11±0.26   85.47±0.14   89.31     91.78±0.22  59.78±0.47  78.28         80.81±0.29
Table 3: Recognition accuracy (or mAP) comparisons on seven datasets. The highest accuracy (mAP) in each column is marked in bold. The results marked (*) were achieved by [17] using VGG Net-D and VGG Net-E, and were measured by mean class recall on Caltech-101 and Caltech-256 instead of accuracy.
 Methods Description aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv
 Baseline Fc 96.27 90.81 93.81 92.40 58.24 86.01 90.92 91.91 69.45 78.08 79.36 90.87 91.69 88.98 95.35 61.31 88.14 71.68 96.53 80.28
Pool+FV 97.23 94.44 96.12 93.54 70.99 88.45 93.43 95.48 71.16 81.33 82.21 93.55 95.08 90.51 97.64 69.84 88.70 77.42 96.92 88.29
Our DSP 97.45 94.12 96.79 94.98 69.64 87.99 93.28 95.76 72.75 81.65 85.07 94.31 94.84 91.57 97.53 69.61 89.42 80.14 97.47 87.64
Ms-DSP 97.67 95.24 96.84 94.47 70.58 89.32 93.50 95.92 74.61 83.99 85.68 95.27 95.37 92.02 97.42 71.05 90.82 80.57 97.69 88.14
Table 4: Per-class classification performance on PASCAL VOC 2007.

State-of-the-art and two baseline results are reported in Table 3. The first baseline method is fc, extracted from the last fully connected layer. To extract the fc feature, we resize the image to 224×224. ℓ_2-normalization is applied to the fc activations before employing SVM, as suggested in [1]. The other baseline is pool+FV, where the deep descriptors are aggregated into a single vector by orderless FV pooling. For a fair comparison, it uses the same input image resolution as our DSP.

On most datasets, fc already performs well. Pool+FV produces quite good results even though the pool activations are computed using only about 10% of the parameters of the complete CNN model, which shows that fully convolutional features (with a small K in FV and matrix normalization) are powerful, especially on VOC2007 (84.61% → 88.12%) and Stanford40 (71.53% → 73.96%).

DSP and multi-scale DSP significantly outperform the baseline and state-of-the-art methods. Compared to the baselines, DSP improves performance on all datasets by 1–5%, especially on SUN397 (53.90% → 57.27%) and Stanford40 (73.96% → 79.75%). This gain is mainly due to the fact that DSP captures spatial information on top of the pool activations. On the other hand, the fully convolutional network relaxes the constraint that input images must have the same fixed size, so the full image can be fed into a pre-trained CNN without changing its aspect ratio. Combining multiple scales with DSP (Ms-DSP) achieves the best recognition performance on all datasets. Since a fully convolutional network and a small K are used, Ms-DSP is still very efficient.

On Caltech-101 and Caltech-256, both DSP and Ms-DSP achieve mean recall significantly higher than that of [17] (92.7% for Caltech-101 and 86.2% for Caltech-256).

In addition, on the VOC2007 dataset, our best performance is slightly lower than that of [17]. However, [17] used a fused feature computed from two pre-trained CNNs (i.e., VGG Net-D and VGG Net-E). The detailed VOC results in Table 4 show that our methods are better than fc in every category.

6 Conclusion

In order to build a powerful deep feature representation, the details have to be made right; that is, the choices for important factors must be carefully studied. In this paper, we picked a list of five important factors and provided our answers to them. The main findings of this paper form a complete pipeline, DSP (deep spatial pyramid), which integrates the following components: activations from the last convolutional layer, natural handling of input images of any size instead of a fixed size, dense deep features extracted from multiple scales, and, most importantly, a natural way to build a spatial pyramid in deep learning. DSP, despite being simple and efficient, achieves excellent performance on many benchmark datasets.

In particular, we emphasize the following new findings.

  • Normalization: ℓ_2 matrix normalization is more effective than no normalization or ℓ_2 vector normalization.

  • DSP: DSP can effectively capture the spatial information in a natural and efficient manner.

  • K in FV: Pooling deep descriptors needs only a small number of Gaussian components in the Fisher Vector, which leads to lower computational cost.

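The normalization finding can be made concrete with a small sketch. Below, a d×N matrix of deep activations (toy random values; the 512×49 shape is merely an illustrative conv-layer size) is normalized two ways: ℓ_2 matrix normalization scales the whole matrix by its Frobenius norm, while ℓ_2 vector normalization scales each column (descriptor) to unit length independently.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((512, 49))  # toy d x N activation matrix: d=512 channels, N=7x7 locations

# l2 matrix normalization: one global scale, the matrix's Frobenius norm
D_mat = D / np.linalg.norm(D)

# l2 vector normalization: each column (one location's descriptor) scaled to unit l2 norm
D_vec = D / np.linalg.norm(D, axis=0, keepdims=True)
```

The key difference is that matrix normalization preserves the relative magnitudes between spatial locations, whereas vector normalization discards them, which is one plausible reason for the accuracy gap reported above.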
Other factors and details can be further considered within the DSP framework, which we will study in the future. For example, convolutional activations from multiple layers (cross-layer [13]) might further improve classification accuracy, and VLAD might be a better fit than FV for aggregating deep convolutional activations [25].


  • [1] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
  • [2] M. Cimpoi, S. Maji, and A. Vedaldi. Deep convolutional filter banks for texture recognition and segmentation. In CVPR, 2015.
  • [3] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results, 2007.
  • [4] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008. Software available at
  • [5] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. CVIU, 106(1):59–70, 2007.
  • [6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580–587, 2014.
  • [7] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In ECCV, pages 392–407, 2014.
  • [8] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset, 2007.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pages 346–361, 2014.
  • [10] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, pages 3304–3311, 2010.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
  • [12] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, volume 2, pages 2169–2178, 2006.
  • [13] L. Liu, C. Shen, and A. van den Hengel. The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In CVPR, 2015.
  • [14] F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In ECCV, pages 143–156, 2010.
  • [15] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, pages 413–420, 2009.
  • [16] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the fisher vector: Theory and practice. IJCV, 105(3):222–245, 2013.
  • [17] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [18] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, volume 2, pages 1470–1477, 2003.
  • [19] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv:1409.4842.
  • [20] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008. Software available at
  • [21] A. Vedaldi and K. Lenc. MatConvNet: Convolutional neural networks for MATLAB, 2014. Software available at
  • [22] J. Wu and J. M. Rehg. CENTRIST: A visual descriptor for scene categorization. PAMI, 33(8):1489–1501, 2011.
  • [23] J. Wu, Y. Zhang, and W. Lin. Towards good practices for action video encoding. In CVPR, pages 2577–2584, 2014.
  • [24] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, pages 3485–3492, 2010.
  • [25] Z. Xu, Y. Yang, and A. G. Hauptmann. A discriminative CNN video representation for event detection. In CVPR, 2015.
  • [26] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning bases of action attributes and parts. In ICCV, pages 1331–1338, 2011.
  • [27] D. Yoo, S. Park, J.-Y. Lee, and I. S. Kweon. Fisher kernel for deep neural activations. arXiv:1412.1628, 2014.
  • [28] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833, 2014.
  • [29] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, pages 2018–2025, 2011.
  • [30] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, pages 487–495, 2014.