A Feature Embedding Strategy for High-level CNN representations from Multiple ConvNets

05/11/2017, by Thangarajah Akilan et al.

Following the rapidly growing use of digital images, automatic image categorization has become a preeminent research area. The field has broadened and adopted many algorithms over time, whereby multi-feature (generally, hand-engineered) image characterization comes in handy to improve accuracy. Recently, in machine learning, it has been shown that features extracted through pre-trained deep convolutional neural networks (DCNNs or ConvNets) can improve classification accuracy. Hence, in this paper, we further investigate a feature embedding strategy that exploits cues from multiple DCNNs. We derive a generalized feature space by embedding three different DCNN bottleneck features, with weights set with respect to their softmax cross-entropy loss. Test outcomes on six object classification data-sets and an action classification data-set show that, regardless of variation in image statistics and tasks, the proposed multi-DCNN bottleneck feature fusion is well suited to image classification and is an effective complement to a single DCNN. Comparisons to existing fusion-based image classification approaches show that the proposed method surpasses the state-of-the-art methods and produces competitive results with fully trained DCNNs as well.




1 Introduction

Traditional classification models using a single feature representation suffer from an inability to handle intra-class variations and global variants such as the color, lighting, and orientation of the image statistics. It is therefore intuitive to fuse multiple features to improve classification accuracy, because multiple features can plausibly create a well-generalized feature space. Researchers in the computer vision community have also shown interest in multiple-feature fusion.

For example, Li et al. [1] utilized the Riemann manifold to combine the features from the covariance matrix of multiple features, and concatenated multiple features to represent object appearance. Meanwhile, Park [2] took the multi-partitioned feature-based classifier (MPFC) to fuse features such as hue-saturation-value (HSV), discrete cosine transform (DCT) coefficients, the wavelet packet transform (WPT), and the Hough transform (HT), with a specific decision-characteristic expertise table of local classifiers. Similarly, Kwon et al. [3] took advantage of multiple features for efficient object tracking, where they dissevered the task into multiple constituents and combined multiple features through sparse principal component analysis (PCA) to select the most important features, by which the appearance variations were captured.

On the other hand, researchers in [4], [5], [6], [7] also found different ways to merge multiple hand-engineered features to improve classification accuracy. Fernando et al. [4] merged hue histograms, color name (CN) descriptors, the scale-invariant feature transform (SIFT), and Color-SIFT, while Gehler and Nowozin [5] achieved some success in improving classification accuracy by combining the basic SIFT feature with eight other features, such as histograms of gradients (HOG), local binary patterns (LBP), and Color-SIFT, using multiple kernel learning (MKL) to combine 49 different kernel matrices. Khan et al. [6] employed multiple cues by individually processing shape and color cues and then combining them by modulating the SIFT shape features with category-specific color attention. They used a standardized multi-scale grid detector with a Harris-Laplace point detector and a blob detector to create the feature description; they then normalized all the patches to a predefined size and computed descriptors for all regions. Dixit et al. [7] embedded features from a CNN with the semantic Fisher vector (SFV), where the SFV is computed as the parameters of a multinomial Gaussian-mixture FV.

In the aforesaid literature, however, the features fused are mainly hand-engineered features, or such features combined with the bottleneck features¹ of a single CNN. Notably, utilizing the bottleneck features extracted through an off-the-shelf pre-trained CNN significantly outperforms a majority of the state-of-the-art baseline methods [8]. Thus, one may ponder the following questions: (i) if multiple CNN features are extracted from different networks, can such features be complementary? If so, (ii) what is an acceptable approach to fusing them so that classification accuracy improves? We address these questions by carrying out experiments on various data-sets with three different pre-trained CNNs as feature extractors, weights based on a cross-entropy loss function as the feature embedding scheme, and softmax as the classifier. The experimental results have strengthened our idea of fusing multiple CNN features to improve image classification accuracy.

¹The high-level feature representation of a ConvNet that is fed into the final classification layer is called the bottleneck feature.

1.1 CNN as Feature Extractor

A DCNN pre-trained on a large image data-set can be exploited as a generic feature extractor through a transfer learning process [9]. Generally, in transfer learning, the parameters (weights and biases) of the first layers of the source (pre-trained DCNN) are transferred to the first layers of the target (new task) network and left without updates during training on the new data-set, while the remaining layers of the target network, known as adaptation layers, are randomly initialized and updated over the training. If a fine-tuning strategy is taken, back-propagation is carried out through the entire network (copied + randomly initialized layers) to calibrate the parameters of the copied layers so that the DCNN responds well to the new task.

In this experiment, we take three pre-trained networks, AlexNet, VGG-16, and Inception-v3, and extract features from their respective penultimate layers. These networks have been trained on ImageNet², where the final logits layer of each network has 1000 output neurons. That final layer is decapitated, and the rest of the DCNN is employed as a fixed feature extractor on the new data-sets, where the number of classes per data-set may differ. The following intermezzo highlights the properties of the DCNNs.

²ImageNet contains more than 14 million images, hand-labeled with the presence/absence of 21,000+ categories.

AlexNet [10] is the winner of the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with 37.5% and 17.0% top-1 and top-5 object classification error rates, respectively. It subsumes 5 convolutional (Conv) layers occasionally interspersed with max-pooling layers, 3 fully-connected (FC) layers, and a final softmax classifier with 1000 output neurons, trained on 1.2 million images of the ImageNet-2010 data-set. The penultimate layer, referred to as FC7, has 4096 output channels. VGG-16 [11] is the winner of the 2014 ILSVRC localization task with 25.3% error and the runner-up of the classification task with 24.8% and 7.5% top-1 and top-5 error rates, respectively. It has 13 Conv layers with max-pooling after each set of two or more Conv layers, 2 FC layers, and a final softmax output layer (16 weight layers in total). The penultimate layer, FC2, has 4096 output channels. Inception-v3 [12] is an improved version of GoogLeNet, the winner of the 2014 ILSVRC classification task. It achieves 21.2% top-1 and 5.6% top-5 error rates on the benchmark ILSVRC 2012 classification challenge validation set. We extract features of the target data-sets from a max-pooling layer named pool_3:0 in the network, which has 2048 output channels.

The rest of this paper is organized as follows. Section 2 expatiates on the main ideas (feature extraction, feature embedding, and classification) via block diagrams and mathematical derivations. Section 3 details the experimental results through quantitative and qualitative analysis. Finally, Section 4 concludes the work with remarks on future directions.

2 System Overview

As described in Section 1.1, using the selected CNN models and their associated learned parameters, a forward-pass operation (without back-propagation) is carried out on the image statistics of the new data-sets to extract bottleneck features. Depending on the size of the data-set, the feature extraction process may take several hours; however, it takes considerably less time than training or fine-tuning the CNN completely. For instance, on an Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz machine with 16.0GB RAM, it takes about 5-6 hours to extract the features of the CIFAR10 data-set through Inception-v3.

Figure 1: The Image Classification System Overview (ConvNet refers to CNN).

2.1 Feature Embedding

As we exploit three different CNNs for feature extraction, as shown in Figure 1, the system must be devised with an appropriate strategy to merge the extracted features toward a classification accuracy gain. The basic approach is to concatenate all the different features into a single vector per sample as $f = [f_{A}, f_{V}, f_{I}]$, so that the final feature space has the dimension $d_{A} + d_{V} + d_{I} = 4096 + 4096 + 2048 = 10240$. Although such a straightforward concatenation often improves classification accuracy over using a single feature, it treats all features equally, so a weak feature may deteriorate the performance of the other, better features. We circumvent that by introducing a weighted feature embedding layer, as shown in Figure 2, where we calculate the cross-entropy loss for each feature individually and update its assigned parameters using the softmax function and a gradient-descent-based optimizer to minimize that loss. On the other hand, this layer functions as an indemnifier for variant image statistics, such as the imaging conditions, viewpoints, and object types of the source and target data. The following snippet describes the mathematical background of the technique.
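For concreteness, the naive concatenation baseline can be sketched as follows, a minimal illustration using random stand-ins for the actual bottleneck features:

```python
import numpy as np

rng = np.random.default_rng(0)
f_alex = rng.standard_normal(4096)   # AlexNet FC7 bottleneck
f_vgg  = rng.standard_normal(4096)   # VGG-16 FC2 bottleneck
f_inc  = rng.standard_normal(2048)   # Inception-v3 pool_3:0 bottleneck

# Naive fusion: stack everything into one long vector per sample.
f = np.concatenate([f_alex, f_vgg, f_inc])
print(f.shape)   # (10240,)
```

The weighted embedding layer exists precisely to avoid giving all 10240 dimensions equal say in the final classifier.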

The softmax function produces a categorical probability distribution when the input is a set of multi-class logits:

$$y_c = \sigma(\mathbf{z})_c = \frac{e^{z_c}}{\sum_{k=1}^{K} e^{z_k}}, \quad c = 1, \ldots, K,$$

where the input $\mathbf{z}$ is a $K$-dimensional vector and the output $\mathbf{y}$ is also a $K$-dimensional vector with real values in the range $(0, 1)$ that add up to 1, as normalization happens via the sum-of-exponents term dividing each exponentiated term. The cost function for the softmax model can be written in terms of likelihood maximization for a given set of parameters $\theta$ as:

$$\underset{\theta}{\operatorname{argmax}}\; \mathcal{L}(\theta \mid \mathbf{t}, \mathbf{z}),$$

where the likelihood can be deduced to a conditional distribution of $\mathbf{t}$ and $\mathbf{z}$ for the same $\theta$ as:

$$\mathcal{L}(\theta \mid \mathbf{t}, \mathbf{z}) = P(\mathbf{t} \mid \mathbf{z}; \theta).$$

Note that the probability that the class is $c$ for a given input $\mathbf{z}$ and parameters $\theta$ can be written as:

$$P(t = c \mid \mathbf{z}; \theta) = \sigma(\mathbf{z})_c = y_c,$$

where $y_c$ is the probability that the class is $c$ given that the input is $\mathbf{z}$. Eventually, maximizing the likelihood can be done by minimizing the negative log-likelihood:

$$\xi(\mathbf{t}, \mathbf{y}) = -\log \mathcal{L}(\theta \mid \mathbf{t}, \mathbf{z}) = -\sum_{c=1}^{K} t_c \log(y_c),$$

where $\xi$ denotes the cross-entropy error function. Then, the derivative of the cost function with respect to the softmax input can be used to update the weights:

$$\frac{\partial \xi}{\partial z_c} = y_c - t_c, \qquad w^{(\mathrm{new})} = w - \eta \frac{\partial \xi}{\partial w},$$

where the learning rate $\eta$ tells us how quickly the cost changes the weights. In the same way, the biases can also be updated, toward the goal of bringing the error function to a local minimum. In this work, we utilize back-propagation based on a gradient descent optimization algorithm to update the weights and biases; gradient descent is the workhorse of learning in neural networks these days. An intricate description of back-propagation can be found in [13]. Thus, we get dimension-reduced logits of the AlexNet, VGG, and Inception bottleneck features, respectively, as shown in Figure 2.
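The derivation above can be checked numerically with a short sketch (a toy three-class example, not the full embedding layer):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift by the max for numerical stability
    return e / e.sum()

def cross_entropy(y, t):
    return -np.sum(t * np.log(y))    # xi(t, y) = -sum_c t_c log(y_c)

z = np.array([2.0, 1.0, 0.1])        # logits
t = np.array([1.0, 0.0, 0.0])        # one-hot target
y = softmax(z)

grad = y - t                          # d xi / d z_c = y_c - t_c
eta = 0.1                             # learning rate
z_new = z - eta * grad                # one gradient-descent step

# The loss decreases after the update, as expected.
print(cross_entropy(softmax(z_new), t) < cross_entropy(y, t))  # True
```

The one-line gradient `y - t` is exactly why the softmax/cross-entropy pairing is so convenient for this kind of lightweight per-feature weighting.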

Figure 2: Weighted Cross-entropy Based Feature Embedding.

Sequentially, the estimated logits are coalesced by a product and fed into the final classification layer.
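One plausible reading of this product fusion (hypothetical, since the exact operator is not spelled out above) is an element-wise product of the per-branch class-probability vectors, renormalized to a distribution:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
K = 10                                   # number of target classes
# Stand-ins for the dimension-reduced logits of the three branches.
p_alex = softmax(rng.standard_normal(K))
p_vgg  = softmax(rng.standard_normal(K))
p_inc  = softmax(rng.standard_normal(K))

fused = p_alex * p_vgg * p_inc           # element-wise product of the branches
fused /= fused.sum()                     # renormalize to a valid distribution
print(np.isclose(fused.sum(), 1.0))      # True
```

An element-wise product acts like a geometric-mean ensemble: a class scores highly only when all three branches agree on it.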

3 Experimental Results

Type Data-set Proposed AlexNet VGG-16 Ince.-v3 Other methods
Object classification CIFAR10 92.00 81.60 85.35 89.57 91.87[14], 85.02[15], 74.5[16]
CIFAR100 74.60 56.30 67.26 69.86 72.60[17], 66.64[14]
Caltech101 95.65 90.15 91.31 93.57 83.60[2], 82.10[5], 76.1[6]
Caltech256 87.30 69.22 79.30 83.75 60.97[7], 50.80[5]
MIT67 77.38 53.88 66.41 76.04 70.72[18], 65.10[7]
Sun397 55.22 45.18 47.87 49.41 54.30[18], 38.00[19]
Action classification Pascal VOC 2012 82.50 63.39 71.13 79.98 70.20[9], 69.60 OXFORD[20]
Table 1: Comparison of the results (top-1 accuracy in %).

Experiments were carried out on 6 different object classification data-sets, CIFAR-10 and CIFAR-100 [21], MIT67 [22], Caltech101 and Caltech256 (http://www.vision.caltech.edu/Image_Datasets/Caltech101/), and Sun397 (http://groups.csail.mit.edu/vision/SUN/), as well as an action classification data-set, the Pascal VOC 2012 [20]. Three samples from each data-set are shown in Figure 3, while Table 2 summarizes all the data-sets. In Pascal VOC 2012, as the action boundaries were given, we extracted the action statistics within the boundaries, zero-padded them to make their dimensions spatially square, and resized them to meet the requirements of the employed CNN architectures. For the other data-sets, whole-size images were taken and only resized to meet the networks' input-layer requirements.
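The zero-padding step for the VOC action crops can be sketched as follows (a minimal helper; `pad_to_square` is our illustrative name, and the subsequent resize to the network input size is omitted):

```python
import numpy as np

def pad_to_square(img):
    """Zero-pad an H x W x C image so its spatial dimensions become square."""
    h, w, c = img.shape
    s = max(h, w)
    out = np.zeros((s, s, c), dtype=img.dtype)
    top, left = (s - h) // 2, (s - w) // 2
    out[top:top + h, left:left + w] = img   # center the original crop
    return out

crop = np.ones((120, 80, 3), dtype=np.uint8)   # a tall action crop
square = pad_to_square(crop)
print(square.shape)   # (120, 120, 3)
```

Padding before resizing preserves the aspect ratio of the cropped action region, which a direct resize to the square network input would distort.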

The results of the proposed bottleneck feature embedding are compared in Table 1 with existing algorithms. The table also lists the performance of each single CNN bottleneck feature without any feature fusion for quantitative analysis, while Figure 4 shows an overall performance comparison, in terms of a box-plot, of the fused feature against the best results of the other methods chosen from Table 1. From these comparisons, one can see that the proposed feature embedding improves classification accuracy by 1%-2% in most cases, without any data augmentation.

Figure 3: Illustration of different data-set statistics: (a). Pascal VOC 2012 (riding horse, using computer, ridding bike), (b). Sun397 (airplane cabin, cafeteria, dam), (c). MIT67 (bakery, children room, museum), (d). Caltech101 (wrench, strawberry, wild cat), (e). Caltech256 (baseball bat, calculator, firetruck), (f). CIFAR10 (dog, horse, airplane), (g). CIFAR100 (insects, household furniture, large natural outdoor scenes).
Data-set No. of classes Train. samples Test samples Ref.
CIFAR10 10 50,000 10,000 [21]
CIFAR100 100 50,000 10,000 [21]
Caltech101 101 6,076 2,601 [23]
Caltech256 256 21,363 9,146 [24]
MIT67 67 5,360 1,340 [22]
Sun397 397 59,550 10,919 [19]
Pascal VOC 10 4,588 4,569 [20]
Table 2: Summary of the data-sets.

Note that in Table 1, [14] uses data augmentation + a latent model ensemble with a single CNN feature; [15], [16] and [17] do not use any feature fusion; [2], [5], [6], [7] and [19] use fusion of multiple hand-crafted features, or of hand-crafted feature(s) with a single CNN feature; [18] uses CNN features extracted through an AlexNet pre-trained on Places205/365; similarly, [9] uses CNN features extracted by an AlexNet pre-trained on 1512 classes of ImageNet (in our case, the AlexNet used is pre-trained on 1000 classes of ImageNet).

Figure 4: Performance comparison.

4 Conclusion

An approach to fuse the bottleneck features of multiple CNNs through weighted cross-entropy is presented, where a set of three different pre-trained CNNs is exploited as feature extractors. The test results on various data-sets show that it outperforms the state-of-the-art hand-crafted feature fusion methods and produces very competitive results with fully trained (data-set specific) DCNNs as well. This accords with our hypothesis that features from multiple CNNs can be complementary to each other, and that their fusion can be a generalized, appearance-invariant representation of images.

Although the proposed feature embedding enhances classification accuracy, how to fuse multiple features is still an open problem. In this work, our goal was to analyze whether accuracy improves when multiple CNN bottleneck features are fused as proposed. As future work, metric learning approaches can be exploited to capture facets of the CNN features that differentiate classes and inter-class variations. Hence, this work can be extended to dynamic texture and video activity detection and classification as well.


  • [1] X. Li, W. Hu, Z. Zhang, and X. Zhang, “Robust visual tracking based on an effective appearance model,” Computer Vision - ECCV 2008: 10th European Conference on Computer Vision, pp. 396–408, 2008.
  • [2] D.-C. Park, “Multiple feature-based classifier and its application to image classification,” IEEE International Conference on Data Mining Workshops, pp. 65–71.
  • [3] J. Kwon and K. M. Lee, “Visual tracking decomposition,” in CVPR, pp. 1269–1276, 2010.
  • [4] T.-B. Fernando, E. Fromont, D. Muselet, and M. Sebban, “Discriminative feature fusion for image classification,” International Conference on Pattern Recogni. (ICPR), pp. 3434–3441, 2012.
  • [5] P.-V. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” in ICCV, 2009.
  • [6] F.-S. Khan, J. van de Weijer, and M. Vanrell, “Modulating shape features by color attention for object recognition,” International Journal of Computer Vision (IJCV), vol. 98, pp. 49–64, 2012.
  • [7] M. Dixit, S. Chen, D. Gao, N. Rasiwasia, and N. Vasconcelos, “Scene classification with semantic fisher vectors,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2974–2983, June 2015.
  • [8] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “Cnn features off-the-shelf: An astounding baseline for recognition,” in CVPR Workshops, June 2014.
  • [9] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’14, pp. 1717–1724, IEEE Computer Society, 2014.
  • [10] A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classification with deep convolutional neural networks,” in In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  • [11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” International Conference on Learning Representations, vol. abs/1409.1556, 2014.
  • [12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” CoRR, vol. abs/1409.4842, 2014.
  • [13] M. A. Nielsen, “Improving the way neural networks learn,” in Neural Networks and Deep Learning, Determination Press, 2015.
  • [14] M. Sun, T. X. Han, X. Xu, M.-C. Liu, and A. Khodayari-Rostamabad, “Latent model ensemble with auto-localization,” in Proceedings of the 23rd International Conference on Pattern Recognition (ICPR-16), 2016.
  • [15] J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” Advances in Neural Information Processing Systems, vol. 25, pp. 2951–2959, 2012.
  • [16] K. Yu and T. Zhang, “Improved local coordinate coding using local tangents,” ICML, 2010.
  • [17] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Prabhat, and R. Adams, “Scalable bayesian optimization using deep neural networks,” in JMLR Workshop and Conference Proceedings, pp. 2171–2180, 2015.
  • [18] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” in Advances in Neural Information Processing Systems, vol. 27, pp. 487–495, Curran Associates, Inc., 2014.
  • [19] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in CVPR, pp. 3485–3492, 2010.
  • [20] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” International Journal of Computer Vision (IJCV), vol. 111, no. 1, pp. 98–136, 2015.
  • [21] A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009.
  • [22] A. Quattoni and A. Torralba, “Recognizing indoor scenes,” in CVPR, 2009.
  • [23] L. Fei-Fei, L.-R. Fergus, and P. Perona, “Learning generative visual models from few training examples: an incremental bayesian approach tested on 101 object categories,” in CVPR, 2004.
  • [24] G. Griffin, A. Holub, and P. Perona, “Caltech-256 object category dataset,” Caltech Technical Report 7694, 2007.