1 Introduction
In recent years, image classification, as an important part of image processing, has been a research focus in computer vision. Support Vector Machines (SVMs) using Spatial Pyramid Matching (SPM) have been highly successful in image classification. The SPM model, as a feature extraction method, extends the bag-of-words (BoW) model, a document representation method from the field of information retrieval. The BoW model treats a document as a combination of keywords, ignores the syntax and word order of the text, and matches documents by the frequencies of keywords. Researchers applied this idea to image processing and introduced the bag-of-features (BoF) model, which represents an image as a histogram of its local features.
Unfortunately, the BoF method is incapable of capturing shapes or locating an object, because it discards the spatial order of local features. To overcome this issue, spatial pyramid matching (SPM) was proposed as the most successful extension of the BoF model, incorporating geometric correspondence search, discriminative codebook learning and a generative part model. The SPM method partitions an image into sub-regions at different scales and computes a BoF histogram in each sub-region to form a vector representation of the image. When only a single scale (level 0) is used, SPM reduces to BoF. Thanks to this use of spatial information, SPM is more effective than BoF and has shown very promising performance on many image classification problems.
Figure 1 is a typical flowchart that illustrates the SPM approach. Part (a) is the flow chart of the linear SPM methods, comprising SIFT extraction, sparse coding, spatial pooling and classification. Part (b) is our flowchart, which is based on the sparse coding model. First, in the descriptor layer, feature points such as SIFT descriptors are extracted from the input image. Then a codebook is applied to quantize each descriptor and obtain the code layer. In the next layer, the SPM layer, the codes inside each sub-region are pooled together, by averaging and normalizing, into a histogram. Finally, the histograms of all sub-regions are concatenated to form the final representation of the image for the subsequent classification task.
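For concreteness, the dimensionality of the final concatenated representation depends only on the codebook size and the pyramid depth; a minimal sketch (the function name `spm_dim` is our own illustration, not from [1]):

```python
def spm_dim(K, levels=3):
    """Length of the final SPM vector: one K-bin histogram per
    sub-region, with 4**l sub-regions at pyramid level l."""
    return K * sum(4 ** l for l in range(levels))

# e.g. a 1024-entry codebook with a 3-level pyramid gives
# 1024 * (1 + 4 + 16) = 21504 dimensions
```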
It has been empirically found that traditional SPM has to use classifiers with a particular type of nonlinear Mercer kernel, e.g. the intersection kernel or the Chi-square kernel, to achieve good performance. Such nonlinear classifiers are expensive, bearing O(n²)–O(n³) computational complexity in training and O(n) in testing for SVMs, where n is the number of support vectors, which makes them impractical for large-scale real-world applications. To settle this issue, Yang et al. [1] proposed linear spatial pyramid matching using sparse coding (ScSPM), which achieves state-of-the-art performance on several databases in image categorization experiments. In the following years, several improved methods based on ScSPM were introduced, such as LLC [2] and LRScSPM [6], which mainly take the locality of features into account in the sparse-coding-based feature quantization process to improve the accuracy of image classification.
In this paper, we propose an improved image classification algorithm based on ScSPM, which applies nonconvex and nonnegative properties to the sparse representation of images to improve classification accuracy. It is well known that there are two fundamental steps in image representation. One is coding, where sparse coding is now widely used, and the other is spatial pooling, which produces the feature representation of the image. However, traditional sparse coding followed by multi-scale max pooling places no constraint on the sign of the coding coefficients. To avoid the loss of information in the subsequent max pooling, the nonzero components in our sparse coding model are constrained to be nonnegative, as in [3]. Considering that the sparsity of the image representation is also essential [1, 2, 6], we employ the nonconvex algorithm ISD [5] to obtain sparser representations. In sum, we propose nonconvex and nonnegative sparse coding, and we call the improved ScSPM "NNScSPM". The remainder of the paper is organized as follows. Section 2 introduces the basic idea of ScSPM and describes our proposed NNScSPM. Section 3 presents experimental results. Finally, Section 4 concludes the paper.
2 Linear SPM Using Nonconvex and Nonnegative Sparse Codes of SIFT
There are two basic feature extraction methods in image classification: vector quantization and sparse coding. In this section, we first introduce these two models, and then propose our model and algorithm.
2.1 Vector Quantization (VQ)
Let X = [x_1, …, x_M]ᵀ ∈ ℝ^{M×D} be a set of M SIFT appearance descriptors in a D-dimensional feature space. The vector quantization (VQ) method applies the K-means algorithm [11] to solve the following problem:

min_V ∑_{m=1}^{M} min_{k=1,…,K} ‖x_m − v_k‖²   (1)
where V = [v_1, …, v_K]ᵀ are the K cluster centers to be found, called the codebook or dictionary, and ‖·‖ denotes the ℓ2 norm of vectors. The optimization problem can be rewritten as a matrix factorization problem with cluster membership indicators U = [u_1, …, u_M]ᵀ:

min_{U,V} ∑_{m=1}^{M} ‖x_m − u_m V‖²   s.t.  Card(u_m) = 1, |u_m| = 1, u_m ≥ 0, ∀m   (2)

where Card(u_m) = 1 is a cardinality constraint, meaning that only one element of u_m is nonzero; u_m ≥ 0 means that all the elements of u_m are nonnegative; and |u_m| is the ℓ1 norm of u_m, i.e., the summation of the absolute values of its elements. After the optimization problem (2) is solved, the index of the only nonzero element in u_m indicates which cluster the vector x_m belongs to.
The objective of Eq. (2) is nonconvex, and the minimization is usually performed alternately: with respect to V with the labels U fixed, and with respect to U with V fixed [11]. This phase is called training. After that, the coding phase follows: the learned V is applied to a new set of descriptors X, and in this case problem (2) is solved with respect to U only.
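In the coding phase, VQ thus reduces to a nearest-codeword search. A minimal sketch, assuming a codebook already trained by K-means (the function name is our own illustration):

```python
import numpy as np

def vq_code(X, V):
    """Hard-assignment (VQ) coding: each descriptor activates only its
    nearest codeword, so each u_m satisfies Card(u_m) = 1, u_m >= 0.

    X: (M, D) descriptors, V: (K, D) codebook from K-means.
    Returns U: (M, K) with exactly one nonzero entry (= 1) per row.
    """
    # squared Euclidean distances between every descriptor and codeword
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    U = np.zeros((X.shape[0], V.shape[0]))
    U[np.arange(X.shape[0]), d2.argmin(axis=1)] = 1.0
    return U
```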
2.2 Sparse Coding
Because the constraint Card(u_m) = 1 is so restrictive, making each vector x_m respond to only one element of the codebook, the reconstruction of X may be too coarse. To overcome this issue, Yang et al. [1] relaxed the constraint by instead putting an ℓ1-norm regularization on u_m, which enforces u_m to have a small number of nonzero elements. The formulation then turns into another problem, known as sparse coding (SC):

min_{U,V} ∑_{m=1}^{M} ‖x_m − u_m V‖² + λ|u_m|   s.t.  ‖v_k‖ ≤ 1, ∀k   (3)

where λ is a trade-off parameter balancing the fidelity term and the sparse regularization term. To avoid trivial solutions, a unit ℓ2-norm constraint on v_k is typically applied. Normally, the dictionary V is an overcomplete basis set, i.e. K > D. Solving (3) is similar to solving problem (2), consisting of a training phase and a coding phase.
2.3 Our model and algorithm for Sparse Coding
2.3.1 Our Model: Nonconvex and Nonnegative Sparse Coding
Because of its less restrictive constraint, SC coding can achieve a much lower reconstruction error than VQ coding. Research in image statistics has clearly disclosed that natural image patches are sparse signals, and sparse representations help capture salient properties of images.
The SC model is based on the ℓ1 norm, a popular sparsity-enforcing regularization due to its convexity. However, nonconvex sparse regularizations, such as the widely used ℓp norm (0 < p < 1), prefer an even sparser solution. Although many works based on nonconvex penalization exist for image processing, to the best of our knowledge there are only a few specific works for image classification. The major difficulties with the existing nonconvex algorithms are that the global optimal solution cannot yet be efficiently computed, the behavior of a local solution is hard to analyze and, more seriously, prior structural information about the solution is hard to incorporate.
The SC model in [1], followed by max pooling, uses no nonnegativity constraint. It is well known that, to satisfy the optimization, negative coefficients are usually needed. Because the nonzero components typically reflect remarkable feature information, max pooling brings a loss in terms of these negative components and thereby reduces the accuracy of image classification [3].
Taking these two observations into account, this paper proposes a nonconvex and nonnegative sparse model based on convex sparse coding, together with a multistage convex relaxation algorithm for it via iterative support detection [5], whose solutions have been proved in theory to be sparser than the plain ℓ1 solution in many cases. Our new sparse model is a truncated ℓ1 model with nonnegative constraints, as follows:

min_{U,V} ∑_{m=1}^{M} ‖x_m − u_m V‖² + λ ∑_i w_i |u_{m,i}|   s.t.  ‖v_k‖ ≤ 1, ∀k;  u_m ≥ 0, ∀m   (4)

where the w_i ∈ {0, 1} are the weights: the components of u_m corresponding to zero weights are removed from the ℓ1 penalty and are not penalized [5].

While one commonly considers the plain ℓ1 model, i.e., all the weights equal to 1, in practice we may be able to obtain more prior information beyond the sparsity assumption. For example, we might know the locations of certain coefficients with large nonzero absolute values. In such cases, we should not penalize these coefficients in the ℓ1 regularization; instead, we remove them from the ℓ1 norm (setting the corresponding weights to 0) and use a truncated ℓ1 norm [5].
2.3.2 Our Algorithm: ISD-YALL1
The difficulty is that this kind of partial support information is not available beforehand in practice. Correspondingly, in this paper we take advantage of the idea of Iterative Support Detection (ISD for short) [5] to extract reliable information about the true solution and set up the weights accordingly. The procedure is an alternating optimization, which repeats the following two steps:

Step 1: Optimize u with the weights w fixed (initially all w_i = 1): this is a convex problem in u.

Step 2: Determine the value of w according to the current u. This value of the weights is used in Step 1 of the next iteration.
For Step 1, the truncated nonnegative ℓ1-norm optimization problem can be efficiently solved via the YALL1 algorithm [10]. For Step 2, the locations of large nonzeros are estimated from the solution of the last (truncated) ℓ1-norm optimization problem via the support detection procedure. Our algorithm is named ISD-YALL1 and is described as follows:

Algorithm: The Proposed ISD-YALL1 Algorithm
Given X extracted from an image and the codebook V:
1. Set the iteration count s = 0 and initialize the set of detected entries I⁽⁰⁾ = ∅.
2. While the stopping condition is not satisfied:
 (a) T ← (I⁽ˢ⁾)ᶜ;
 (b) solve the truncated ℓ1 minimization for u⁽ˢ⁾ (Step 1: using the YALL1 method);
 (c) support detection: obtain I⁽ˢ⁺¹⁾ using u⁽ˢ⁾ as the reference (Step 2: using the threshold-ISD strategy);
 (d) s ← s + 1.
Here (I⁽ˢ⁾)ᶜ denotes the complement of I⁽ˢ⁾ in the universal index set. The support, i.e. the set of nonzeros of large magnitudes, is estimated as follows [5]:
I⁽ˢ⁺¹⁾ := { i : |u_i⁽ˢ⁾| > ε⁽ˢ⁾ }   (5)

where ε⁽ˢ⁾ > 0 is a threshold.
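The alternation above can be sketched in Python. This is an illustrative implementation only: it assumes a simple projected proximal-gradient solver in place of YALL1 for step (b), and an ad-hoc threshold rule in step (c) where the paper's threshold is set empirically:

```python
import numpy as np

def isd_nn_code(x, V, lam=0.3, outer=4, inner=200):
    """Nonconvex, nonnegative coding of one descriptor x by iterative
    support detection: alternate (b) a weighted nonnegative l1 problem
    and (c) threshold-based support detection.
    """
    A, b = V @ V.T, V @ x                 # Gram matrix and correlations
    Lf = 2.0 * np.linalg.norm(A, 2)       # Lipschitz constant of the gradient
    K = V.shape[0]
    w = np.ones(K)                        # initially every coefficient is penalized
    u = np.zeros(K)
    for _ in range(outer):
        # (b) weighted nonnegative l1: prox step is a one-sided soft-threshold
        for _ in range(inner):
            z = u - 2.0 * (A @ u - b) / Lf
            u = np.maximum(z - lam * w / Lf, 0.0)
        # (c) support detection: entries above the threshold leave the penalty
        eps = u.max() / 4.0               # illustrative threshold, not the paper's rule
        w = (u <= eps).astype(float)      # weight 0 on the detected support
    return u
```

With the detected support unpenalized, large coefficients are recovered without the shrinkage bias of the plain ℓ1 penalty.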
In the pooling phase, because of our nonnegative model, the pooling function is defined as z_j = max{u_{1j}, u_{2j}, …, u_{Mj}}, where z_j is the jth element of z, u_{ij} is the matrix element at the ith row and jth column of U, and M is the number of local descriptors in the region. The pooled features z are then used in a linear SVM classifier for image classification.
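The pool-and-concatenate step over the pyramid can be sketched as follows (an illustrative sketch assuming descriptor coordinates normalized to [0, 1); the function name is ours):

```python
import numpy as np

def spm_pool(codes, positions, levels=3):
    """Max-pool nonnegative codes inside each pyramid sub-region and
    concatenate the results into the final image representation.

    codes:     (M, K) nonnegative sparse codes, one row per descriptor
    positions: (M, 2) descriptor coordinates normalized to [0, 1)
    """
    M, K = codes.shape
    parts = []
    for level in range(levels):
        n = 2 ** level                           # n x n grid at this level
        cells = np.clip((positions * n).astype(int), 0, n - 1)
        for i in range(n):
            for j in range(n):
                mask = (cells[:, 0] == i) & (cells[:, 1] == j)
                # z_j = max_m u_mj over descriptors in this sub-region
                pooled = codes[mask].max(axis=0) if mask.any() else np.zeros(K)
                parts.append(pooled)
    z = np.concatenate(parts)
    return z / (np.linalg.norm(z) + 1e-12)       # L2 normalization
```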
3 Experiment Results
In this section, we report comparison results between our proposed nonconvex and nonnegative model and ScSPM on two widely used public datasets: 15 Scenes [19] and the UIUC-Sport dataset [20]. The parameter settings of the experiments are also analyzed in this section. Besides our own implementations, we also quote some results directly from the literature, especially those of ScSPM from [1] and [6]. All the experiments were performed under Windows 7 and Matlab (R2013b) running on a desktop with an Intel(R) i5-4590 CPU (3.3GHz) and 8GB of memory.
3.1 Parameters Setting
The local feature descriptor is essential to image representation. In our work, we adopt the widely used SIFT feature due to its excellent performance in image classification. For a fair comparison with others, we use 50,000 SIFT descriptors extracted from random patches to train the codebook in the training phase, the same as in [1]. In the sparse coding phase, the two most important parameters are (i) the sparsity regularization parameter λ: the performance of ScSPM [1] is best when it lies in 0.2–0.4, and we follow the same setting of the interval (0.2, 0.4); and (ii) the threshold value ε, which is used for computing the weights of the coefficients and is set empirically. In our experiments, we compare our results to ScSPM and to KSPM, which uses spatial-pyramid histograms and Chi-square kernels [1]. NScSPM denotes the nonnegative sparse model, which uses only the nonnegative constraint and not the nonconvex regularization.
3.2 15 Scene Data Set
The 15 Scenes dataset contains 15 categories and 4485 images in all, with 200 to 400 images per category. The image content is diverse, containing not only indoor scenes, such as bedroom and kitchen, but also outdoor scenes, such as buildings and country. To compare with other work, we randomly select 100 images per class as training data and use the rest as test data. The detailed comparison results are shown in Table 1.
Table 1. Classification results on the 15 Scenes dataset.

Method       Average Classification Rate (%)
KSPM         76.73 ± 0.65
ScSPM [1]    80.28 ± 0.93
NScSPM       81.30 ± 0.53
NNScSPM      81.92 ± 0.42
3.3 UIUCSport Data Set
UIUC-Sport contains 8 categories and 1792 images in all, and the number of images per category ranges from 137 to 250. The 8 categories are badminton, bocce, croquet, polo, rock climbing, rowing, sailing and snowboarding. We randomly select 70 images from each class as training data and use the rest as test data. We list our results in Table 2.
Table 2. Classification results on the UIUC-Sport dataset.

Method       Average Classification Rate (%)
ScSPM        82.85 ± 0.62
NScSPM       83.53 ± 0.72
NNScSPM      84.13 ± 0.37
From the results, we observe two points: (i) the nonconvex and nonnegative properties indeed play an important role in image classification, and (ii) our proposed NNScSPM is superior to KSPM and ScSPM on the 15 Scenes and UIUC-Sport datasets.
4 Conclusion
In this paper we propose a nonconvex and nonnegative sparse coding model for image classification, which is efficiently solved by the proposed ISD-YALL1 algorithm. The nonconvex property reflects the sparsity of the image, and the nonnegative property avoids the information loss in max spatial pooling. Our numerical experiments effectively demonstrate its better performance.
References

[1] Yang J, Yu K, Gong Y, et al. Linear spatial pyramid matching using sparse coding for image classification[C]//Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009: 1794–1801.
[2] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
[3] Chunjie Zhang, Jing Liu, Qi Tian, Changsheng Xu, Hanqing Lu, and Songde Ma, "Image classification by non-negative sparse coding, low-rank and sparse decomposition," in Computer Vision and Pattern Recognition, 2011, pp. 1673–1680.

[4] He L, Wang Y. Iterative support detection based split Bregman method for wavelet frame based image inpainting[J]. IEEE Transactions on Image Processing, 2014, 23(12): 5470–5485.
[5] Wang Y, Yin W. Sparse signal reconstruction via iterative support detection[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 462–491.
[6] Gao S, Tsang I W, Chia L T, et al. Local features are not lonely – Laplacian sparse coding for image classification[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 3555–3561.
[7] Lee H, Battle A, Raina R, et al. Efficient sparse coding algorithms[C]//Advances in Neural Information Processing Systems. 2006: 801–808.
[8] Shen X, Wu Y. A unified approach to salient object detection via low rank matrix recovery[C]//Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012: 853–860.
[9] Ji Z, Theiler J, Chartrand R, et al. SIFT-based sparse coding for large-scale visual recognition[J]. SPIE Defense, Security, and Sensing, 2013.
[10] Yang J, Zhang Y. Alternating direction algorithms for ℓ1-problems in compressive sensing[J]. ArXiv e-prints, 2009.
[11] Hastie T, Tibshirani R, Friedman J, et al. The Elements of Statistical Learning[M]. New York: Springer, 2009.
[12] Boureau Y L, Bach F, LeCun Y, et al. Learning mid-level features for recognition[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 2559–2566.
[13] Mairal J, Bach F, Ponce J. Sparse modeling for image and vision processing[J]. arXiv preprint arXiv:1411.3230, 2014.
[14] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91–110.
[15] Csurka G, Dance C, Fan L, et al. Visual categorization with bags of keypoints[C]//Workshop on Statistical Learning in Computer Vision, ECCV. 2004, 1(1–22): 1–2.
[16] Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos[C]//Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. IEEE, 2003: 1470–1477.
[17] Yang J, Yu K, Huang T. Supervised translation-invariant sparse coding[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 3517–3524.
[18] Grauman K, Darrell T. The pyramid match kernel: Discriminative classification with sets of image features[C]//Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on. IEEE, 2005, 2: 1458–1465.
[19] Lazebnik S, Schmid C, Ponce J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[C]//Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. IEEE, 2006, 2: 2169–2178.
[20] Li L J, Fei-Fei L. What, where and who? Classifying events by scene and object recognition[C]//Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE, 2007: 1–8.