Linear Spatial Pyramid Matching Using Non-convex and Non-negative Sparse Coding for Image Classification

04/27/2015
by   Chengqiang Bao, et al.

Recently, sparse coding has been highly successful in image classification, mainly due to its capability of incorporating the sparsity of image representation. In this paper, we propose an improved sparse coding model based on linear spatial pyramid matching (SPM) and Scale Invariant Feature Transform (SIFT) descriptors. The novelty is the simultaneous non-convex and non-negative characteristics added to the sparse coding model. Our numerical experiments show that the improved approach using non-convex and non-negative sparse coding is superior to the original ScSPM [1] on several typical databases.


1 Introduction

Figure 1: Schematic comparison of the original ScSPM with our proposed NNScSPM, which is an improved algorithm based on the former.

In recent years, image classification, as an important part of image processing, has been a research focus in computer vision. Support Vector Machines (SVMs) using Spatial Pyramid Matching (SPM) have been highly successful in image classification. The SPM model, as a feature extraction method, is extended from the bag-of-words (BoW) model, a document representation method from the field of information retrieval. This model treats documents as combinations of keywords, ignores the syntax and word order of the texts, and matches documents by the frequencies of their keywords. Researchers have applied this method to image processing, which gave rise to the bag-of-features (BoF) model, which represents an image as a histogram of its local features.

Unfortunately, the method is incapable of capturing shapes or locating an object, because it discards the spatial order of local features. To overcome this issue, spatial pyramid matching (SPM) was proposed as the most successful extension of the BoF model, incorporating geometric correspondence search, discriminative codebook learning and the generative part model. The SPM method partitions an image into sub-regions at different scales and computes a BoF histogram in each sub-region to form a vector representation of the image. When only a single scale (the whole image) is used, SPM reduces to BoF. Owing to its use of spatial information, the SPM method is more effective than BoF and has shown very promising performance on many image classification problems.

Figure 1 is a typical flowchart that clearly illustrates the SPM approach. Part (a) shows the flow of the linear SPM methods: SIFT extraction, sparse coding, spatial pooling and classification. Part (b) is the flowchart of our method, which is based on the sparse coding model. First, in the descriptor layer, feature points such as SIFT descriptors are extracted from the input image. Then a codebook is applied to quantize each descriptor and obtain the code layer. In the next layer, the SPM layer, the multiple codes inside each sub-region are pooled together by averaging and normalizing into a histogram. Finally, the histograms from all sub-regions are concatenated to generate the final representation of the image for the subsequent classification task.
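To make the SPM layer concrete, here is a minimal NumPy sketch of pooling codes over pyramid sub-regions and concatenating the results. It is an illustration only, not the authors' implementation: the function name `spm_pool`, the synthetic codes and positions, and the use of max pooling in every cell (the max pooling used later in this paper, rather than the averaging of the histogram variant) are our own assumptions.

```python
import numpy as np

def spm_pool(codes, xy, levels=(1, 2, 4)):
    """codes: (n, K) codes of n descriptors; xy: (n, 2) positions in [0, 1)^2.

    Pools the codes inside every cell of a 1x1, 2x2 and 4x4 grid and
    concatenates the pooled vectors into one image representation.
    """
    feats = []
    for s in levels:                                  # s x s grid at this level
        cell = np.clip((xy * s).astype(int), 0, s - 1)
        for i in range(s):
            for j in range(s):
                mask = (cell[:, 0] == i) & (cell[:, 1] == j)
                pooled = (codes[mask].max(axis=0)     # max pooling in the cell
                          if mask.any() else np.zeros(codes.shape[1]))
                feats.append(pooled)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
codes = rng.random((50, 8))   # synthetic codes for 50 descriptors, K = 8
xy = rng.random((50, 2))      # synthetic descriptor positions
f = spm_pool(codes, xy)       # length K * (1 + 4 + 16) = 168
```

The final vector grows linearly with the number of pyramid cells, which is why the pyramid is usually kept to three levels.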

It has been empirically found that traditional SPM has to use classifiers with a particular type of nonlinear Mercer kernel, e.g. the intersection kernel or the Chi-square kernel, to achieve good performance. Accordingly, nonlinear classifiers are very expensive, bearing $O(n^2)$ to $O(n^3)$ computational complexity in training and $O(n)$ in testing for the SVM ($n$ is the number of support vectors), so they are impractical for large-scale real-world applications.

To settle this issue, Yang et al. [1] proposed a method called linear spatial pyramid matching using sparse coding (ScSPM), which achieves state-of-the-art performance on several databases in image categorization experiments. In the following years, improved methods based on ScSPM were introduced, such as LLC [2] and LR-ScSPM [6], which mainly take into account the locality of features in the sparse-coding-based feature quantization process to improve the accuracy of image classification.

In this paper, we propose an improved image classification algorithm based on ScSPM, which applies non-convex and non-negative properties to sparse image representation to improve the accuracy of image classification. It is well known that there are two fundamental steps in image representation. One is coding, where sparse coding is now widely used, and the other is spatial pooling, which produces the feature representation of the image. However, traditional sparse coding followed by multi-scale max pooling places no constraint on the sign of the coding coefficients. To avoid the loss of information in the subsequent max pooling, the non-zero components are constrained to be non-negative in our sparse coding model, as in [3]. Considering that the sparsity of image representations is also essential [1, 2, 6], we employ the non-convex algorithm ISD [5] to obtain sparser representations. In sum, we propose non-convex and non-negative sparse coding, and we call the improved ScSPM NNScSPM.

The remainder of the paper is organized as follows. Section 2 introduces the basic idea of ScSPM and describes our proposed NNScSPM. Section 3 presents the experimental results. Finally, Section 4 concludes the paper.

2 Linear SPM Using Non-convex and Non-negative SIFT Sparse Codes

There are two basic feature extraction methods in image classification: vector quantization and sparse coding. In this section, we first introduce these two models, and then propose our model and algorithm.

2.1 Vector Quantization (VQ)

Let $X = [x_1, \ldots, x_M]^\top \in \mathbb{R}^{M \times D}$ be a set of SIFT appearance descriptors in a $D$-dimensional feature space, i.e. $x_m \in \mathbb{R}^D$. The vector quantization (VQ) method applies the K-means algorithm [11] to solve the following problem:

$$\min_{V} \sum_{m=1}^{M} \min_{k=1,\ldots,K} \| x_m - v_k \|_2^2, \qquad (1)$$

where $V = [v_1, \ldots, v_K]^\top$ are the $K$ cluster centers to be found, called the codebook or dictionary, and $\|\cdot\|_2$ denotes the $\ell_2$-norm of vectors. The optimization problem can be rewritten into a matrix factorization problem with cluster membership indicators $U = [u_1, \ldots, u_M]^\top$:

$$\min_{U, V} \sum_{m=1}^{M} \| x_m - u_m V \|_2^2 \quad \text{s.t.} \quad \mathrm{card}(u_m) = 1,\ |u_m| = 1,\ u_m \succeq 0,\ \forall m, \qquad (2)$$

where $\mathrm{card}(u_m) = 1$ is a cardinality constraint, meaning that only one element of $u_m$ is nonzero; $u_m \succeq 0$ means that all the elements of $u_m$ are nonnegative, and $|u_m|$ is the $\ell_1$-norm of $u_m$, i.e., the summation of the absolute value of each element in $u_m$. After the optimization problem (2) is solved, the index of the only nonzero element in $u_m$ indicates which cluster the vector $x_m$ belongs to.

The objective of Eq. (2) is non-convex, and the minimization is usually performed alternatively: with respect to the indicators $U$ with $V$ fixed, and with respect to $V$ with $U$ fixed [11]. This phase is called training. After that comes the coding phase: the learned $V$ is applied to a new set of descriptors $X$, and problem (2) is solved with respect to $U$ only.
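As a concrete illustration of the VQ coding phase, the following NumPy sketch assigns each descriptor to its nearest codeword, producing the one-hot indicator vectors described above. The function name `vq_code` and the toy data are our own assumptions, not the authors' code.

```python
import numpy as np

def vq_code(X, V):
    """X: (M, D) descriptors, V: (K, D) codebook -> (M, K) one-hot codes u_m."""
    # squared distances between every descriptor and every codeword
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    U = np.zeros((X.shape[0], V.shape[0]))
    # card(u_m) = 1, u_m >= 0, |u_m| = 1: one unit entry at the nearest center
    U[np.arange(X.shape[0]), d2.argmin(axis=1)] = 1.0
    return U

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1]])  # toy descriptors
V = np.array([[0.0, 0.0], [1.0, 1.0]])              # toy codebook, K = 2
U = vq_code(X, V)
```

Each row of `U` contains exactly one 1 at the index of the nearest cluster center, which is the "hard assignment" that sparse coding will later relax.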

2.2 Sparse Coding

Because of the restrictive constraint $\mathrm{card}(u_m) = 1$, which makes $u_m$ respond to only one element of the codebook, the reconstruction of $X$ may be too coarse. To overcome this issue, Yang et al. [1] relaxed the constraint by instead putting an $\ell_1$-norm regularization on $u_m$, which enforces $u_m$ to have a small number of nonzero elements. The formulation then turns into another problem, known as sparse coding (SC):

$$\min_{U, V} \sum_{m=1}^{M} \| x_m - u_m V \|_2^2 + \lambda |u_m| \quad \text{s.t.} \quad \| v_k \|_2 \le 1,\ \forall k, \qquad (3)$$

where $\lambda$ is a trade-off parameter balancing the fidelity term and the sparse regularization term. To avoid trivial solutions, a unit $\ell_2$-norm constraint on each $v_k$ is typically applied. Normally, the dictionary $V$ is an overcomplete basis set, i.e. $K > D$. Solving (3) is similar to solving problem (2), consisting of a training phase and a coding phase.
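For illustration, the coding phase of (3) with $V$ fixed is a standard lasso problem; the sketch below solves it with plain ISTA (iterative soft thresholding). This is our own minimal solver for exposition, not the method of [1], which uses the more efficient feature-sign search algorithm of [7]; the toy codebook and $\lambda$ value are assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(x, V, lam=0.1, n_iter=500):
    """Solve min_u ||x - V^T u||^2 + lam*|u|_1 by ISTA (rows of V = codewords)."""
    L = 2.0 * np.linalg.norm(V @ V.T, 2)   # Lipschitz constant of the gradient
    u = np.zeros(V.shape[0])
    for _ in range(n_iter):
        grad = 2.0 * V @ (V.T @ u - x)     # gradient of the fidelity term
        u = soft(u - grad / L, lam / L)    # gradient step + shrinkage
    return u

# Overcomplete toy codebook (K = 3 > D = 2) and a descriptor to encode
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
x = np.array([1.0, 0.0])
u = sparse_code(x, V)
```

With this toy input the code concentrates on the first codeword and leaves the others near zero, illustrating the sparsity induced by the $\ell_1$ penalty.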

2.3 Our model and algorithm for Sparse Coding

2.3.1 Our Model: Non-convex and Non-negative Sparse Coding

Because of its less restrictive constraint, SC coding can achieve a much lower reconstruction error than VQ coding. Research in image statistics has clearly disclosed that natural image patches are sparse signals, and sparse representation helps capture the salient properties of images.

The SC model is based on the $\ell_1$ norm, which is a popular sparsity-enforcing regularization due to its convexity. However, non-convex sparse regularizations, such as the widely used $\ell_p$ norm ($0 < p < 1$), prefer an even sparser solution. Although many works based on non-convex penalization exist for image processing, to the best of our knowledge, there are only a few specific works for image classification. The major difficulties with the existing non-convex algorithms are that the globally optimal solution cannot be efficiently computed for now, the behavior of a local solution is hard to analyze, and, more seriously, prior structural information about the solution is hard to incorporate.

The SC model in [1], followed by max pooling, uses no non-negative constraint. It is well known that negative coefficients are usually needed to reach the optimum. Because the non-zero components typically reflect salient feature information, max pooling loses the information carried by these negative components and thereby reduces the accuracy of image classification [3].

Taking these two observations into account, this paper proposes a non-convex and non-negative sparse model based on the convex sparse coding model, together with a multistage convex relaxation algorithm for it via iterative support detection [5], whose solution has been proved, in many cases, to be sparser than the plain $\ell_1$ solution. Our new sparse model is a truncated $\ell_1$ model with nonnegative constraints:

$$\min_{u} \| x - u V \|_2^2 + \lambda \sum_{i} w_i |u_i| \quad \text{s.t.} \quad u \succeq 0, \qquad (4)$$

where $w_i \in \{0, 1\}$ are the weights: the components of $u$ corresponding to zero weights are removed out of the penalty and will not be penalized [5].

While one commonly considers the plain $\ell_1$ model, i.e., all the weights the same and equal to 1, in practice we might be able to obtain more prior information beyond the sparsity assumption. For example, we might know the locations of certain coefficients of large nonzero absolute value. In such cases, we should not penalize these coefficients in the $\ell_1$-norm regularization: we remove them out of the norm (the corresponding weights being 0) and use a truncated $\ell_1$ norm [5].

2.3.2 Our Algorithm: ISD-YAll1 algorithm

The difficulty is that this kind of partial support information is not available beforehand in practice. Correspondingly, in this paper, we propose to take advantage of the idea of Iterative Support Detection (ISD, for short) [5] to extract reliable information about the true solution and set up the weights accordingly. The procedure is an alternating optimization, which repeatedly takes the following two steps:

Step 1: Optimize $u$ with the weights $w$ fixed (initially $w = \mathbf{1}$): this is a convex problem in $u$.

Step 2: Determine the value of $w$ according to the current $u$. This value of the weights will be used in Step 1 of the next iteration.

For Step 1, the truncated nonnegative $\ell_1$-norm optimization problem can be efficiently solved via the Yall1 algorithm [10]. For Step 2, the locations of the large nonzeros are estimated from the solution of the last (truncated) $\ell_1$-norm optimization problem via the support detection procedure. Our algorithm is named ISD-Yall1 and is described as follows:

Algorithm: The Proposed ISD-Yall1 Algorithm

Given the descriptors $X$ extracted from an image and the codebook $V$:

1. Set the iteration $s = 0$ and initialize the set of detected entries $I^{(0)} = \emptyset$.
2. While the stopping condition is not satisfied:
(a) set $w_i = 0$ for $i \in I^{(s)}$ and $w_i = 1$ for $i \in (I^{(s)})^C$;
(b) solve the truncated $\ell_1$ minimization (4) for $u^{(s)}$ (Step 1: using the Yall1 method);
(c) run support detection using $u^{(s)}$ as the reference (Step 2: using the threshold-ISD strategy);
(d) $s \leftarrow s + 1$.

Here $(I^{(s)})^C$ denotes the complement of $I^{(s)}$. The support, i.e. the set of the nonzeros of large magnitudes, is estimated by thresholding as follows [5]:

$$I^{(s+1)} = \{ i : |u_i^{(s)}| > \epsilon^{(s)} \}. \qquad (5)$$
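The following NumPy sketch mirrors the loop above under simplifying assumptions: Step 1 is solved by projected gradient descent instead of the Yall1 solver, and the detection threshold is a fixed fraction of the largest coefficient rather than the threshold-ISD rule of [5]. All names (`nneg_weighted_l1`, `isd_sketch`) and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def nneg_weighted_l1(x, A, w, lam=0.1, n_iter=400):
    """min_u ||A u - x||^2 + lam * sum_i w_i u_i  s.t. u >= 0 (projected gradient)."""
    L = 2.0 * np.linalg.norm(A.T @ A, 2)          # gradient Lipschitz constant
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ u - x) + lam * w  # penalty is linear on u >= 0
        u = np.maximum(u - grad / L, 0.0)         # project onto u >= 0
    return u

def isd_sketch(x, A, lam=0.1, n_stage=3, frac=0.5):
    """Alternate weighted minimization (Step 1) and support detection (Step 2)."""
    w = np.ones(A.shape[1])                       # start from the plain l1 model
    u = np.zeros(A.shape[1])
    for _ in range(n_stage):
        u = nneg_weighted_l1(x, A, w, lam)        # Step 1
        detected = u > frac * u.max()             # Step 2: detected support
        w = np.where(detected, 0.0, 1.0)          # detected entries unpenalized
    return u

# Toy dictionary with columns as codewords, and a signal to decompose
A = np.array([[1.0, 0.0, 0.7071],
              [0.0, 1.0, 0.7071]])
x = np.array([1.0, 0.0])
u = isd_sketch(x, A)
```

After detection the first coefficient leaves the penalty, so the final stage reconstructs $x$ almost exactly; this bias removal on the detected support is the intended effect of the truncated penalty.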

In the pooling phase, because of our non-negative model, the pooling function is defined as $z_j = \max_i u_{ij}$, where $z_j$ is the $j$-th element of $z$, $u_{ij}$ is the matrix element at the $i$-th row and $j$-th column of $U$, and the maximum is taken over the $M$ local descriptors in the region. The pooled feature $z$ is then used in a linear SVM classifier for image classification.
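The pooling function above amounts to a column-wise maximum over the code matrix. A short NumPy illustration, with a hypothetical 2 x 3 code matrix of our own choosing:

```python
import numpy as np

def max_pool(U):
    """z_j = max_i u_ij: maximum response of each codeword over the region."""
    return U.max(axis=0)

# Two descriptors (rows), three codewords (columns); nonnegative codes
U = np.array([[0.2, 0.0, 0.5],
              [0.7, 0.1, 0.0]])
z = max_pool(U)
```

Because the codes are constrained to be nonnegative, no absolute value is needed in the maximum, which is exactly why the non-negative model loses nothing here.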

3 Experimental Results

In this section, we report comparison results between our proposed non-convex and non-negative model and ScSPM on two widely used public datasets: the 15 Scenes dataset [19] and the UIUC-Sport dataset [20]. The parameter settings of the experiments are also analyzed in this section. Besides our own implementations, we also quote some results directly from the literature, especially those of ScSPM from [1] and [6]. All the experiments were performed under Windows 7 and Matlab (R2013b), running on a desktop with an Intel(R) CPU i5-4590 (3.3 GHz) and 8 GB of memory.

3.1 Parameters Setting

The local feature descriptor is essential to image representation. In our work, we adopt the widely used SIFT feature due to its excellent performance in image classification. For a fair comparison with others, in the training phase we use 50,000 SIFT descriptors extracted from random patches to train the codebook, the same as in [1]. In the sparse coding phase, the two most important parameters are: (i) the sparsity regularization parameter $\lambda$; the performance of ScSPM [1] is best when it lies between 0.2 and 0.4, and we follow the same setting of the interval (0.2, 0.4). (ii) The threshold value used for computing the weights of the coefficients, which we set empirically. In our experiments, we compare our results with ScSPM and with KSPM, which uses spatial-pyramid histograms and Chi-square kernels [1]. NScSPM denotes the non-negative sparse model that uses only the non-negative constraint, without the non-convex one.

3.2 15 Scene Data Set

The 15 Scenes dataset contains 15 categories and 4485 images in all, with 200 to 400 images per category. The image content is diverse, containing not only indoor scenes, such as bedroom and kitchen, but also outdoor scenes, such as buildings and country. To compare with other work, we randomly select 100 images per class as training data and use the rest as test data. The detailed comparison results are shown in Table 1.

Method      Average Classification Rate (%)
KSPM        76.73 ± 0.65
ScSPM [1]   80.28 ± 0.93
NScSPM      81.30 ± 0.53
NNScSPM     81.92 ± 0.42
Table 1: Performance Comparison on the 15 Scenes Dataset

3.3 UIUC-Sport Data Set

UIUC-Sport contains 8 categories and 1792 images in all, and the number of images per category ranges from 137 to 250. These 8 categories are badminton, bocce, croquet, polo, rock climbing, rowing, sailing and snowboarding. We randomly select 70 images from each class as training data and use the rest as test data. We list our results in Table 2.

Method      Average Classification Rate (%)
ScSPM       82.85 ± 0.62
NScSPM      83.53 ± 0.72
NNScSPM     84.13 ± 0.37
Table 2: Results on the UIUC-Sport Dataset

From the results, we observe two points: (i) the non-convex and non-negative properties can indeed play an important role in image classification, and (ii) our proposed NNScSPM is superior to KSPM and ScSPM on both the 15 Scenes and UIUC-Sport datasets.

4 Conclusion

In this paper, we propose a non-convex and non-negative sparse coding model for image classification, which is efficiently solved by the proposed ISD-Yall1 algorithm. The non-convex property reflects the sparsity of the image representation, and the non-negative property avoids the information loss in max spatial pooling. Our numerical experiments effectively demonstrate its better performance.

References

  • [1] Yang J, Yu K, Gong Y, et al. Linear spatial pyramid matching using sparse coding for image classification[C]//Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on. IEEE, 2009: 1794-1801.
  • [2] Wang J, Yang J, Yu K, Lv F, Huang T, Gong Y. Locality-constrained linear coding for image classification[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.
  • [3] Zhang C, Liu J, Tian Q, Xu C, Lu H, Ma S. Image classification by non-negative sparse coding, low-rank and sparse decomposition[C]//Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011: 1673-1680.
  • [4] He L, Wang Y. Iterative support detection based split Bregman method for wavelet frame based image inpainting[J]. IEEE Transactions on Image Processing, 2014, 23(12): 5470-5485.

  • [5] Wang Y, Yin W. Sparse signal reconstruction via iterative support detection[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 462-491.
  • [6] Gao S, Tsang I W, Chia L T, et al. Local features are not lonely - Laplacian sparse coding for image classification[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 3555-3561.
  • [7] Lee H, Battle A, Raina R, et al. Efficient sparse coding algorithms[C]//Advances in neural information processing systems. 2006: 801-808.
  • [8] Shen X, Wu Y. A unified approach to salient object detection via low rank matrix recovery[C]//Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012: 853-860.
  • [9] Ji Z, Theiler J, Chartrand R, et al. SIFT-based Sparse Coding for Large-scale Visual Recognition[J]. SPIE Defense Security Sens, 2013.
  • [10] Yang J, Zhang Y. Alternating direction algorithms for l1-problems in compressive sensing[J]. ArXiv e-prints, 2009.
  • [11] Hastie T, Tibshirani R, Friedman J, et al. The elements of statistical learning[M]. New York: Springer, 2009.
  • [12] Boureau Y L, Bach F, LeCun Y, et al. Learning mid-level features for recognition[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 2559-2566.
  • [13] Mairal J, Bach F, Ponce J. Sparse Modeling for Image and Vision Processing[J]. arXiv preprint arXiv:1411.3230, 2014.
  • [14] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International journal of computer vision, 2004, 60(2): 91-110.
  • [15] Csurka G, Dance C, Fan L, et al. Visual categorization with bags of keypoints[C]//Workshop on statistical learning in computer vision, ECCV. 2004, 1(1-22): 1-2.
  • [16] Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos[C]//Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. IEEE, 2003: 1470-1477.
  • [17] Yang J, Yu K, Huang T. Supervised translation-invariant sparse coding[C]//Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010: 3517-3524.
  • [18] Grauman K, Darrell T. The pyramid match kernel: Discriminative classification with sets of image features[C]//Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on. IEEE, 2005, 2: 1458-1465.
  • [19] Lazebnik S, Schmid C, Ponce J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[C]//Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. IEEE, 2006, 2: 2169-2178.
  • [20] Li L J, Fei-Fei L. What, where and who? classifying events by scene and object recognition[C]//Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE, 2007: 1-8.