CSIFT Based Locality-constrained Linear Coding for Image Classification

In the past decade, the SIFT descriptor has proven to be one of the most robust local invariant feature descriptors and has been widely used in various vision tasks. Most traditional image classification systems depend on luminance-based SIFT descriptors, which analyze only the gray-level variations of images; misclassification may occur because their color content is ignored. In this article, we concentrate on improving the performance of existing image classification algorithms by adding color information. To this end, different kinds of colored SIFT (CSIFT) descriptors are introduced and implemented. Locality-constrained Linear Coding (LLC), a state-of-the-art sparse coding technique, is employed to construct the image classification system for the evaluation. Real experiments are carried out on several benchmarks. With the enhancement of color SIFT, the proposed image classification system obtains an improvement in classification accuracy of approximately 3% on the Caltech-101 dataset and approximately 4% on the Caltech-256 dataset.




I Introduction

Scale invariant feature transform (SIFT) descriptors [1] are widely used in many vision tasks, such as object recognition, image classification and video retrieval. SIFT has proven to be a very robust local invariant feature descriptor with respect to different geometrical changes. However, it was mainly developed for gray images, and the color information of objects is neglected; therefore, two objects with completely different colors may be regarded as the same. To overcome this limitation, different kinds of Colored SIFT (CSIFT) descriptors have been proposed by researchers to incorporate color information into the SIFT descriptor [2] [3] [4] [5] [6]. With the enhancement of color information, CSIFT descriptors can better resist certain photometric changes. One example can be found in [3], which shows that CSIFT is more stable than SIFT under illumination changes.

On the other hand, the bag-of-features (BoF) model [7] [8] combined with the spatial pyramid matching (SPM) kernel [9] has been employed to build recent state-of-the-art image classification systems. In BoF, images are treated as sets of unordered local appearance descriptors, which are clustered into discrete visual words for the representation of images in semantic classification.

SPM divides an image into segments at different scales, computes the BoF histogram within each segment, and finally concatenates all the histograms to build a spatial-location-sensitive descriptor of the image. To obtain better classification performance, a codebook (a set of visual words, also called a dictionary) is constructed to represent the extracted descriptors. Traditional SPM uses clustering techniques such as K-means vector quantization (VQ) to generate the codebook. Despite their efficiency, the obtained codebooks usually suffer from drawbacks such as distortion errors and low discriminative ability [10]. A linear SPM based on sparse coding (ScSPM) [11] was proposed by Yang et al. to relax the restrictive cardinality constraint of VQ. By generalizing vector quantization to sparse coding followed by multi-scale spatial max-pooling, ScSPM significantly outperforms the traditional SPM kernel on histograms and even surpasses nonlinear SPM kernels on several benchmarks.

Yu et al. [12] demonstrated that, under certain assumptions, locality is more essential than sparsity for training nonlinear classifiers, and proposed a modification of SC named Local Coordinate Coding (LCC). However, both SC and LCC require solving a computationally expensive L1-norm optimization problem. Wang et al. developed a faster implementation of LCC, named Locality-constrained Linear Coding (LLC) [13], which uses a locality constraint to project each descriptor onto its local coordinate system. It achieves state-of-the-art image classification accuracy even with a simple linear SVM classifier.

According to our literature survey, although various sparse representation (SR) based image classification algorithms with state-of-the-art performance have been developed, most of them use only luminance-based SIFT descriptors [11] [13] [14] [15] [16] [10]. Using color information can improve the robustness of the traditional SIFT descriptor with respect to both color variations and geometrical changes. However, given the diversity of CSIFT descriptors, the following questions are worth studying.

  • Which CSIFT descriptor is best for an SR based image classification system?

  • To what extent can the performance of an SR based image classification system be improved by using CSIFT?

To fully exploit the potential of CSIFT descriptors for image category recognition tasks, a CSIFT based image classification system is constructed in this work. As a widely used state-of-the-art SC based encoding algorithm, LLC is employed to encode the CSIFT descriptors for classification. Real experiments with different kinds of CSIFT descriptors demonstrate that significant improvements can be obtained with the enhancement of color information.

The rest of this article is organized as follows. In section II, a reflectance model for color analysis is presented. In section III, different kinds of CSIFT descriptors and their properties are discussed. Section IV introduces the basic concepts of LLC. In sections V and VI, real experiments are carried out to study the proposed algorithm from various aspects. Finally, conclusions are drawn in section VII.

II Dichromatic Reflection Model

A physical model of reflection, named the Dichromatic Reflection Model, was presented by Shafer in 1985 [17]. It investigates the relationship between the RGB values of captured images and photometric changes in the environment, such as shadows and specularities. Shafer indicated that the reflection of incident light can be divided into two distinct components: specular reflection and body (diffuse) reflection. Specular reflection occurs when a ray of light hits a smooth surface: the ray is reflected at the same angle as the incident ray, and this component causes the effect of highlights. Diffuse reflection occurs when a ray of light hits a surface and is reflected back in every direction.

Consider an image of an infinitesimal surface patch of some object. Let $f_R(\lambda)$, $f_G(\lambda)$ and $f_B(\lambda)$ denote the spectral sensitivities of the red, green and blue sensors, respectively. The corresponding sensor values of the surface image are [17] [18]:

$$C = m^b(\mathbf{n}, \mathbf{s}) \int_{\lambda} e(\lambda)\, \rho(\lambda)\, f_C(\lambda)\, d\lambda + m^s(\mathbf{n}, \mathbf{s}, \mathbf{v}) \int_{\lambda} e(\lambda)\, c_f(\lambda)\, f_C(\lambda)\, d\lambda, \quad C \in \{R, G, B\} \tag{1}$$

where $C$ is the color channel, $\lambda$ is the wavelength, $\mathbf{n}$ is the surface patch normal, $\mathbf{s}$ is the direction of the illumination source, and $\mathbf{v}$ is the direction of the viewer. $e(\lambda)$ is the power of the incident light at wavelength $\lambda$, and $\rho(\lambda)$ and $c_f(\lambda)$ are the surface albedo and the Fresnel reflectance, respectively. The geometric terms $m^b(\mathbf{n}, \mathbf{s})$ and $m^s(\mathbf{n}, \mathbf{s}, \mathbf{v})$ represent the diffuse reflection and the specular reflection, respectively.

In case the white illumination and neutral interface reflection model holds, the incident light energy and the Fresnel reflectance term are both constants independent of the wavelength $\lambda$, i.e. the following is assumed to hold:

$$e(\lambda) = e, \qquad c_f(\lambda) = c_f \tag{2}$$

Eq. (1) can then be simplified to:

$$C = e\, m^b(\mathbf{n}, \mathbf{s})\, k_C + e\, m^s(\mathbf{n}, \mathbf{s}, \mathbf{v})\, c_f \int_{\lambda} f_C(\lambda)\, d\lambda, \qquad k_C = \int_{\lambda} \rho(\lambda)\, f_C(\lambda)\, d\lambda \tag{3}$$

where $k_C$ is a variable depending only on the sensors and the surface albedo.

III Colored SIFT Descriptors

On the basis of the Dichromatic Reflection Model, the stability and reliability of color spaces with regard to various photometric events, such as shadows and specularities, have been studied both theoretically and empirically [19] [2] [20]. Although many color space models exist, they are correlated with intensity, are linear combinations of the RGB channels, or are normalized with respect to intensity (rgb) [19]. In this article, we concentrate on investigating CSIFT using essentially different color spaces: RGB, HSV, YCbCr, Opponent, rg and color invariant spaces.

III-A SIFT

The SIFT algorithm was originally developed for gray images by Lowe [21] [1] to extract highly discriminative local image features that are invariant to image scaling and rotation, and partially invariant to changes in illumination and viewpoint. It has been used in a broad range of vision tasks, such as image classification, recognition and content-based image retrieval. The algorithm involves two steps: 1) extraction of the keypoints of an image; 2) computation of the feature vectors characterizing the keypoints. The first step is carried out by convolving the input image with the DoG (difference of Gaussians) function at multiple scales and detecting the extrema of the outputs. The second step is achieved by sampling the magnitudes and orientations of the image gradient in a patch around each detected feature. A 128-D vector of direction histograms is finally constructed as the descriptor of each patch. Since the SIFT descriptor is normalized, it is invariant to changes in gradient magnitude. However, light color changes do affect it, because the intensity channel is a combination of the R, G and B channels.
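As a rough illustration of the first step, the single-scale DoG response can be sketched as follows (a minimal sketch assuming `scipy`; the function name and parameter defaults are ours, not part of any reference SIFT implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(img, sigma=1.6, k=2 ** 0.5):
    """Difference-of-Gaussians response at one scale.

    SIFT's first step blurs the image at two nearby scales and subtracts
    the results; extrema of this response across space and scale are
    keypoint candidates. This is only the single-scale building block,
    not the full SIFT detector.
    """
    return gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)
```

On a perfectly flat image the response is zero everywhere, which is why keypoints concentrate on blob- and edge-like structures.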

III-B RGB-SIFT

As the most popular color model, the RGB color space provides plenty of information for vision applications. To embed RGB color information into the SIFT descriptor, we simply calculate the traditional SIFT descriptor on each channel of the RGB color space. By combining the extracted features, a 384-dimensional descriptor is built (128 dimensions for each color channel). Compared with the conventional luminance-based SIFT, the RGB color gradients (or edges) of the image are captured.
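The channel-wise construction can be sketched as follows (`color_sift` is an illustrative helper name; `sift_fn` stands for any dense-SIFT routine that maps one gray-level channel to an (N, 128) descriptor array):

```python
import numpy as np

def color_sift(channels, sift_fn):
    """Build a color SIFT descriptor by concatenating per-channel SIFT.

    `channels` is a list of the three color-channel images and `sift_fn`
    is a pluggable dense-SIFT routine returning an (N, 128) array of
    patch descriptors for one channel. The three outputs are stacked
    side by side into one (N, 384) descriptor array.
    """
    per_channel = [sift_fn(c) for c in channels]   # three (N, 128) arrays
    return np.concatenate(per_channel, axis=1)     # (N, 384)
```

The same helper applies unchanged to the HSV, YCbCr, Opponent and color invariant variants below: only the channel images passed in differ.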

III-C HSV-SIFT

HSV-SIFT was introduced by Bosch et al. and employed for scene classification [22]. Similar to RGB-SIFT discussed above, SIFT descriptors are computed over all three channels of the HSV color model, producing a 384-dimensional descriptor for each point. It is worth mentioning that the H channel of the HSV color model is scale-invariant and shift-invariant with respect to light intensity. However, due to the combination of the HSV channels, the descriptor as a whole has no invariance properties. The conversion from RGB space to HSV space is defined by Eqs. (4)-(6):

$$V = \max(R, G, B) \tag{4}$$

$$S = \frac{V - \min(R, G, B)}{V} \tag{5}$$

$$H = \begin{cases} 60\,\dfrac{G - B}{V - \min(R, G, B)} & \text{if } V = R \\[2mm] 120 + 60\,\dfrac{B - R}{V - \min(R, G, B)} & \text{if } V = G \\[2mm] 240 + 60\,\dfrac{R - G}{V - \min(R, G, B)} & \text{if } V = B \end{cases} \tag{6}$$

where $\max(R, G, B)$ and $\min(R, G, B)$ denote the maximal and minimal of the three RGB values, respectively.
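A minimal sketch of the conversion described by Eqs. (4)-(6), for RGB values in [0, 1] (the function name is ours; a library routine such as `matplotlib.colors.rgb_to_hsv` could be used instead):

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an (N, 3) array of RGB triples in [0, 1] to HSV.

    V is the maximal channel, S the normalized chroma, and H the hue
    angle in degrees, chosen piecewise by whichever channel is maximal.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = np.maximum(np.maximum(r, g), b)            # value: max channel
    mn = np.minimum(np.minimum(r, g), b)
    c = v - mn                                     # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    h = np.zeros_like(v)
    nz = c > 0
    rmax = nz & (v == r)
    gmax = nz & (v == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = (60.0 * (g - b)[rmax] / c[rmax]) % 360.0
    h[gmax] = 60.0 * (b - r)[gmax] / c[gmax] + 120.0
    h[bmax] = 60.0 * (r - g)[bmax] / c[bmax] + 240.0
    return np.stack([h, s, v], axis=-1)
```

Scaling all channels by a constant leaves H unchanged, which is the intensity invariance of the hue channel noted above.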

III-D rg-SIFT

The rg-SIFT descriptors are obtained from the rg color space, the normalized RGB color model, which uses the r and g channels to describe the color information of the image (b is redundant once r and g are given). The rg color space is already scale-invariant with respect to light intensity. The conversion from RGB space to rg space is defined as follows:

$$r = \frac{R}{R + G + B} \tag{7}$$

$$g = \frac{G}{R + G + B} \tag{8}$$

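The rg normalization is simple enough to state directly (a sketch; `rgb_to_rg` is an illustrative name, and the small epsilon guarding against black pixels is our addition):

```python
import numpy as np

def rgb_to_rg(rgb, eps=1e-12):
    """Normalized rg chromaticity: r = R/(R+G+B), g = G/(R+G+B).

    The b channel (1 - r - g) is redundant and dropped. Scaling all
    channels by a constant, i.e. a light-intensity change, leaves the
    r and g values unchanged.
    """
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb[..., :2] / np.maximum(s, eps)
```

The second assertion in the usage check below exercises exactly the scale invariance claimed for this space.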
III-E YCbCr-SIFT

As one of the most popular color spaces, the YCbCr color space provides a very efficient representation of scenes/images and is widely used in the field of video compression. It represents colors in terms of one luminance component (Y) and two chrominance components (Cb and Cr). The YCbCr-SIFT descriptors are computed on all the channels of the YCbCr color space. A YCbCr image can be converted from an RGB image using the equation below:

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{9}$$
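A sketch of the conversion, assuming the standard ITU-R BT.601 coefficients (the exact matrix used in the paper is not recoverable from the text and may differ slightly):

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr matrix (an assumed, common convention).
_M_YCBCR = np.array([[ 0.299,  0.587,  0.114],
                     [-0.169, -0.331,  0.500],
                     [ 0.500, -0.419, -0.081]])

def rgb_to_ycbcr(rgb):
    """Y carries luminance; Cb and Cr carry blue- and red-difference
    chroma. Works on a single (3,) triple or an (N, 3) array."""
    return rgb @ _M_YCBCR.T
```

For a neutral gray input the two chroma channels vanish, which is what makes Y a pure luminance axis.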
III-F Opponent-SIFT

The Opponent color space was first proposed by Ewald Hering in the late 19th century [23]. It consists of three channels ($O_1$, $O_2$, $O_3$), in which $O_3$ represents the luminance of the image, while the other two describe the opponent colors (red-green and blue-yellow). The Opponent-SIFT descriptor is obtained by computing the SIFT descriptor over each channel of the Opponent color space and combining them. The transformation of RGB images into the opponent color space is defined by Eq. (10):

$$\begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix} = \begin{bmatrix} (R - G)/\sqrt{2} \\ (R + G - 2B)/\sqrt{6} \\ (R + G + B)/\sqrt{3} \end{bmatrix} \tag{10}$$
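A direct sketch of the Eq. (10) transform (the function name is ours):

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Opponent color space: O1 = (R-G)/sqrt(2) is the red-green axis,
    O2 = (R+G-2B)/sqrt(6) the blue-yellow axis, and O3 = (R+G+B)/sqrt(3)
    the intensity axis."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2.0 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)
```

A gray pixel maps onto the pure intensity axis: both opponent channels are zero.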
III-G Color Invariant SIFT

Inspired by the Dichromatic Reflection Model (see section II), a color-based photometric invariant scheme was proposed by Geusebroek et al. [2]. It was first applied to the SIFT descriptor by Abdel-Hakim and Farag [3]. A linear transformation from RGB to the color invariant space is given by:

$$\begin{bmatrix} \hat{E} \\ \hat{E}_\lambda \\ \hat{E}_{\lambda\lambda} \end{bmatrix} = \begin{bmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{11}$$

where $\hat{E}$, $\hat{E}_\lambda$ and $\hat{E}_{\lambda\lambda}$ denote, respectively, the intensity, the yellow-blue channel and the red-green channel. Their spatial derivatives are the spectral differential quotients, and measurement of the color invariants is obtained from ratios of these quantities, which cancel the intensity term.
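A sketch of the linear transform, assuming the commonly quoted best-fit matrix of the Gaussian color model from [2] (the coefficients are this assumed standard form):

```python
import numpy as np

# Gaussian color model of Geusebroek et al. [2]: E approximates intensity,
# E_lambda the yellow-blue channel, E_lambda_lambda the red-green channel.
_M_GAUSS = np.array([[0.06,  0.63,  0.27],
                     [0.30,  0.04, -0.35],
                     [0.34, -0.60,  0.17]])

def rgb_to_gaussian_color(rgb):
    """Map RGB triples into the (E, E_lambda, E_lambda_lambda) space."""
    return rgb @ _M_GAUSS.T
```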

IV Locality-constrained Linear Coding

The bag-of-features (BoF) approach now plays a leading role in generic image classification research [11] [13] [15]. It commonly consists of feature extraction, codebook construction, feature coding, and feature pooling. Previous experimental results show that, given a visual codebook, the choice of coding scheme has a significant impact on classification performance.

Various coding algorithms have been developed [11] [13] [15] [10]. Among them, Locality-constrained Linear Coding (LLC) [13] is considered one of the most representative methods, providing both fast coding speed and state-of-the-art classification accuracy. It has been widely cited in academic papers and employed in image classification applications. In this article, LLC is selected for feature coding in our experiments.

Let $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{D \times N}$ denote a set of $D$-dimensional local descriptors extracted from an image, and let $B = [b_1, b_2, \ldots, b_M] \in \mathbb{R}^{D \times M}$ be a visual codebook with $M$ entries. A coding method converts each descriptor $x_i$ into an $M$-dimensional code. Unlike sparse coding, LLC enforces a locality constraint instead of a sparsity constraint. A reconstruction over the basis descriptors is acquired by optimizing:

$$\min_{C} \sum_{i=1}^{N} \left\| x_i - B c_i \right\|^2 + \lambda \left\| d_i \odot c_i \right\|^2 \quad \text{s.t.} \ \mathbf{1}^\top c_i = 1, \ \forall i \tag{12}$$

where $\odot$ denotes element-wise multiplication and $d_i \in \mathbb{R}^M$ is the locality adaptor that gives a different degree of freedom to each basis descriptor, proportional to its similarity to the input descriptor $x_i$. Specifically,

$$d_i = \exp\!\left( \frac{\operatorname{dist}(x_i, B)}{\sigma} \right) \tag{13}$$

where $\operatorname{dist}(x_i, B) = [\operatorname{dist}(x_i, b_1), \ldots, \operatorname{dist}(x_i, b_M)]^\top$, $\operatorname{dist}(x_i, b_j)$ is the Euclidean distance between $x_i$ and $b_j$, and $\sigma$ adjusts the weight decay speed of the locality adaptor $d_i$.

An approximation is proposed in [13] to improve computational efficiency in practice by ignoring the second term in Eq. (12): the $k$ nearest basis descriptors of $x_i$, collected as $B_i$, are used directly to minimize the first term. The encoding process is thereby simplified to solving a much smaller linear system:

$$\min_{\tilde{C}} \sum_{i=1}^{N} \left\| x_i - B_i \tilde{c}_i \right\|^2 \quad \text{s.t.} \ \mathbf{1}^\top \tilde{c}_i = 1, \ \forall i \tag{14}$$

This gives the coding coefficients of the $k$ selected basis vectors; the other coefficients are set to zero.
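The approximated encoding of a single descriptor can be sketched as follows (a sketch after the scheme of Wang et al. [13]; the function name and the small regularization constant are our choices):

```python
import numpy as np

def llc_approx(x, B, k=5):
    """Approximated LLC code for one descriptor x (D,) against a
    codebook B (M, D): select the k nearest bases, solve the small
    shifted least-squares system, and rescale to satisfy the
    sum-to-one constraint of Eq. (14)."""
    d2 = ((B - x) ** 2).sum(axis=1)        # squared distances to all bases
    idx = np.argsort(d2)[:k]               # indices of k nearest entries
    Bi = B[idx]                            # (k, D) local bases
    z = Bi - x                             # shift bases to the origin
    C = z @ z.T                            # k x k data covariance
    C += np.eye(k) * 1e-4 * np.trace(C)    # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce 1^T c = 1
    code = np.zeros(B.shape[0])
    code[idx] = w                          # remaining coefficients are zero
    return code
```

The code is at most $k$-sparse by construction, which is how the approximation trades the explicit locality penalty for a hard nearest-neighbor selection.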

V Experimental Results

To evaluate the performance of the different CSIFT descriptors in a sparse representation based image classification system, two benchmark datasets, Caltech-101 [24] and Caltech-256 [25], are employed in the experiments. Since color information is a prerequisite for computing CSIFT descriptors, the gray-level images in Caltech-101 and Caltech-256 are removed to allow a fair comparison. For categories whose colored images are insufficient for training a stable classifier (fewer than 31 colored images), we add new color images of the same category so that every category contains at least 31 colored images.

V-A Implementation

In all the experiments, the same processing chain, with settings similar to those reported in the literature, is used to ensure consistency.

  1. Colored SIFT (CSIFT) / SIFT descriptor extraction. Dense CSIFT/SIFT descriptors are extracted as described in section III on a regular spatial grid, with a step size of 8 pixels and a fixed patch size. The dimension of the luminance-based SIFT descriptor is 128. For the color descriptors, RGB-SIFT, HSV-SIFT, YCbCr-SIFT, Opponent-SIFT, rg-SIFT and Color Invariant SIFT (C-SIFT) are implemented for the experiments.

  2. Codebook construction. After the CSIFT/SIFT descriptors are extracted, a codebook of size 1024 is created by running K-means clustering on a randomly selected subset of the extracted descriptors.

  3. Locality-constrained linear coding (LLC). The CSIFT/SIFT descriptors are encoded by LLC using the constructed codebooks. The number of neighbors is set to 5, with the shift-invariant constraint.

  4. Pooling with spatial pyramid matching (SPM) [9]. Max-pooling is adopted to compute the final descriptor of each image. It is performed with a 3-level SPM kernel (1×1, 2×2 and 4×4 sub-regions in the corresponding levels), with equal weight at each layer. The pooled features of the sub-regions are concatenated and normalized to form the final descriptor of each image.

  5. Classification. A one-vs-all linear SVM classifier [26] is trained, since it has shown good performance.
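The pooling stage of step 4 can be sketched as follows (an illustrative `spm_max_pool`, assuming patch centers `xy` and a 1×1/2×2/4×4 pyramid; the ℓ2 normalization at the end is the usual companion of max-pooling in LLC):

```python
import numpy as np

def spm_max_pool(codes, xy, img_w, img_h, levels=(1, 2, 4)):
    """Max-pool LLC codes over a spatial pyramid and concatenate.

    codes: (N, M) LLC codes of the N local descriptors; xy: (N, 2)
    patch centers in pixels. For each pyramid level the image is split
    into level x level cells, the cell-wise maximum over codes is taken,
    and all cell vectors are stacked into one descriptor.
    """
    pooled = []
    for lv in levels:
        cx = np.minimum((xy[:, 0] * lv / img_w).astype(int), lv - 1)
        cy = np.minimum((xy[:, 1] * lv / img_h).astype(int), lv - 1)
        cell = cy * lv + cx                       # flat cell index per patch
        for c in range(lv * lv):
            members = codes[cell == c]
            pooled.append(members.max(axis=0) if len(members)
                          else np.zeros(codes.shape[1]))
    f = np.concatenate(pooled)                    # (1 + 4 + 16) * M features
    return f / max(np.linalg.norm(f), 1e-12)      # l2-normalized descriptor
```

With a 1024-entry codebook this yields a 21 × 1024 = 21504-dimensional image descriptor, which is then fed to the linear SVM of step 5.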

V-B Assessment of Color Descriptors on the Caltech-101 Dataset

The proposed algorithm is evaluated on the color images of the Caltech-101 dataset, which contains 101 object categories including animals, flowers, vehicles, and shapes with significant variance. Some color images are added to avoid insufficient training data in certain categories, as discussed before. The number of original images per category still varies from 31 to 800. To test performance with different sizes of training data, different numbers (5, 10, ..., 30) of training images per category are evaluated. In each experiment, we randomly select the training images of each category and leave the remainder for testing. The images were resized so that the maximum of height and width is no larger than 300 pixels, with the aspect ratio preserved. For simplicity, the codebook size is fixed to 1024 (the performance of different codebook sizes is studied in section VI-A). The corresponding results for the different CSIFT/SIFT descriptors (RGB-SIFT, SIFT, HSV-SIFT, YCbCr-SIFT, Opponent-SIFT, rg-SIFT and C-SIFT) are illustrated in Table I and Figure 1. According to the experimental results, all the CSIFT/SIFT descriptors achieve their best classification accuracy with 30 training images per class, which indicates that more training data may bring better testing accuracy, although the improvement becomes slight as the number of training images grows. Both RGB-SIFT and YCbCr-SIFT outperform the state-of-the-art luminance-based SIFT on this dataset, with YCbCr-SIFT achieving the best performance. For instance, when 30 images of each category are used for training, YCbCr-SIFT obtains an average classification accuracy of 69.18%, and RGB-SIFT provides the second-best average classification accuracy (68.65%). It is worth mentioning that even without color information, SIFT achieves the third-best average classification accuracy, 68.17%. An improvement of approximately 1% in average classification accuracy is obtained in this setting by employing CSIFT descriptors.

Training images 5 | 10 | 15 | 20 | 25 | 30
RGB-SIFT 45.77±1.02 | 55.90±0.69 | 61.26±0.84 | 64.84±0.68 | 66.70±0.81 | 68.65±1.13
SIFT 45.01±0.76 | 55.39±0.42 | 60.51±0.60 | 64.25±0.72 | 66.29±0.71 | 68.17±0.98
HSV-SIFT 33.96±0.96 | 44.06±0.40 | 50.48±0.60 | 54.42±0.63 | 57.76±0.94 | 59.47±1.31
YCbCr-SIFT 46.48±0.91 | 56.97±0.60 | 62.09±0.31 | 65.45±0.63 | 68.17±0.76 | 69.18±1.19
Opponent-SIFT 27.00±0.48 | 35.07±0.58 | 39.31±0.55 | 41.93±0.99 | 44.21±1.06 | 45.87±0.74
rg-SIFT 32.51±0.56 | 41.70±0.88 | 46.82±0.48 | 50.35±0.40 | 53.15±0.83 | 55.18±1.09
C-SIFT 32.67±0.52 | 41.90±0.43 | 47.87±0.56 | 51.02±0.59 | 54.05±0.69 | 55.72±0.88
TABLE I: Classification rate (%) comparison on Caltech-101

Fig. 1: Impact of the number of training images per class on the classification performance.

V-C Assessment of Color Descriptors on the Caltech-256 Dataset

A more complex dataset, Caltech-256 [25], is also employed in the experiments. It consists of 256 object classes and a total of 30,607 images, which exhibit much higher intra-class variability and object location variability than the images in Caltech-101. As in section V-B, the gray-level images are removed for a fair comparison of the CSIFT/SIFT descriptors. Since there are at least 80 color images per category, no extra images are added.

In each experiment, we randomly select a fixed number of images from every category for training and leave the remainder for testing. For simplicity, the codebook size is fixed to 4096 (in our experience, it produces the best classification performance). The images were resized so that the maximum of height and width is no larger than 300 pixels, with the aspect ratio preserved. The detailed classification results are shown in Table II and Figure 2. Among all the descriptors, YCbCr-SIFT again produces the best performance: when 60 randomly selected training images of each category are used, it achieves an average classification accuracy of 41.31%, and RGB-SIFT provides the second-best average classification accuracy (38.71%). Compared with the luminance-based SIFT descriptor, CSIFT brings an enhancement of approximately 4% in average classification accuracy, which can be significant in many image classification tasks.

Training images 15 | 30 | 45 | 60
RGB-SIFT 26.70±0.33 | 33.04±0.22 | 36.56±0.32 | 38.71±0.38
SIFT 25.06±0.07 | 31.22±0.24 | 34.92±0.39 | 37.22±0.35
HSV-SIFT 21.95±0.30 | 28.18±0.22 | 31.79±0.28 | 34.03±0.29
YCbCr-SIFT 28.58±0.32 | 35.20±0.18 | 38.97±0.34 | 41.31±0.27
Opponent-SIFT 14.37±0.24 | 17.92±0.22 | 20.0±0.20 | 21.43±0.45
rg-SIFT 18.16±0.24 | 22.98±0.26 | 25.88±0.36 | 27.63±0.31
C-SIFT 14.56±0.18 | 19.30±0.22 | 22.13±0.19 | 24.19±0.27
TABLE II: Classification rate (%) comparison on Caltech-256

Fig. 2: Impact of the number of training images per class on the classification performance.

VI Further Evaluations

The experimental results of sections V-B and V-C show that, among the different CSIFT descriptors, YCbCr-SIFT and RGB-SIFT achieve better image classification performance than the state-of-the-art luminance-based SIFT. However, it is well known that the codebook size, the number of neighbors in LLC, and the pooling method all affect the final classification results. In this section, further evaluations are carried out for a more comprehensive study of these two CSIFT descriptors.

VI-A Impact of Codebook Size

Firstly, we test the impact of different codebook sizes (512, 1024 and 2048) using the Caltech-101 dataset. As discussed in section V, the codebooks are trained by the K-means clustering algorithm. Different numbers (5, 10, ..., 30) of training images per category are evaluated, and the number of neighbors in LLC is set to 5. The corresponding results are presented in Table III, Table IV, Table V and Figure 3. The YCbCr-SIFT descriptor outperforms the others in all the tests. In most cases, the highest classification accuracy is obtained with the codebook of size 1024; when the codebook of size 2048 is used, the classification accuracies decrease (except for the YCbCr-SIFT descriptor with 30 training images per category). This may be caused by over-completeness of the codebooks, which results in large deviations when representing similar local features. It is interesting to note that using more training data might overcome the over-completeness problem: the YCbCr-SIFT descriptor with the codebook of size 2048 and 30 training images per category achieves the highest average classification accuracy.

VI-B Impact of the Number of Neighbors

The performance of the proposed algorithm with different numbers of neighbors K in LLC is also evaluated. The codebook size is fixed at 1024 and the number of training images per category is 30. The results are shown in Table VI and Figure 4. As the number of neighbors in LLC increases, the classification accuracy first rises and then drops. The highest average classification accuracy is obtained by the YCbCr-SIFT descriptor (72.59%). Compared with the highest classification result of SIFT (68.99%), an improvement of more than 3% is achieved.

VI-C Comparison of Pooling Methods

Besides max-pooling, sum-pooling is another choice that can be used to summarize the features of each SPM layer. Table VII and Table VIII show the experimental results of the two methods, respectively; in Figure 5 they are illustrated together for comparison. The codebook size is 1024 and the number of neighbors used in LLC is 5. It can be noticed that max-pooling significantly outperforms sum-pooling.

As can be seen from Figure 5, the best performance is achieved by the combination of max-pooling and ℓ2 normalization.

Training images 5 | 10 | 15 | 20 | 25 | 30
SIFT 46.01±0.65 | 55.81±0.41 | 60.98±0.50 | 63.99±0.97 | 66.23±0.49 | 67.10±1.10
RGB-SIFT 46.57±0.59 | 56.28±0.60 | 60.92±0.45 | 64.10±0.62 | 66.01±0.82 | 67.10±1.26
YCbCr-SIFT 46.81±0.81 | 57.18±0.39 | 62.25±0.56 | 65.53±0.65 | 67.62±0.61 | 69.16±0.80
TABLE III: The codebook of size 512
Training images 5 | 10 | 15 | 20 | 25 | 30
SIFT 45.01±0.76 | 55.39±0.42 | 60.51±0.60 | 64.25±0.72 | 66.29±0.71 | 68.17±0.98
RGB-SIFT 45.77±1.02 | 55.90±0.69 | 61.26±0.84 | 64.84±0.68 | 66.70±0.81 | 68.65±1.13
YCbCr-SIFT 46.48±0.91 | 56.97±0.60 | 62.09±0.31 | 65.45±0.63 | 68.17±0.76 | 69.18±1.19
TABLE IV: The codebook of size 1024
Training images 5 | 10 | 15 | 20 | 25 | 30
SIFT 43.56±0.78 | 54.18±0.78 | 60.08±0.72 | 63.18±0.54 | 65.68±0.63 | 67.91±1.21
RGB-SIFT 43.79±0.91 | 54.33±0.55 | 59.89±0.73 | 63.07±0.94 | 65.77±0.73 | 67.94±0.79
YCbCr-SIFT 44.62±0.75 | 55.21±0.51 | 61.42±0.33 | 65.13±0.66 | 67.42±0.64 | 69.45±0.84
TABLE V: The codebook of size 2048
Number of neighbors K 5 | 10 | 15 | 20 | 25 | 30
SIFT 67.91±1.21 | 68.41±1.03 | 68.74±0.94 | 68.31±0.84 | 68.99±0.86 | 68.51±1.17
RGB-SIFT 67.94±0.79 | 68.61±0.82 | 68.72±0.89 | 68.99±0.71 | 69.18±1.1 | 68.78±0.13
YCbCr-SIFT 69.45±0.84 | 70.44±1.03 | 71.37±0.72 | 72.59±0.63 | 72.56±1.22 | 72.39±1.47
TABLE VI: Comparison of different neighborhood sizes K

Fig. 3: Impact of the number of training images per class on the classification performance for different codebook sizes.

Fig. 4: Impact of the number of neighbors K on the classification performance.
Training images 5 | 10 | 15 | 20 | 25 | 30
SIFT 45.01±0.76 | 55.39±0.42 | 60.51±0.60 | 64.25±0.72 | 66.29±0.71 | 68.17±0.98
RGB-SIFT 45.77±1.02 | 55.90±0.69 | 61.26±0.84 | 64.84±0.68 | 66.70±0.81 | 68.65±1.13
YCbCr-SIFT 46.48±0.91 | 56.97±0.60 | 62.09±0.31 | 65.45±0.63 | 68.17±0.76 | 69.18±1.19
TABLE VII: The performance of max-pooling
Training images 5 | 10 | 15 | 20 | 25 | 30
SIFT 22.14±0.78 | 30.14±0.85 | 36.38±0.47 | 38.98±1.03 | 41.86±0.61 | 45.0±1.06
RGB-SIFT 22.67±0.73 | 30.64±0.63 | 36.26±0.87 | 40.04±0.41 | 42.71±0.82 | 45.24±0.77
YCbCr-SIFT 22.42±1.06 | 31.04±0.65 | 36.12±0.62 | 39.83±0.83 | 43.28±0.87 | 45.10±1.33
TABLE VIII: The performance of sum-pooling

Fig. 5: Impact of different pooling methods.

VII Conclusion

In this article, CSIFT descriptors are introduced to improve the state-of-the-art Locality-constrained Linear Coding (LLC) based image classification system. Different kinds of CSIFT descriptors are implemented and evaluated with various parameter settings. Real experiments have demonstrated that considerable improvements can be obtained by utilizing color information. Among the CSIFT descriptors, the YCbCr-SIFT descriptor achieves the most stable and accurate image classification performance. Compared with the highest average classification accuracy achieved by the luminance-based SIFT descriptor, the YCbCr-SIFT descriptor acquires an increase of approximately 3% on the Caltech-101 dataset (see section VI-B) and approximately 4% on the Caltech-256 dataset (see section V-C). Besides the YCbCr-SIFT descriptor, the RGB-SIFT descriptor also provides favorable performance. As LLC is one of the most representative SR based image classification algorithms, the improvements achieved on it show that using CSIFT descriptors is a promising approach to enhancing state-of-the-art SR based image classification systems. On the other hand, although some CSIFT descriptors are reported to achieve invariant or discriminative object recognition, we found that their performance is not as good as expected. One potential solution is to combine different CSIFT descriptors to build a better one, which we will study in future work.


This work is supported by the National Natural Science Foundation of China (No.61003143) and the Fundamental Research Funds for Central Universities (No.SWJTU12CX094).


  • [1] David G Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
  • [2] J-M Geusebroek, Rein van den Boomgaard, Arnold W. M. Smeulders, and Hugo Geerts. Color invariance. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(12):1338–1350, 2001.
  • [3] Alaa E Abdel-Hakim and Aly A Farag. CSIFT: A SIFT descriptor with color invariant characteristics. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 1978–1983. IEEE, 2006.
  • [4] Joost Van De Weijer, Theo Gevers, and Andrew D Bagdanov. Boosting color saliency in image feature detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28(1):150–156, 2006.
  • [5] Gertjan J Burghouts and Jan-Mark Geusebroek. Performance evaluation of local colour invariants. Computer Vision and Image Understanding, 113(1):48–62, 2009.
  • [6] Theo Gevers, Arjan Gijsenij, Joost Van de Weijer, and Jan-Mark Geusebroek. Color in computer vision: Fundamentals and applications, volume 24. Wiley, 2012.
  • [7] Donald Goldfarb and Ashok Idnani. A numerically stable dual method for solving strictly convex quadratic programs. Mathematical programming, 27(1):1–33, 1983.
  • [8] Gabriella Csurka, Christopher Dance, Lixin Fan, Jutta Willamowski, and Cédric Bray. Visual categorization with bags of keypoints. In Workshop on statistical learning in computer vision, ECCV, volume 1, page 22, 2004.
  • [9] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2169–2178. IEEE, 2006.
  • [10] Aymen Shabou and Hervé LeBorgne. Locality-constrained and spatially regularized coding for scene categorization. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3618–3625. IEEE, 2012.
  • [11] Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang. Linear spatial pyramid matching using sparse coding for image classification. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1794–1801. IEEE, 2009.
  • [12] Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. Advances in Neural Information Processing Systems, 22:2223–2231, 2009.
  • [13] Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, and Yihong Gong. Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3360–3367. IEEE, 2010.
  • [14] Jianchao Yang, Kai Yu, and Thomas Huang. Supervised translation-invariant sparse coding. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3517–3524. IEEE, 2010.
  • [15] Lingqiao Liu, Lei Wang, and Xinwang Liu. In defense of soft-assignment coding. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2486–2493. IEEE, 2011.
  • [16] Meng Yang, Lei Zhang, Xiangchu Feng, and David Zhang. Fisher discrimination dictionary learning for sparse representation. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 543–550. IEEE, 2011.
  • [17] Steven A Shafer. Using color to separate reflection components. Color Research & Application, 10(4):210–218, 1985.
  • [18] Theo Gevers, Joost Van De Weijer, Harro Stokman, et al. Color feature detection. Color image processing: methods and applications, pages 203–226, 2007.
  • [19] Theo Gevers, WM Smeulders, et al. Color based object recognition. Pattern recognition, 32(3):453–464, 1999.
  • [20] Koen EA van de Sande, Theo Gevers, and Cees GM Snoek. Evaluating color descriptors for object and scene recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(9):1582–1596, 2010.
  • [21] David G Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. Ieee, 1999.
  • [22] Anna Bosch, Andrew Zisserman, and Xavier Muoz. Scene classification using a hybrid generative/discriminative approach. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(4):712–727, 2008.
  • [23] Ewald Hering. Outlines of a Theory of the Light Sense, volume 344. Harvard University Press Cambridge, MA, 1964.
  • [24] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28(4):594–611, 2006.
  • [25] Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.
  • [26] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.