In recent years, image classification has been a classical problem in computer vision, and many successful algorithms Yu et al. (2012b); Shi et al. (2018); Wright et al. (2009); Yu et al. (2014); Song et al. (2018); Yu et al. (2013); Wang et al. (2018); Yu et al. (2012a); Yang et al. (2009, 2017); Liu et al. (2014a, 2017); Jiang et al. (2013); Hao et al. (2017); Chan et al. (2015); Nakazawa and Kulkarni (2018); Ji et al. (2014); Xu et al. (2019); Yuan et al. (2016) have been proposed to solve it. Among these algorithms, one category that contributes greatly to image classification is the sparse representation based method.
Sparse representation is capable of expressing the input sample features as a linear combination of atoms in an overcomplete basis set. Wright et al. (2009) proposed the sparse representation based classification (SRC) algorithm, which uses an ℓ1-norm regularization term to achieve impressive performance. SRC is the most representative of the sparse representation based methods. However, traditional sparse representation based methods exploit the training sample features directly, without considering the discriminative information that is crucial in real applications. That is to say, sparse representation based methods can achieve better performance if the discriminative information is properly harnessed.
To handle this problem, the dictionary learning (DL) method is introduced to preprocess the training sample features before classification. DL is a generative model for sparse representation whose concept was first proposed by Mallat and Zhang (1993). A few years later, Olshausen and Field (1996, 1997) applied DL to natural images, and it has since been widely used in many fields such as image denoising Chang et al. (2000); Li et al. (2012, 2018), image super-resolution Yang et al. (2010); Wang et al. (2012b); Gao et al. (2018), and image classification Liu et al. (2017); Jiang et al. (2013); Chang et al. (2016). A well-learned dictionary can yield a significant boost in classification accuracy. Therefore, DL based classification methods have become increasingly popular in recent years.
Specifically, two strategies have been proposed to successfully utilise the discriminative information: i) class specific dictionary learning, and ii) class shared dictionary learning. The first strategy learns a specific dictionary for each class, as in Wang et al. (2012a); Yang et al. (2014); Liu et al. (2016). The second strategy learns a shared dictionary for all classes. For example, Zhang and Li (2010) proposed the discriminative K-SVD (D-KSVD) algorithm, which directly adds the discriminative information into the objective function. Furthermore, Jiang et al. (2013) proposed the label consistent K-SVD (LC-KSVD) method, which adds a label consistency term into the objective function of D-KSVD. The motivation for adding this term is to encourage training samples from the same class to have similar sparse codes and those from different classes to have dissimilar sparse codes. Thus, the discriminative ability of the learned dictionary is effectively improved. However, the sparse regularization term in LC-KSVD is the ℓ0-norm, which leads to an NP-hard problem Natarajan (1995). Although greedy methods such as orthogonal matching pursuit (OMP) Tropp and Gilbert (2007) can help solve this problem to some extent, they usually find a suboptimal sparse solution rather than the optimal one. More specifically, greedy methods approximate the globally optimal solution by selecting basis vectors in increasing order of reconstruction error, up to the sparsity constraint factor. Thus, the initialized values are crucial. Hence, an ℓ0-norm based sparse constraint is not conducive to finding a global minimum and thereby obtaining the optimal sparse solution.
In this paper, we propose a novel dictionary learning algorithm named label embedded dictionary learning (LEDL). This method introduces an ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Thus, we can freely select the basis vectors for linear fitting to obtain the optimal sparse solution. In addition, ℓ1-norm sparse representation is widely used in many fields, so our proposed LEDL method can be extended and applied easily. We show the difference between our proposed LEDL and LC-KSVD in Figure 1. We adopt the alternating direction method of multipliers (ADMM) Boyd et al. (2011) framework and the blockwise coordinate descent (BCD) Liu et al. (2014b) algorithm to optimize LEDL. Our main contributions are as follows.
We propose a novel dictionary learning algorithm named label embedded dictionary learning, which introduces an ℓ1-norm regularization term as the sparse constraint. The ℓ1-norm sparse constraint makes it easier to find the optimal sparse solution.
We verify the superior performance of our method on six benchmark datasets.
The rest of the paper is organized as follows. Section 2 reviews two conventional methods, SRC and LC-KSVD. Section 3.1 presents the LEDL method for image classification. The optimization approach and its convergence are elaborated in Section 3.2. Section 4 shows experimental results on six well-known datasets. Finally, we conclude the paper in Section 5.
2 Related Work
In this section, we overview two related algorithms, including sparse representation based classification (SRC) and label consistent K-SVD (LC-KSVD).
2.1 Sparse representation based classification (SRC)
SRC was proposed by Wright et al. (2009). Assume that we have C classes of training samples, denoted by X = [X_1, X_2, ..., X_C], where X_i is the training sample matrix of class i. Each column of the matrix X_i is a training sample feature from the i-th class. The whole training sample matrix can be denoted as X ∈ R^{d×n}, where d represents the dimension of the sample features and n is the number of training samples. Supposing that y ∈ R^d is a testing sample vector, the sparse representation algorithm aims to solve the following objective function:

min_s ||y − Xs||_2^2 + λ||s||_1, (1)
where λ is the regularization parameter that controls the tradeoff between fitting goodness and sparseness. Sparse representation based classification then finds the minimum residual error over the classes:
where identity(y) represents the predicted label of y, and s_i is the sparse code associated with class i. The procedure of SRC is shown in Algorithm 1. Obviously, the residual of class i is associated with only a few images in that class.
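To make the two steps above concrete, the following is a minimal, hedged sketch of SRC in NumPy (the names and the ISTA solver for the ℓ1 problem are our own choices, not from the original paper; atoms are assumed to be roughly normalized):

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding, the proximal operator of the l1-norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_classify(X, labels, y, lam=0.01, n_iter=500):
    """Sketch of SRC: solve min_s ||y - X s||_2^2 + lam * ||s||_1 by ISTA,
    then assign y to the class with the smallest class-wise residual."""
    L = 2 * np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    s = np.zeros(X.shape[1])
    for _ in range(n_iter):                  # ISTA iterations
        grad = 2 * X.T @ (X @ s - y)
        s = soft_threshold(s - grad / L, lam / L)
    classes = np.unique(labels)
    # residual of each class uses only that class's columns and coefficients
    residuals = [np.linalg.norm(y - X[:, labels == c] @ s[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

A test sample is thus explained almost entirely by the training columns of its own class, which is exactly why the class-wise residual is a useful classification score.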
2.2 Label Consistent K-SVD (LC-KSVD)
Jiang et al. (2013) proposed LC-KSVD to encourage similarity among the representations of samples belonging to the same class in D-KSVD. The authors combine the discriminative sparse codes error with the reconstruction error and the classification error to form a unified objective function, given the discriminative sparse codes matrix Q, the label matrix H, and the training sample matrix X. The objective function is defined as follows:
where T is the sparsity constraint factor, making sure that each sparse code has no more than T nonzero entries. The dictionary is D ∈ R^{d×K}, where K is the number of atoms in the dictionary, and S is the sparse codes of the training sample matrix X.
W is a classifier learned from the given label matrix H. We hope that, for a sample with sparse code s, Ws can return the most probable class the sample belongs to.
A is a linear transformation matrix that maps the sparse codes to the discriminative sparse codes. α and β are the regularization parameters balancing the contributions of the discriminative sparse codes error and the classification error to the overall objective, respectively. The algorithm is shown in Algorithm 2. Here, we denote t as the iteration number, and M^(t) means the value of a matrix M after the t-th iteration.
While the LC-KSVD algorithm exploits an ℓ0-norm regularization term to control sparseness, it is difficult to find the optimal sparse solution for general image recognition. The reason is that LC-KSVD uses the OMP method to optimise the objective function, which usually obtains a suboptimal sparse solution unless the initialized values happen to be ideal.
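For reference, the greedy selection described above can be sketched as a minimal OMP routine (NumPy assumed; columns of D are assumed unit-norm, a simplification over production implementations):

```python
import numpy as np

def omp(D, y, k):
    """Minimal orthogonal matching pursuit sketch: greedily pick at most k
    atoms of D and least-squares refit y on the chosen support each round."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # refit coefficients on the support (orthogonal projection step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s
```

Because each atom choice is made greedily from the current residual, an unlucky early pick cannot be undone, which is the suboptimality the text refers to.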
3 Label Embedded Dictionary Learning

In this section, we first present our proposed label embedded dictionary learning algorithm. Then we elaborate the optimization of the objective function.
3.1 Proposed Label Embedded Dictionary Learning (LEDL)
Motivated by the fact that the optimal sparse solution cannot easily be found under an ℓ0-norm regularization term, we propose a novel dictionary learning method named label embedded dictionary learning (LEDL) for image classification. This method introduces an ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Thus, we can freely select the basis vectors for linear fitting to obtain the optimal sparse solution. The objective function is as follows:
where s_i denotes the i-th column vector of matrix S. The ℓ1-norm regularization term is utilized to enforce sparsity, and λ is the regularization parameter, which plays the same role as in Equation (1).
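In a commonly used notation (assumed here: X the training samples, D the dictionary with columns d_j, S the sparse codes with columns s_i, Q the discriminative sparse codes, A the linear transformation, H the label matrix, W the classifier), the LEDL objective can be sketched as:

```latex
\min_{D, A, W, S}\; \|X - DS\|_F^2 + \alpha \|Q - AS\|_F^2 + \beta \|H - WS\|_F^2
  + \lambda \sum_i \|s_i\|_1
\quad \text{s.t.}\;\; \|d_j\|_2^2 \le 1,\;\; \|a_j\|_2^2 \le 1,\;\; \|w_j\|_2^2 \le 1,\;\; \forall j.
```

Replacing the ℓ0 constraint of LC-KSVD with this ℓ1 term makes each subproblem convex, which is what the alternating optimization exploits.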
3.2 Optimization of Objective Function
Although the optimization problem (4) is not jointly convex in D, A, W, and S, it is separately convex in S (with D, A, W fixed), D (with S, A, W fixed), A (with S, D, W fixed), and W (with S, D, A fixed). To this end, the optimization problem can be divided into four subproblems: finding the sparse codes (S) and learning the bases (D, A, W). Here, we employ the alternating direction method of multipliers (ADMM) Boyd et al. (2011) framework to solve the first subproblem and the blockwise coordinate descent (BCD) Liu et al. (2014b) algorithm for the remaining subproblems. The complete process of LEDL is shown in Figure 2.
3.2.1 ADMM for finding sparse codes
While fixing D, A, and W, we introduce an auxiliary variable Z and reformulate the LEDL problem into a linear equality-constrained problem in which each iteration has a closed-form solution. The objective function is as follows:
Under the ADMM framework with D, A, and W fixed, the augmented Lagrangian function of problem (5) is written as:
where C is the augmented Lagrangian multiplier and ρ is the penalty parameter.
After fixing D, A, and W, we initialize S, Z, and C to be zero matrices.
Equation (6) can be solved as follows:
Updating S while fixing Z, C, D, A, and W:
The closed-form solution of S is
Updating Z while fixing S, C, D, A, and W:
The closed-form solution of Z is
where I is the identity matrix and 0 is the zero matrix.
Updating the Lagrangian multiplier C:
where the coefficient in Equation (11) is the step size of the gradient descent (GD) update, which in principle has no relationship with the penalty parameter ρ in Equation (6). In order to make better use of the ADMM framework, the step size in Equation (11) is set to ρ.
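Under the assumption that the S-update uses only the reconstruction term (in LEDL the label-consistency and classifier terms enter the data fit in the same way and are omitted here for clarity), the three updates above can be sketched in NumPy as:

```python
import numpy as np

def admm_sparse_codes(D, X, lam=0.2, rho=1.0, n_iter=200):
    """Hedged ADMM sketch for the sparse-coding subproblem
        min_S ||X - D S||_F^2 + lam * ||S||_1
    via the splitting S = Z, with multiplier C and penalty rho."""
    k, n = D.shape[1], X.shape[1]
    S, Z, C = (np.zeros((k, n)) for _ in range(3))
    G = np.linalg.inv(2 * D.T @ D + rho * np.eye(k))  # cached quadratic solve
    DtX2 = 2 * D.T @ X
    for _ in range(n_iter):
        S = G @ (DtX2 + rho * Z - C)                  # closed-form S-update
        V = S + C / rho
        Z = np.sign(V) * np.maximum(np.abs(V) - lam / rho, 0.0)  # soft threshold
        C = C + rho * (S - Z)                         # multiplier (dual) ascent
    return Z
```

The quadratic S-step and the soft-thresholding Z-step are each exact minimizers, which is why every ADMM iteration has a closed-form solution as stated above.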
3.2.2 BCD for learning bases
Without considering the sparseness regularization term in Equation (5), the constrained minimization problem of (4) with respect to a single column of D, A, or W has a closed-form solution, which can be obtained by the BCD method. The objective function can be rewritten as follows:
We initialize D, A, and W to be random matrices and normalize their columns, respectively. After that, we use the BCD method to update D, A, and W.
Updating D while fixing S, Z, C, A, and W:
The closed-form solution of a single column of D is
where s^j denotes the j-th row vector of matrix S.
Updating A while fixing S, Z, C, D, and W:
The closed-form solution of a single column of A is
Updating W while fixing S, Z, C, D, and A:
The closed-form solution of a single column of W is
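As one illustration of the column-wise scheme, here is a hedged NumPy sketch of a single BCD sweep over the dictionary atoms (the same pattern applies to A and W; the variable names and the unit-ball projection are our reading of the constraint, not code from the paper):

```python
import numpy as np

def bcd_update_dictionary(D, S, X):
    """Hedged BCD sketch: with sparse codes S fixed, sweep the columns of D
    and minimize ||X - D S||_F^2 over each column d_j subject to
    ||d_j||_2^2 <= 1, which has a closed-form solution."""
    XSt, SSt = X @ S.T, S @ S.T
    for j in range(D.shape[1]):
        if SSt[j, j] == 0:
            continue  # atom j is unused by the codes; leave it unchanged
        # unconstrained least-squares minimizer for column j:
        # (X S^T e_j - sum_{i != j} d_i (S S^T)_{ij}) / (S S^T)_{jj}
        dj = (XSt[:, j] - D @ SSt[:, j] + D[:, j] * SSt[j, j]) / SSt[j, j]
        nrm = np.linalg.norm(dj)
        if nrm > 1:
            dj = dj / nrm  # project back onto the unit ball
        D[:, j] = dj       # Gauss-Seidel: later columns see this update
    return D
```

Since each column update is an exact minimizer over that column, every sweep can only decrease the reconstruction objective, which is the monotonicity used in the convergence analysis below.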
3.2.3 Convergence Analysis
Assume that the value of the objective function after the t-th iteration is denoted L^(t). Since the minimum point of each subproblem is obtained exactly by the ADMM and BCD methods, each update monotonically decreases the corresponding objective function. Considering that the objective function is obviously bounded below and satisfies Equation (20), it converges; in practice it does so after about 100 iterations. Figure 3 shows the convergence curves of the proposed LEDL algorithm on four well-known datasets. The results demonstrate that our proposed LEDL algorithm has fast convergence and low complexity.
3.2.4 Overall Algorithm
The overall updating procedure of the proposed LEDL algorithm is summarized in Algorithm 3. Here, T denotes the maximum number of iterations, 1 is a square matrix with all elements equal to 1, and ⊙ indicates the element-wise product. By updating S, Z, C, D, A, and W alternately, the sparse codes are obtained and the corresponding bases are learned.
In the testing stage, the constraint terms are based on the ℓ1-norm sparse constraint. Here, we exploit the learned dictionary D to fit a testing sample y and obtain its sparse code. Then, we use the trained classifier W to predict the label of y by finding the largest entry of the classifier response.
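The testing stage can be sketched as follows (a hedged sketch in NumPy: the ISTA solver and the argmax readout are our choices for illustration, under the assumption that W maps sparse codes to per-class scores):

```python
import numpy as np

def ledl_predict(D, W, y, lam=0.01, n_iter=300):
    """Hedged sketch of the testing stage: code the test sample y over the
    learned dictionary D under an l1 penalty (plain ISTA here), then let the
    learned linear classifier W pick the class with the largest score."""
    L = 2 * np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        v = s - 2 * D.T @ (D @ s - y) / L  # gradient step on ||y - D s||_2^2
        s = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return int(np.argmax(W @ s))           # row index = predicted class
```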
4 Experimental results
In this section, we utilize several datasets (Extended YaleB Georghiades et al. (2001), CMU PIE Sim et al. (2002), UC Merced Land Use Yang and Newsam (2010), AID Xia et al. (2017), Caltech101 Fei-Fei et al. (2007), and USPS Hull (1994)) to evaluate the performance of our algorithm and compare it with other state-of-the-art methods such as SRC Wright et al. (2009), LC-KSVD Jiang et al. (2013), CRC Zhang et al. (2011), and CSDL-SRC Liu et al. (2016). In the following subsections, we first give the experimental settings. Then the experiments on these six datasets are analyzed. Finally, some discussions are given.
4.1 Experimental settings
For all the datasets, in order to eliminate randomness, we carry out every experiment 8 times and report the mean of the classification rates. We randomly select 5 samples per class for training in all the experiments. For the Extended YaleB and CMU PIE datasets, each image is cropped to , vectorized into a column vector, and normalized to form the raw normalized features. For the UC Merced Land Use and AID datasets, we use a ResNet model He et al. (2016) to extract the features. Specifically, the layer is utilized to extract 2048-dimensional vectors for them. For the Caltech101 dataset, we use the layer of the ResNet model and spatial pyramid matching (SPM) with two layers (the second layer includes five parts: left upper, right upper, left lower, right lower, and center) to extract 12288-dimensional vectors. Finally, each image in the USPS dataset is resized into vectors.
For convenience, the dictionary size (K) is fixed to twice the number of training samples. In addition, we set and initialize , then decrease in each iteration. Moreover, three other parameters (α, β, and λ) need to be adjusted to achieve the highest classification rates. The details are shown in the following subsections.
4.2 Extended YaleB Dataset
The Extended YaleB dataset contains face images from 38 individuals, each having 64 frontal images under varying illumination conditions. Figure 4 shows some images of the dataset.
In addition, we set , , in our experiment. The experimental results are summarized in Table 1. We can see that our proposed LEDL algorithm achieves superior performance to other classical classification methods by an improvement of at least . Compared with the ℓ0-norm sparsity constraint based dictionary learning algorithm LC-KSVD, our proposed ℓ1-norm sparsity constraint based LEDL algorithm exceeds it by . The reason for the large improvement over LC-KSVD is that the ℓ0-norm sparsity constraint leads to an NP-hard problem, which is not conducive to finding the optimal sparse solution for the dictionary. In order to further illustrate the performance of our method, we choose the samples of the first 20 classes as a sub-dataset and show the confusion matrices in Figure 5. As can be seen, our method achieves higher classification rates than LC-KSVD in all the chosen classes. Especially in class 1, class 2, class 3, class 10, and class 16, LEDL achieves at least performance gain over LC-KSVD.
4.3 CMU PIE Dataset
The CMU PIE dataset consists of images of 68 individuals under 43 different illumination conditions. Each individual is captured under 13 different poses and with 4 different expressions. In Figure 6, we list several samples from this dataset.
The comparison results are shown in Table 1; we can see that our proposed LEDL algorithm outperforms other well-known methods by an improvement of at least . Notably, LEDL exceeds LC-KSVD on this dataset. The optimal parameters are , , .
4.4 UC Merced Land Use Dataset
The UC Merced Land Use dataset is widely used for aerial image classification. It consists of land-use images from classes in total. Some samples are shown in Figure 7.
In Table 1, we can see that our proposed LEDL algorithm is merely comparable with CRC but still outperforms the other methods. Compared with LC-KSVD, LEDL achieves higher accuracy by an improvement of . Here, we set , , to obtain the optimal result. The confusion matrices of the UC Merced Land Use dataset for all classes are shown in Figure 8. We can see that, in all classes except tennis, LEDL achieves better results than LC-KSVD. In several classes such as building, freeway, river, and sparse, our method achieves superior performance to LC-KSVD by an improvement of at least .
4.5 AID Dataset
The AID dataset is a new large-scale aerial image dataset whose images are collected from Google Earth imagery. It contains images from 30 aerial scene types. In Figure 9, we show several images of this dataset.
Table 1 illustrates the effectiveness of LEDL for classifying these images. We adjust , , to achieve the highest accuracy among the five algorithms, an improvement of at least . Compared with LC-KSVD, LEDL achieves an improvement of .
4.6 Caltech101 Dataset
The Caltech101 dataset includes images of classes in total, consisting of cars, faces, flowers, and so on. Each category has about 40 to 800 images, and most categories have about 50 images. In Figure 10, we show several images of this dataset.
4.7 USPS Dataset
The USPS dataset contains handwritten digit images from 0 to 9 which come from the U.S. Postal System. We list several samples from this dataset in Figure 12.
Table 1 shows the comparison results of the five algorithms, and it is easy to see that our proposed LEDL algorithm outperforms other well-known methods by an improvement of at least . Our proposed method achieves an improvement of over the LC-KSVD method. The optimal parameters are , , .
From the experimental results on six datasets, we can obtain the following conclusions.
(1) All the above experimental results illustrate that our proposed LEDL algorithm is an effective and general classifier, which achieves superior performance to state-of-the-art methods on various datasets, especially on the Extended YaleB, CMU PIE, and USPS datasets.
(2) Our proposed LEDL method introduces an ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Compared with the LC-KSVD algorithm, LEDL is consistently better on all six datasets. Moreover, on the two face datasets and the USPS dataset, our method exceeds LC-KSVD by nearly .
(3) The confusion matrices of LEDL and LC-KSVD on three datasets are shown in Figures 5, 8, and 11. They clearly illustrate the superiority of our method. Specifically, for the Extended YaleB dataset, our method achieves outstanding performance in five classes (class 1, class 2, class 3, class 10, and class 16). For the UC Merced dataset, LEDL achieves better classification rates than LC-KSVD in all classes except the tennis class. For the Caltech101 dataset, our proposed LEDL method performs much better than LC-KSVD in some classes such as beaver, binocular, brontosaurus, cannon, and ceiling fan.
In this paper, we propose a Label Embedded Dictionary Learning (LEDL) algorithm. Specifically, we introduce an ℓ1-norm regularization term to replace the ℓ0-norm regularization term of LC-KSVD, which helps to avoid the NP-hard problem and find the optimal solution more easily. Furthermore, we adopt the ADMM algorithm to solve the ℓ1-norm optimization problem and the BCD algorithm to update the dictionary. Extensive experiments on six well-known benchmark datasets demonstrate the superiority of our proposed LEDL algorithm.
This research was funded by the National Natural Science Foundation of China (Grant No. 61402535, No. 61671480), the Natural Science Foundation for Youths of Shandong Province, China (Grant No. ZR2014FQ001), the Natural Science Foundation of Shandong Province, China(Grant No. ZR2018MF017), Qingdao Science and Technology Project (No. 17-1-1-8-jch), the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China) (Grant No. 16CX02060A, 17CX02027A), and the Innovation Project for Graduate Students of China University of Petroleum(East China) (No. YCX2018063).
- Aharon et al. (2006) Aharon, M., Elad, M., Bruckstein, A., et al., 2006. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing 54 (11), 4311–4322.
- Boyd et al. (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 (1), 1–122.
- Chan et al. (2015) Chan, T.-H., Jia, K., Gao, S., Lu, J., Zeng, Z., Ma, Y., 2015. Pcanet: A simple deep learning baseline for image classification? IEEE Transactions on image processing 24 (12), 5017–5032.
- Chang et al. (2016) Chang, H., Yang, M., Yang, J., 2016. Learning a structure adaptive dictionary for sparse representation based classification. Neurocomputing 190 (19), 124–131.
- Chang et al. (2000) Chang, S. G., Yu, B., Vetterli, M., 2000. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on image processing 9 (9), 1532–1546.
- Fei-Fei et al. (2007) Fei-Fei, L., Fergus, R., Perona, P., 2007. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer vision and Image understanding 106 (1), 59–70.
- Gao et al. (2018) Gao, D., Hu, Z., Ye, R., 2018. Self-dictionary regression for hyperspectral image super-resolution. Remote Sensing 10 (10), 1574–1596.
- Georghiades et al. (2001) Georghiades, A. S., Belhumeur, P. N., Kriegman, D. J., 2001. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on pattern analysis and machine intelligence 23 (6), 643–660.
- Hao et al. (2017) Hao, S., Wang, W., Yan, Y., Bruzzone, L., 2017. Class-wise dictionary learning for hyperspectral image classification. Neurocomputing 220 (12), 121–129.
- He et al. (2016) He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE conference on. IEEE, pp. 770–778.
- Hull (1994) Hull, J. J., 1994. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence 16 (5), 550–554.
- Ji et al. (2014) Ji, R., Gao, Y., Hong, R., Liu, Q., Tao, D., Li, X., 2014. Spectral-spatial constraint hyperspectral image classification. IEEE Transactions on geoscience and remote sensing 52 (3), 1811–1824.
- Jiang et al. (2013) Jiang, Z., Lin, Z., Davis, L. S., 2013. Label consistent k-svd: Learning a discriminative dictionary for recognition. IEEE Transactions on pattern analysis and machine intelligence 35 (11), 2651–2664.
- Li et al. (2018) Li, H., He, X., Tao, D., Tang, Y., Wang, R., 2018. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognition 79, 130–146.
- Li et al. (2012) Li, S., Fang, L., Yin, H., 2012. An efficient dictionary learning algorithm and its application to 3-d medical image denoising. IEEE Transactions on biomedical engineering 59 (2), 417–427.
- Liu et al. (2017) Liu, B.-D., Gui, L., Wang, Y., Wang, Y.-X., Shen, B., Li, X., Wang, Y.-J., 2017. Class specific centralized dictionary learning for face recognition. Multimedia Tools and Applications 76 (3), 4159–4177.
- Liu et al. (2016) Liu, B.-D., Shen, B., Gui, L., Wang, Y.-X., Li, X., Yan, F., Wang, Y.-J., 2016. Face recognition using class specific dictionary learning for sparse representation and collaborative representation. Neurocomputing 204 (5), 198–210.
- Liu et al. (2014a) Liu, B.-D., Shen, B., Wang, Y.-X., 2014a. Class specific dictionary learning for face recognition. In: Security, Pattern Analysis, and Cybernetics (SPAC), 2014 IEEE International conference on. IEEE, pp. 229–234.
- Liu et al. (2014b) Liu, B.-D., Wang, Y.-X., Shen, B., Zhang, Y.-J., Wang, Y.-J., 2014b. Blockwise coordinate descent schemes for sparse representation. In: Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International conference on. IEEE, pp. 5267–5271.
- Mallat and Zhang (1993) Mallat, S. G., Zhang, Z., 1993. Matching pursuit with time-frequency dictionaries. IEEE Transactions on signal processing 41 (12), 3397–3415.
- Nakazawa and Kulkarni (2018)
- Natarajan (1995) Natarajan, B. K., 1995. Sparse approximate solutions to linear systems. SIAM journal on computing 24 (2), 227–234.
- Olshausen and Field (1996) Olshausen, B. A., Field, D. J., 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381 (6583), 607–609.
- Olshausen and Field (1997) Olshausen, B. A., Field, D. J., 1997. Sparse coding with an overcomplete basis set: a strategy employed by v1? Vision Research 37 (23), 3311–3325.
- Shi et al. (2018) Shi, H., Zhang, Y., Zhang, Z., Ma, N., Zhao, X., Gao, Y., Sun, J., 2018. Hypergraph-induced convolutional networks for visual classification. IEEE Transactions on neural networks and learning systems.
- Sim et al. (2002) Sim, T., Baker, S., Bsat, M., 2002. The cmu pose, illumination, and expression (pie) database. In: Automatic Face and Gesture Recognition (FG), 2002 IEEE International conference on. IEEE, pp. 53–58.
- Song et al. (2018) Song, Y., Liu, Y., Gao, Q., Gao, X., Nie, F., Cui, R., 2018. Euler label consistent k-svd for image classification and action recognition. Neurocomputing 310 (8), 277–286.
- Tropp and Gilbert (2007) Tropp, J. A., Gilbert, A. C., 2007. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory 53 (12), 4655–4666.
- Wang et al. (2012a) Wang, H., Yuan, C., Hu, W., Sun, C., 2012a. Supervised class-specific dictionary learning for sparse modeling in action recognition. Pattern Recognition 45 (11), 3902–3911.
- Wang et al. (2018) Wang, N., Zhao, X., Jiang, Y., Gao, Y., BNRist, K., 2018. Iterative metric learning for imbalance data classification. In: International Joint Conference on Artificial Intelligence (IJCAI), 2018 Morgan Kaufmann conference on. Morgan Kaufmann, pp. 2805–2811.
- Wang et al. (2012b) Wang, S., Zhang, L., Liang, Y., Pan, Q., 2012b. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE conference on. IEEE, pp. 2216–2223.
- Wright et al. (2009) Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., Ma, Y., 2009. Robust face recognition via sparse representation. IEEE Transactions on pattern analysis and machine intelligence 31 (2), 210–227.
- Xia et al. (2017) Xia, G.-S., Hu, J., Hu, F., Shi, B., Bai, X., Zhong, Y., Zhang, L., Lu, X., 2017. Aid: A benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on geoscience and remote sensing 55 (7), 3965–3981.
- Xu et al. (2019) Xu, J., An, W., Zhang, L., Zhang, D., 2019. Sparse, collaborative, or nonnegative representation: Which helps pattern classification? Pattern Recognition 88, 679–688.
- Yang et al. (2010) Yang, J., Wright, J., Huang, T. S., Ma, Y., 2010. Image super-resolution via sparse representation. IEEE Transactions on image processing 19 (11), 2861–2873.
- Yang et al. (2009) Yang, J., Yu, K., Gong, Y., Huang, T., 2009. Linear spatial pyramid matching using sparse coding for image classification. In: Computer Vision and Pattern Recognition (CVPR), 2009 IEEE conference on. IEEE, pp. 1794–1801.
- Yang et al. (2017) Yang, M., Chang, H., Luo, W., 2017. Discriminative analysis-synthesis dictionary learning for image classification. Neurocomputing 219 (5), 404–411.
- Yang et al. (2014) Yang, M., Zhang, L., Feng, X., Zhang, D., 2014. Sparse representation based fisher discrimination dictionary learning for image classification. International Journal of Computer Vision 109 (3), 209–232.
- Yang and Newsam (2010) Yang, Y., Newsam, S., 2010. Bag-of-visual-words and spatial extensions for land-use classification. In: Advances in Geographic Information Systems (GIS), 2010 ACM International conference on. ACM, pp. 270–279.
- Yu et al. (2012a) Yu, J., Feng, L., Seah, H. S., Li, C., Lin, Z., 2012a. Image classification by multimodal subspace learning. Pattern Recognition Letters 33 (9), 1196–1204.
- Yu et al. (2014) Yu, J., Rui, Y., Tang, Y. Y., Tao, D., 2014. High-order distance-based multiview stochastic learning in image classification. IEEE Transactions on cybernetics 44 (12), 2431–2442.
- Yu et al. (2013) Yu, J., Tao, D., Rui, Y., Cheng, J., 2013. Pairwise constraints based multiview features fusion for scene classification. Pattern Recognition 46 (2), 483–496.
- Yu et al. (2012b) Yu, J., Tao, D., Wang, M., 2012b. Adaptive hypergraph learning and its application in image classification. IEEE Transactions on image processing 21 (7), 3262–3272.
- Yuan et al. (2016) Yuan, L., Liu, W., Li, Y., 2016. Non-negative dictionary based sparse representation classification for ear recognition with occlusion. Neurocomputing 171 (1), 540–550.
- Zhang et al. (2011) Zhang, L., Yang, M., Feng, X., 2011. Sparse representation or collaborative representation: Which helps face recognition? In: Computer Vision (ICCV), 2011 IEEE International conference on. IEEE, pp. 471–478.
- Zhang and Li (2010) Zhang, Q., Li, B., 2010. Discriminative k-svd for dictionary learning in face recognition. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE conference on. IEEE, pp. 2691–2698.