Label Embedded Dictionary Learning for Image Classification

03/07/2019 · by Shuai Shao, et al. · China University of Petroleum

Recently, the label consistent K-SVD (LC-KSVD) algorithm has been successfully applied to image classification. The objective function of LC-KSVD consists of the reconstruction error, the classification error and the discriminative sparse codes error, together with an ℓ0-norm sparse regularization term. The ℓ0-norm, however, leads to an NP-hard problem. Although some methods such as orthogonal matching pursuit can help solve this problem to some extent, it is quite difficult to find the optimal sparse solution. To overcome this limitation, we propose a label embedded dictionary learning (LEDL) method that utilises the ℓ1-norm as the sparse regularization term, so that the hard-to-optimize problem is avoided by solving a convex optimization problem instead. The alternating direction method of multipliers and the blockwise coordinate descent algorithm are then used to optimize the corresponding objective function. Extensive experimental results on six benchmark datasets illustrate that the proposed algorithm achieves superior performance compared to several conventional classification algorithms.


1 Introduction

In recent years, image classification has been a classical issue in computer vision. Many successful algorithms

Yu et al. (2012b); Shi et al. (2018); Wright et al. (2009); Yu et al. (2014); Song et al. (2018); Yu et al. (2013); Wang et al. (2018); Yu et al. (2012a); Yang et al. (2009, 2017); Liu et al. (2014a, 2017); Jiang et al. (2013); Hao et al. (2017); Chan et al. (2015); Nakazawa and Kulkarni (2018); Ji et al. (2014); Xu et al. (2019); Yuan et al. (2016) have been proposed to solve this problem. Among these algorithms, one category that contributes a great deal to image classification is the sparse representation based method.

Figure 1: The scheme of LEDL is on the right while that of LC-KSVD is on the left. The difference between the two methods is the sparse regularization term: LEDL uses the ℓ1-norm regularization term while LC-KSVD uses the ℓ0-norm regularization term. Compared with the ℓ0-norm, the sparsity constraint factor of the ℓ1-norm is not fixed, so the basis vectors can be selected freely for linear fitting. Thus, our proposed LEDL method can obtain smaller errors than LC-KSVD.

Sparse representation is capable of expressing an input sample feature as a linear combination of atoms in an overcomplete basis set. Wright et al. (2009) proposed the sparse representation based classification (SRC) algorithm, which uses the ℓ1-norm regularization term and achieves impressive performance; SRC is the most representative of the sparse representation based methods. However, in traditional sparse representation based methods, training sample features are exploited directly without considering the discriminative information that is crucial in real applications. That is to say, sparse representation based methods can gain better performance if the discriminative information is properly harnessed.

To handle this problem, dictionary learning (DL) methods have been introduced to preprocess the training sample features before classification. DL is a generative model for sparse representation whose concept was first proposed by Mallat and Zhang (1993). A few years later, Olshausen and Field (1996, 1997) proposed the application of DL to natural images, and it has since been widely used in many fields such as image denoising Chang et al. (2000); Li et al. (2012, 2018), image super-resolution Yang et al. (2010); Wang et al. (2012b); Gao et al. (2018), and image classification Liu et al. (2017); Jiang et al. (2013); Chang et al. (2016). A well learned dictionary can bring a significant boost in classification accuracy. Therefore, DL based classification methods have become more and more popular in recent years.

Specifically, two strategies have been proposed to successfully utilise the discriminative information: i) class specific dictionary learning and ii) class shared dictionary learning. The first strategy is to learn specific dictionaries for each class, as in Wang et al. (2012a); Yang et al. (2014); Liu et al. (2016). The second strategy is to learn a shared dictionary for all classes. For example, Zhang and Li (2010) proposed the discriminative K-SVD (D-KSVD) algorithm, which directly adds the discriminative information into the objective function. Furthermore, Jiang et al. (2013) proposed the label consistent K-SVD (LC-KSVD) method, which adds a label consistency term into the objective function of D-KSVD. The motivation for adding this term is to encourage training samples from the same class to have similar sparse codes and those from different classes to have dissimilar sparse codes. Thus, the discriminative ability of the learned dictionary is effectively improved. However, the sparse regularization term in LC-KSVD is the ℓ0-norm, which leads to an NP-hard problem Natarajan (1995). Although some greedy methods such as orthogonal matching pursuit (OMP) Tropp and Gilbert (2007) can help solve this problem to some extent, they usually find a suboptimal sparse solution instead of the optimal one. More specifically, greedy methods approximate the global optimum by selecting basis vectors in order of reconstruction error, from small to large, until the sparsity constraint factor is reached. Thus, the initialized values are crucial. To this end, the ℓ0-norm based sparse constraint is not conducive to finding a global minimum and obtaining the optimal sparse solution.

In this paper, we propose a novel dictionary learning algorithm named label embedded dictionary learning (LEDL). This method introduces the ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Thus, we can freely select the basis vectors for linear fitting to obtain the optimal sparse solution. In addition, ℓ1-norm sparse representation is widely used in many fields, so our proposed LEDL method can be extended and applied easily. We show the difference between our proposed LEDL and LC-KSVD in Figure 1. We adopt the alternating direction method of multipliers (ADMM) Boyd et al. (2011) framework and the blockwise coordinate descent (BCD) Liu et al. (2014b) algorithm to optimize LEDL. Our main contributions are threefold.

  • We propose a novel dictionary learning algorithm named label embedded dictionary learning, which introduces the ℓ1-norm regularization term as the sparse constraint. The ℓ1-norm sparse constraint helps to find the optimal sparse solution easily.

  • We propose to utilize the alternating direction method of multipliers (ADMM) Boyd et al. (2011) algorithm and the blockwise coordinate descent (BCD) Liu et al. (2014b) algorithm to optimize the dictionary learning task.

  • We verify the superior performance of our method on six benchmark datasets.

The rest of the paper is organized as follows. Section 2 reviews two conventional methods, SRC and LC-KSVD. Section 3.1 presents the LEDL method for image classification. The optimization approach and its convergence are elaborated in Section 3.2. Section 4 shows experimental results on six well-known datasets. Finally, we conclude this paper in Section 5.

2 Related Work

In this section, we overview two related algorithms, including sparse representation based classification (SRC) and label consistent K-SVD (LC-KSVD).

2.1 Sparse representation based classification (SRC)

SRC was proposed by Wright et al. (2009). Assume that we have C classes of training samples, denoted by X = [X_1, X_2, \dots, X_C], where X_i is the training sample matrix of class i. Each column of X_i is a training sample feature from the i-th class. The whole training sample matrix can be denoted as X \in \mathbb{R}^{d \times n}, where d represents the dimension of the sample features and n is the number of training samples. Supposing that y is a testing sample vector, the sparse representation algorithm aims to solve the following objective function:

\hat{s} = \arg\min_{s} \; \|y - Xs\|_2^2 + \lambda\|s\|_1    (1)

where λ is the regularization parameter that controls the tradeoff between fitting goodness and sparseness. Sparse representation based classification then finds the class with the minimum residual error:

identity(y) = \arg\min_{i} \; \|y - X_i \hat{s}_i\|_2    (2)

where identity(y) represents the predicted label of y and \hat{s}_i is the part of the sparse code associated with class i. The procedure of SRC is shown in Algorithm 1. Obviously, the residual of class i is associated with only a few images in that class.

Input: training sample matrix X, testing sample y, regularization parameter λ
Output: identity(y)

1:   Code y with the dictionary X via ℓ1-minimization to obtain the sparse code \hat{s}.
2:   
3:   for i = 1; i ≤ C; i++ do
4:       Compute the residual r_i(y) = \|y - X_i \hat{s}_i\|_2
5:   end for
6:   identity(y) = \arg\min_i r_i(y)
7:   return  identity(y)
Algorithm 1 Sparse representation based classification
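To make Algorithm 1 concrete, the following is a minimal NumPy sketch of the SRC pipeline. The ℓ1-coding step is solved with a simple ISTA loop; the function names, the step size and the iteration count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def l1_coding(X, y, lam=0.1, n_iter=500):
    """Solve min_s ||y - X s||_2^2 + lam * ||s||_1 with ISTA (illustrative)."""
    s = np.zeros(X.shape[1])
    L = 2.0 * np.linalg.norm(X, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ s - y)            # gradient of the quadratic term
        s = s - step * grad                       # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)  # soft-thresholding
    return s

def src_classify(X, labels, y, lam=0.1):
    """SRC: code y over all training samples, then pick the class with the smallest residual."""
    s = l1_coding(X, y, lam)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - X[:, labels == c] @ s[labels == c]) for c in classes]
    return classes[int(np.argmin(residuals))]
```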

2.2 Label Consistent K-SVD (LC-KSVD)

Input: , , , , , ,
Output: , , ,

1:   Compute the initial dictionary by combining class-specific dictionary items for each class using K-SVD Aharon et al. (2006);
2:   Compute for and using sparse coding;
3:   Compute using ;
4:   Compute using ;
5:   Solve Eq.(3); Use to initialize the dictionary.
6:   Normalize :
7:   return  , , ,
Algorithm 2 Label Consistent K-SVD

Jiang et al. (2013) proposed LC-KSVD, which extends D-KSVD by encouraging similarity among the representations of samples belonging to the same class. The authors combine the discriminative sparse codes error with the reconstruction error and the classification error to form a unified objective function. Given the discriminative sparse codes matrix Q, the label matrix H and the training sample matrix X, the objective function is defined as follows:

\min_{D, A, W, S} \; \|X - DS\|_F^2 + \alpha\|Q - AS\|_F^2 + \beta\|H - WS\|_F^2 \quad \text{s.t.} \; \forall i, \; \|s_i\|_0 \le T    (3)

where T is the sparsity constraint factor, making sure that each sparse code s_i has no more than T nonzero entries. The dictionary D \in \mathbb{R}^{d \times K}, where K is the number of atoms in the dictionary, and S is the sparse code matrix of the training sample matrix X. W is a classifier learned from the given label matrix H; we hope that W returns the most probable class a sample belongs to. A is a linear transformation matrix that maps the sparse codes S to the discriminative sparse codes Q. α and β are the regularization parameters balancing the discriminative sparse codes error and the classification contribution to the overall objective, respectively. The algorithm is shown in Algorithm 2. Here, we denote t as the iteration number, and a superscript t means the value of a matrix after the t-th iteration.

While the LC-KSVD algorithm exploits the ℓ0-norm regularization term to control the sparseness, it is difficult to find the optimal sparse solution for a general image recognition task. The reason is that LC-KSVD uses the OMP method to optimise the objective function, which usually obtains a suboptimal sparse solution unless the initialized values happen to be ideal.
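For reference, a compact sketch of OMP is given below; it illustrates how the greedy, ℓ0-style selection commits to one atom at a time and never revisits earlier choices, which is why a poor early pick can leave it stuck in a suboptimal support. This is a generic OMP implementation with illustrative parameter names, not the exact routine used by LC-KSVD.

```python
import numpy as np

def omp(D, y, T):
    """Greedy OMP: pick at most T atoms of D (columns assumed unit-norm) to approximate y."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(T):
        correlations = D.T @ residual                 # match every atom against the residual
        k = int(np.argmax(np.abs(correlations)))      # greedily keep the best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # refit on the chosen support
        residual = y - D[:, support] @ coef           # earlier picks are never reconsidered
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s
```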

3 Methodology

In this section, we first give our proposed label embedded dictionary learning algorithm. Then we elaborate the optimization of the objective function.

3.1 Proposed Label Embedded Dictionary Learning (LEDL)

Motivated by the fact that the optimal sparse solution cannot easily be found with the ℓ0-norm regularization term, we propose a novel dictionary learning method named label embedded dictionary learning (LEDL) for image classification. This method introduces the ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Thus, we can freely select the basis vectors for linear fitting to obtain the optimal sparse solution. The objective function is as follows:

\min_{D, A, W, S} \; \|X - DS\|_F^2 + \alpha\|Q - AS\|_F^2 + \beta\|H - WS\|_F^2 + \lambda\sum_{i}\|s_i\|_1 \quad \text{s.t.} \; \|d_k\|_2^2 \le 1, \; \|a_k\|_2^2 \le 1, \; \|w_k\|_2^2 \le 1, \; \forall k    (4)

where s_i denotes the i-th column vector of the sparse code matrix S. The ℓ1-norm regularization term is utilized to enforce sparsity, and λ is the regularization parameter, which plays the same role as in Equation (1).
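As a quick sanity check of the cost in Eq. (4), the helper below evaluates it for a given set of variables. It assumes the reconstructed form of the objective above (reconstruction, label-consistency and classification terms plus an ℓ1 penalty); the names X, D, S, Q, A, H, W, alpha, beta and lam simply mirror that assumed notation.

```python
import numpy as np

def ledl_objective(X, D, S, Q, A, H, W, alpha, beta, lam):
    """Evaluate the (assumed) LEDL cost of Eq. (4) for the current variables."""
    reconstruction = np.linalg.norm(X - D @ S, 'fro') ** 2     # ||X - DS||_F^2
    label_consistency = np.linalg.norm(Q - A @ S, 'fro') ** 2  # ||Q - AS||_F^2
    classification = np.linalg.norm(H - W @ S, 'fro') ** 2     # ||H - WS||_F^2
    sparsity = np.abs(S).sum()                                 # sum_i ||s_i||_1
    return reconstruction + alpha * label_consistency + beta * classification + lam * sparsity
```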

3.2 Optimization of Objective Function

Although the optimization problem (4) is not jointly convex in D, A, W and S, it is separately convex in S (with D, A, W fixed), in D (with A, W, S fixed), in A (with D, W, S fixed) and in W (with D, A, S fixed). To this end, the optimization problem can be decomposed into four optimization subproblems: finding the sparse codes (S) and learning the bases (D, A and W). Here, we employ the alternating direction method of multipliers (ADMM) Boyd et al. (2011) framework to solve the first subproblem and the blockwise coordinate descent (BCD) Liu et al. (2014b) algorithm for the remaining subproblems. The complete process of LEDL is shown in Figure 2.

Figure 2: The complete process of LEDL algorithm

3.2.1 ADMM for finding sparse codes

While fixing D, A and W, we introduce an auxiliary variable and reformulate the LEDL problem into a linear equality-constrained problem in which each subproblem of every iteration has a closed-form solution. The objective function is as follows:

(5)

While utilising the ADMM framework with D, A and W fixed, the Lagrangian function of problem (5) is rewritten as:

(6)

where is the augmented lagrangian multiplier and is the penalty parameter. After fixing , and , we initialize the , and to be zero matrices. Equation (6) can be solved as follows:

Updating while fixing , , , and :

(7)

The closed form solution of is

(8)

Updating while fixing , , , and

(9)

The closed form solution of Z is

(10)

where I is the identity matrix and O is the zero matrix.


Updating the Lagrangian multiplier

(11)

where the step size in Equation (11) comes from the gradient descent (GD) method and, in principle, has no relationship with the penalty parameter in Equation (6). In order to make better use of the ADMM framework, the step size in Equation (11) is set to the penalty parameter, so that the update can be rewritten as:

(12)
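The updates in Eqs. (7)-(12) can be assembled into a sparse-coding routine along the following lines. The sketch stacks the three quadratic terms into a single least-squares term and runs a standard scaled-form ADMM for the resulting ℓ1 problem; the penalty parameter rho, the iteration count and the stacking trick are assumptions of this sketch and need not match the exact update order of the paper.

```python
import numpy as np

def admm_sparse_codes(X, Q, H, D, A, W, alpha, beta, lam, rho=1.0, n_iter=100):
    """ADMM for the sparse-coding subproblem with D, A and W fixed (scaled-form sketch)."""
    # Stack the three quadratic terms into one term ||X_tilde - D_tilde S||_F^2.
    X_tilde = np.vstack([X, np.sqrt(alpha) * Q, np.sqrt(beta) * H])
    D_tilde = np.vstack([D, np.sqrt(alpha) * A, np.sqrt(beta) * W])
    K, n = D.shape[1], X.shape[1]
    S = np.zeros((K, n))   # smooth variable
    C = np.zeros((K, n))   # auxiliary sparse variable, constrained to equal S
    U = np.zeros((K, n))   # scaled dual variable (Lagrangian multiplier / rho)
    G = np.linalg.inv(2.0 * D_tilde.T @ D_tilde + rho * np.eye(K))  # cached for the quadratic update
    DtX = 2.0 * D_tilde.T @ X_tilde
    for _ in range(n_iter):
        S = G @ (DtX + rho * (C - U))                                    # closed-form quadratic update
        C = np.sign(S + U) * np.maximum(np.abs(S + U) - lam / rho, 0.0)  # soft-thresholding
        U = U + S - C                                                    # dual ascent on S = C
    return C
```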

3.2.2 BCD for learning bases

Without considering the sparseness regularization term in Equation (5), the constrained minimization problem of (4) with respect to a single column has a closed-form solution, which can be obtained by the BCD method. The objective function can be rewritten as follows:

(13)

We initialize D, A and W to be random matrices and normalize their columns, respectively. After that, we use the BCD method to update D, A and W.

Updating while fixing , , , and

(14)

The closed-form solution of single column of is

(15)

where , denotes the row vector of matrix .

Updating while fixing , , , and

(16)

The closed-form solution of single column of is

(17)

where .

Updating while fixing , , , and

(18)

The closed-form solution of single column of is

(19)

where .
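The per-column base updates of Eqs. (14)-(19) can be sketched as one blockwise coordinate descent sweep, shown here for the dictionary D; the updates for A and W (fitting Q and H) would follow the same pattern. The residual-based column formula and the projection onto the unit ℓ2 ball are assumptions consistent with the normalization step of the algorithm, not a verbatim reproduction of the paper's equations. In the full algorithm this sweep alternates with the ADMM sparse-coding step until the objective stops decreasing.

```python
import numpy as np

def bcd_update_dictionary(X, D, S):
    """One BCD sweep over the columns of D for the term ||X - DS||_F^2 (sketch)."""
    D = D.copy()
    for k in range(D.shape[1]):
        s_k = S[k, :]                                  # k-th row of the sparse code matrix
        if np.dot(s_k, s_k) < 1e-12:
            continue                                   # unused atom: leave it unchanged
        E_k = X - D @ S + np.outer(D[:, k], s_k)       # residual excluding atom k's contribution
        d_k = E_k @ s_k / np.dot(s_k, s_k)             # unconstrained least-squares column
        D[:, k] = d_k / max(np.linalg.norm(d_k), 1.0)  # project onto the unit l2 ball
    return D
```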

Figure 3: Convergence curve of LEDL Algorithm on four datasets.

3.2.3 Convergence Analysis

Assume that the value of the objective function after the t-th iteration is denoted as f^t. Since each subproblem is solved exactly by the ADMM and BCD methods, every iteration monotonically decreases the objective function, and the algorithm converges after about 100 iterations in practice. Considering that the objective function is obviously bounded below and satisfies Equation (20), it converges. Figure 3 shows the convergence curves of the proposed LEDL algorithm on four well-known datasets. The results demonstrate that our proposed LEDL algorithm has fast convergence and low complexity.

f^{t+1} \le f^{t}    (20)
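In practice, the property in Eq. (20) can be verified by recording the objective value (for instance with the ledl_objective sketch above) after every outer iteration; the helper below is an illustrative check rather than part of the original algorithm.

```python
import numpy as np

def monitor_convergence(objective_values, tol=1e-6):
    """Check the monotone decrease of Eq. (20): f^{t+1} <= f^t (up to a small tolerance)."""
    values = np.asarray(objective_values, dtype=float)
    drops = np.diff(values)                        # f^{t+1} - f^t for every iteration
    return bool(np.all(drops <= tol)), values[-1]  # (monotonically decreasing?, final value)
```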

3.2.4 Overall Algorithm

The overall updating procedure of the proposed LEDL algorithm is summarized in Algorithm 3. Here, one input specifies the maximum number of iterations, another is a square matrix with all elements equal to 1, and the element-wise (Hadamard) product is used in the updates. By iterating D, A, W, S, the auxiliary variable and the Lagrangian multiplier alternately, the sparse codes are obtained and the corresponding bases are learned.

Input: , , , , , , , ,
Output: , , ,

1:   , ,
2:   , ,
3:   , , ,
4:   
5:   while  do
6:       
7:       Update :
8:       
9:       
10:       Update :
11:       
12:       
13:       Update :
14:       
15:       Update , , :
16:       Compute
17:       for ;; do
18:           
19:           
20:           
21:       end for
22:       Update the objective function:
23:       
24:   end while
25:   return  , , ,
Algorithm 3 Label Embedded Dictionary Learning

In the testing stage, the coding is again based on the ℓ1-norm sparse constraint. Here, we exploit the learned dictionary D to fit the testing sample and obtain its sparse code. Then, we use the trained classifier W to predict the label of the testing sample from the classification scores, as sketched below.
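A minimal sketch of this testing stage is given below; it codes the test sample over the learned dictionary with the same ISTA-style ℓ1 solver used earlier and then applies the classifier. Taking the class with the largest score of W applied to the code is our reading of the (elided) prediction rule, so this block should be treated as an assumption.

```python
import numpy as np

def predict_label(D, W, y, lam=0.1, n_iter=500):
    """Code a test sample over the learned dictionary D, then classify it with W (sketch)."""
    s = np.zeros(D.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2 + 1e-12)
    for _ in range(n_iter):                           # ISTA for the l1-regularized coding step
        grad = 2.0 * D.T @ (D @ s - y)
        s = s - step * grad
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)
    scores = W @ s                                    # one classification score per class
    return int(np.argmax(scores))                     # index of the predicted class
```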

4 Experimental results

In this section, we utilize several datasets (Extended YaleB Georghiades et al. (2001), CMU PIE Sim et al. (2002), UC Merced Land Use Yang and Newsam (2010), AID Xia et al. (2017), Caltech101 Fei-Fei et al. (2007) and USPS Hull (1994)) to evaluate the performance of our algorithm and compare it with other state-of-the-art methods such as SRC Wright et al. (2009), LC-KSVD Jiang et al. (2013), CRC Zhang et al. (2011) and CSDL-SRC Liu et al. (2016). In the following subsections, we first give the experimental settings, then analyze the experiments on these six datasets, and finally list some discussions.

Datasets \ Methods    SRC    CRC    CSDL-SRC    LC-KSVD    LEDL
Extended YaleB
CMU PIE
UC-Merced
AID
Caltech101
USPS
Table 1: Classification rates (%) on different datasets

4.1 Experimental settings

For all the datasets, in order to eliminate randomness, we carry out every experiment 8 times and report the mean of the classification rates. We randomly select 5 samples per class for training in all the experiments. For the Extended YaleB dataset and the CMU PIE dataset, each image is cropped, pulled into a column vector, and normalized to form the raw normalized features. For the UC Merced Land Use dataset and the AID dataset, we use the ResNet model He et al. (2016) to extract the features; specifically, one layer is utilized to extract 2048-dimensional vectors for them. For the Caltech101 dataset, we use a ResNet layer together with spatial pyramid matching (SPM) with two layers (the second layer includes five parts: left upper, right upper, left lower, right lower and center) to extract 12288-dimensional vectors. Finally, each of the images in the USPS dataset is resized and vectorized.

For convenience, the dictionary size is fixed to twice the number of training samples. In addition, we set an initial value for one of the optimization parameters and decrease it in each iteration. Moreover, there are three other parameters (α, β and λ) that need to be adjusted to achieve the highest classification rates. The details are shown in the following subsections.
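The settings above can be mirrored in code roughly as follows; the 2x dictionary size and the random, column-normalized initialization of D, A and W come from the text, while the concrete shapes, the helper name and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_bases(X, n_classes, atoms_per_sample=2):
    """Initialize D, A and W with random unit-norm columns; dictionary size is twice the training set."""
    d, n = X.shape
    K = atoms_per_sample * n                        # dictionary size fixed to twice the number of samples

    def unit_columns(rows, cols):
        M = rng.standard_normal((rows, cols))
        return M / np.linalg.norm(M, axis=0, keepdims=True)

    D = unit_columns(d, K)                          # dictionary
    A = unit_columns(K, K)                          # linear transformation for the label-consistency term
    W = unit_columns(n_classes, K)                  # linear classifier
    return D, A, W
```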

4.2 Extended YaleB Dataset

The Extended YaleB dataset contains face images from 38 individuals, each having 64 frontal images under varying illumination conditions. Figure 4 shows some images of the dataset.

Figure 4: Examples of the Extended YaleB dataset
Figure 5: Confusion matrices on Extended YaleB dataset

In addition, we tune α, β and λ in this experiment. The experimental results are summarized in Table 1. We can see that our proposed LEDL algorithm achieves superior performance to the other classical classification methods. Compared with the ℓ0-norm sparsity constraint based dictionary learning algorithm LC-KSVD, our proposed ℓ1-norm sparsity constraint based LEDL algorithm clearly exceeds it. The reason for the large improvement over LC-KSVD is that the ℓ0-norm sparsity constraint leads to an NP-hard problem, which is not conducive to finding the optimal sparse solution for the dictionary. In order to further illustrate the performance of our method, we choose the samples of the first 20 classes as a sub-dataset and show the confusion matrices in Figure 5. As can be seen, our method achieves higher classification rates than LC-KSVD in all the chosen classes. Especially in class 1, class 2, class 3, class 10 and class 16, LEDL achieves a clear performance gain over LC-KSVD.

4.3 CMU PIE Dataset

The CMU PIE dataset consists of images of 68 individuals under 43 different illumination conditions. Each individual is captured under 13 different poses and with 4 different expressions. In Figure 6, we list several samples from this dataset.

Figure 6: Examples of the CMU PIE dataset

The comparison results are shown in Table 1. We can see that our proposed LEDL algorithm outperforms the other well-known methods. Notably, LEDL also exceeds LC-KSVD on this dataset. The optimal values of α, β and λ are chosen accordingly.

4.4 UC Merced Land Use Dataset

The UC Merced Land Use dataset is widely used for aerial image classification. It consists of 2100 land-use images from 21 classes in total. Some samples are shown in Figure 7.

Figure 7: Examples of the UC Merced dataset

In Table 1, we can see that our proposed LEDL algorithm performs only comparably to CRC but still outperforms the other methods. Compared with LC-KSVD, LEDL achieves higher accuracy. Here, α, β and λ are tuned to obtain the optimal result. The confusion matrices of the UC Merced Land Use dataset for all classes are shown in Figure 8. We can see that, in all classes except the tennis court class, LEDL almost always achieves better results than LC-KSVD. In several classes such as buildings, freeway, river, and sparse residential, our method achieves superior performance to LC-KSVD by a clear margin.

Figure 8: Confusion matrices on UCMerced dataset

4.5 AID Dataset

The AID dataset is a new large-scale aerial image dataset whose images are collected from Google Earth imagery. It contains 10,000 images from 30 aerial scene types. In Figure 9, we show several images of this dataset.

Figure 9: Examples of the AID dataset

Table 1 illustrates the effectiveness of LEDL for classifying these images. We adjust α, β and λ to achieve the highest accuracy among the five algorithms. Compared with LC-KSVD, LEDL again achieves an improvement.

4.6 Caltech101 Dataset

The Caltech101 dataset includes images from 101 object classes (plus a background class) in total, consisting of cars, faces, flowers and so on. Each category has about 40 to 800 images, and most categories have about 50 images. In Figure 10, we show several images of this dataset.

Figure 10: Examples of the Caltech101 dataset

As can be seen in Table 1, our proposed LEDL algorithm outperforms all the competing approaches with properly tuned α, β and λ, and achieves improvements over LC-KSVD and the other methods, respectively. Here, we also choose the first 20 classes to build the confusion matrices, which are shown in Figure 11.

Figure 11: Confusion matrices on Caltech101 dataset

4.7 USPS Dataset

The USPS dataset contains handwritten digit images from 0 to 9 which come from the U.S. Postal System. We list several samples from this dataset in Figure 12.

Figure 12: Examples of the USPS dataset

Table 1 shows the comparison results of the five algorithms. It is easy to see that our proposed LEDL algorithm outperforms the other well-known methods, and it achieves a clear improvement over the LC-KSVD method. The optimal values of α, β and λ are chosen accordingly.

4.8 Discussion

From the experimental results on six datasets, we can obtain the following conclusions.

(1) All the above experimental results illustrate that our proposed LEDL algorithm is an effective and general classifier which can achieve superior performance to state-of-the-art methods on various datasets, especially on the Extended YaleB, CMU PIE and USPS datasets.

(2) Our proposed LEDL method introduces the ℓ1-norm regularization term to replace the ℓ0-norm regularization of LC-KSVD. Compared with the LC-KSVD algorithm, the LEDL method is consistently better on all six datasets. Moreover, on the two face datasets and the USPS dataset, our method exceeds LC-KSVD by a considerable margin.

(3) Confusion matrices of LEDL and LC-KSVD on three datasets are shown in Figures 5, 8 and 11. They clearly illustrate the superiority of our method. Specifically, for the Extended YaleB dataset, our method achieves outstanding performance in five classes (class 1, class 2, class 3, class 10 and class 16). For the UC Merced dataset, LEDL almost always achieves better classification rates than LC-KSVD in all classes except the tennis court class. For the Caltech101 dataset, our proposed LEDL method performs much better than the LC-KSVD method in some classes such as beaver, binocular, brontosaurus, cannon and ceiling fan.

5 Conclusion

In this paper, we propose a Label Embedded Dictionary Learning (LEDL) algorithm. Specifically, we introduce the ℓ1-norm regularization term to replace the ℓ0-norm regularization term of LC-KSVD, which helps to avoid the NP-hard problem and find the optimal solution easily. Furthermore, we propose to adopt the ADMM algorithm to solve the ℓ1-norm optimization problem and the BCD algorithm to update the dictionary. Extensive experiments on six well-known benchmark datasets demonstrate the superiority of our proposed LEDL algorithm.

6 Acknowledgment

This research was funded by the National Natural Science Foundation of China (Grant No. 61402535, No. 61671480), the Natural Science Foundation for Youths of Shandong Province, China (Grant No. ZR2014FQ001), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2018MF017), the Qingdao Science and Technology Project (No. 17-1-1-8-jch), the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China) (Grant No. 16CX02060A, 17CX02027A), and the Innovation Project for Graduate Students of China University of Petroleum (East China) (No. YCX2018063).

References

  • Aharon et al. (2006) Aharon, M., Elad, M., Bruckstein, A., et al., 2006. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing 54 (11), 4311–4322.
  • Boyd et al. (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning 3 (1), 1–122.

  • Chan et al. (2015) Chan, T.-H., Jia, K., Gao, S., Lu, J., Zeng, Z., Ma, Y., 2015. Pcanet: A simple deep learning baseline for image classification? IEEE Transactions on image processing 24 (12), 5017–5032.

  • Chang et al. (2016) Chang, H., Yang, M., Yang, J., 2016. Learning a structure adaptive dictionary for sparse representation based classification. Neurocomputing 190 (19), 124–131.
  • Chang et al. (2000) Chang, S. G., Yu, B., Vetterli, M., 2000. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on image processing 9 (9), 1532–1546.
  • Fei-Fei et al. (2007) Fei-Fei, L., Fergus, R., Perona, P., 2007. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer vision and Image understanding 106 (1), 59–70.
  • Gao et al. (2018) Gao, D., Hu, Z., Ye, R., 2018. Self-dictionary regression for hyperspectral image super-resolution. Remote Sensing 10 (10), 1574–1596.

  • Georghiades et al. (2001) Georghiades, A. S., Belhumeur, P. N., Kriegman, D. J., 2001. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on pattern analysis and machine intelligence 23 (6), 643–660.

  • Hao et al. (2017) Hao, S., Wang, W., Yan, Y., Bruzzone, L., 2017. Class-wise dictionary learning for hyperspectral image classification. Neurocomputing 220 (12), 121–129.
  • He et al. (2016) He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE conference on. IEEE, pp. 770–778.

  • Hull (1994) Hull, J. J., 1994. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence 16 (5), 550–554.
  • Ji et al. (2014) Ji, R., Gao, Y., Hong, R., Liu, Q., Tao, D., Li, X., 2014. Spectral-spatial constraint hyperspectral image classification. IEEE Transactions on geoscience and remote sensing 52 (3), 1811–1824.
  • Jiang et al. (2013) Jiang, Z., Lin, Z., Davis, L. S., 2013. Label consistent k-svd: Learning a discriminative dictionary for recognition. IEEE Transactions on pattern analysis and machine intelligence 35 (11), 2651–2664.
  • Li et al. (2018) Li, H., He, X., Tao, D., Tang, Y., Wang, R., 2018. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognition 79, 130–146.
  • Li et al. (2012) Li, S., Fang, L., Yin, H., 2012. An efficient dictionary learning algorithm and its application to 3-d medical image denoising. IEEE Transactions on biomedical engineering 59 (2), 417–427.
  • Liu et al. (2017) Liu, B.-D., Gui, L., Wang, Y., Wang, Y.-X., Shen, B., Li, X., Wang, Y.-J., 2017. Class specific centralized dictionary learning for face recognition. Multimedia Tools and Applications 76 (3), 4159–4177.
  • Liu et al. (2016) Liu, B.-D., Shen, B., Gui, L., Wang, Y.-X., Li, X., Yan, F., Wang, Y.-J., 2016. Face recognition using class specific dictionary learning for sparse representation and collaborative representation. Neurocomputing 204 (5), 198–210.
  • Liu et al. (2014a) Liu, B.-D., Shen, B., Wang, Y.-X., 2014a. Class specific dictionary learning for face recognition. In: Security, Pattern Analysis, and Cybernetics (SPAC), 2014 IEEE International conference on. IEEE, pp. 229–234.
  • Liu et al. (2014b) Liu, B.-D., Wang, Y.-X., Shen, B., Zhang, Y.-J., Wang, Y.-J., 2014b. Blockwise coordinate descent schemes for sparse representation. In: Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International conference on. IEEE, pp. 5267–5271.
  • Mallat and Zhang (1993) Mallat, S. G., Zhang, Z., 1993. Matching pursuit with time-frequency dictionaries. IEEE Transactions on signal processing 41 (12), 3397–3415.
  • Nakazawa and Kulkarni (2018) Nakazawa, T., Kulkarni, D. V., 2018. Wafer map defect pattern classification and image retrieval using convolutional neural network. IEEE Transactions on semiconductor manufacturing 31 (2), 309–314.

  • Natarajan (1995) Natarajan, B. K., 1995. Sparse approximate solutions to linear systems. SIAM journal on computing 24 (2), 227–234.
  • Olshausen and Field (1996) Olshausen, B. A., Field, D. J., 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381 (6583), 607–609.
  • Olshausen and Field (1997) Olshausen, B. A., Field, D. J., 1997. Sparse coding with an overcomplete basis set: a strategy employed by v1? Vision Research 37 (23), 3311–3325.
  • Shi et al. (2018) Shi, H., Zhang, Y., Zhang, Z., Ma, N., Zhao, X., Gao, Y., Sun, J., 2018. Hypergraph-induced convolutional networks for visual classification. IEEE Transactions on neural networks and learning systems.
  • Sim et al. (2002) Sim, T., Baker, S., Bsat, M., 2002. The cmu pose, illumination, and expression (pie) database. In: Automatic Face and Gesture Recognition (FG), 2002 IEEE International conference on. IEEE, pp. 53–58.
  • Song et al. (2018) Song, Y., Liu, Y., Gao, Q., Gao, X., Nie, F., Cui, R., 2018. Euler label consistent k-svd for image classification and action recognition. Neurocomputing 310 (8), 277–286.
  • Tropp and Gilbert (2007) Tropp, J. A., Gilbert, A. C., 2007. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory 53 (12), 4655–4666.
  • Wang et al. (2012a) Wang, H., Yuan, C., Hu, W., Sun, C., 2012a. Supervised class-specific dictionary learning for sparse modeling in action recognition. Pattern Recognition 45 (11), 3902–3911.
  • Wang et al. (2018) Wang, N., Zhao, X., Jiang, Y., Gao, Y., BNRist, K., 2018. Iterative metric learning for imbalance data classification. In: International Joint Conference on Artificial Intelligence (IJCAI), 2018 Morgan Kaufmann conference on. Morgan Kaufmann, pp. 2805–2811.

  • Wang et al. (2012b) Wang, S., Zhang, L., Liang, Y., Pan, Q., 2012b. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE conference on. IEEE, pp. 2216–2223.
  • Wright et al. (2009) Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., Ma, Y., 2009. Robust face recognition via sparse representation. IEEE Transactions on pattern analysis and machine intelligence 31 (2), 210–227.
  • Xia et al. (2017) Xia, G.-S., Hu, J., Hu, F., Shi, B., Bai, X., Zhong, Y., Zhang, L., Lu, X., 2017. Aid: A benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on geoscience and remote sensing 55 (7), 3965–3981.

  • Xu et al. (2019) Xu, J., An, W., Zhang, L., Zhang, D., 2019. Sparse, collaborative, or nonnegative representation: Which helps pattern classification? Pattern Recognition 88, 679–688.
  • Yang et al. (2010) Yang, J., Wright, J., Huang, T. S., Ma, Y., 2010. Image super-resolution via sparse representation. IEEE Transactions on image processing 19 (11), 2861–2873.
  • Yang et al. (2009) Yang, J., Yu, K., Gong, Y., Huang, T., 2009. Linear spatial pyramid matching using sparse coding for image classification. In: Computer Vision and Pattern Recognition (CVPR), 2009 IEEE conference on. IEEE, pp. 1794–1801.
  • Yang et al. (2017) Yang, M., Chang, H., Luo, W., 2017. Discriminative analysis-synthesis dictionary learning for image classification. Neurocomputing 219 (5), 404–411.
  • Yang et al. (2014) Yang, M., Zhang, L., Feng, X., Zhang, D., 2014. Sparse representation based fisher discrimination dictionary learning for image classification. International Journal of Computer Vision 109 (3), 209–232.
  • Yang and Newsam (2010) Yang, Y., Newsam, S., 2010. Bag-of-visual-words and spatial extensions for land-use classification. In: Advances in Geographic Information Systems (GIS), 2010 ACM International conference on. ACM, pp. 270–279.
  • Yu et al. (2012a) Yu, J., Feng, L., Seah, H. S., Li, C., Lin, Z., 2012a. Image classification by multimodal subspace learning. Pattern Recognition Letters 33 (9), 1196–1204.
  • Yu et al. (2014) Yu, J., Rui, Y., Tang, Y. Y., Tao, D., 2014. High-order distance-based multiview stochastic learning in image classification. IEEE Transactions on cybernetics 44 (12), 2431–2442.
  • Yu et al. (2013) Yu, J., Tao, D., Rui, Y., Cheng, J., 2013. Pairwise constraints based multiview features fusion for scene classification. Pattern Recognition 46 (2), 483–496.
  • Yu et al. (2012b) Yu, J., Tao, D., Wang, M., 2012b. Adaptive hypergraph learning and its application in image classification. IEEE Transactions on image processing 21 (7), 3262–3272.
  • Yuan et al. (2016) Yuan, L., Liu, W., Li, Y., 2016. Non-negative dictionary based sparse representation classification for ear recognition with occlusion. Neurocomputing 171 (1), 540–550.
  • Zhang et al. (2011) Zhang, L., Yang, M., Feng, X., 2011. Sparse representation or collaborative representation: Which helps face recognition? In: Computer Vision (ICCV), 2011 IEEE International conference on. IEEE, pp. 471–478.
  • Zhang and Li (2010) Zhang, Q., Li, B., 2010. Discriminative k-svd for dictionary learning in face recognition. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE conference on. IEEE, pp. 2691–2698.