1 Introduction
Subspace learning is a useful technique in computer vision, pattern recognition and machine learning, particularly for dimensionality reduction, feature selection, feature extraction and face recognition tasks. Subspace learning aims to learn a specific subspace of the original sample space that has some particular desired properties. This topic has been studied for decades and many impressive algorithms have been proposed. Representative subspace learning algorithms include Principal Component Analysis (PCA) [1, 2, 3], Linear Discriminant Analysis (LDA) [4], Nonnegative Matrix Factorization (NMF) [5, 6], Independent Component Analysis (ICA) [7], Locality Preserving Projections (LPP) [11] and so on. In face recognition, subspace learning is also known as appearance-based face recognition; for example, PCA is known as Eigenfaces, LDA is known as Fisherfaces and LPP is known as Laplacianfaces.
Some recent studies show that high-dimensional samples may reside on low-dimensional manifolds [8, 9] and that such manifold structures are essential for data clustering and classification [10, 11]. The manifold-based subspace learning algorithms arguably start from Locality Preserving Projections (LPP). LPP constructs an adjacency matrix to weight the distance between each pair of sample points and learns a projection that preserves the local manifold structure of the data. The weight between two nearby points is much greater than that between two distant points, so if two points are close in the original space, they will be close in the learned subspace as well. However, the conventional LPP only takes the manifold information into consideration, and many researchers have made efforts to improve it from different perspectives. Discriminant Locality Preserving Projections (DLPP) [12, 13, 14] is deemed one of the most successful extensions of LPP. It improves the discriminating power of LPP by simultaneously maximizing the distance between every two nearby classes and minimizing the original LPP objective. Orthogonal Laplacianfaces (OLPP) [15] imposes an orthogonality constraint on LPP to ensure that the learned projections are mutually orthogonal. Parametric Regularized Locality Preserving Projections (PRLPP) [16] regulates the LPP space in a parametric manner and extracts useful discriminant information from the whole feature space rather than a reduced PCA subspace; this parametric regularization can also be applied to other LPP-based methods, yielding Parametric Regularized DLPP (PRDLPP) and Parametric Regularized Orthogonal LPP (PROLPP). Inspired by the idea of LPP, Qiao et al. [17] proposed a novel projection named Sparsity Preserving Projections (SPP), which preserves the sparsity of the original sample data, and applied it to face recognition.
Our work is mainly based on DLPP. In this paper, we intend to further improve the discriminating power of DLPP from two different aspects. Similar to LPP, DLPP constructs a Laplacian matrix of classes and improves the discriminating power of LPP by maximizing the objective induced by this matrix. Since the distance between two nearby classes has a greater weight, maximizing the Laplacian matrix of classes is actually equivalent to maximizing the distances between nearby classes. Clearly, this strategy cannot guarantee globally optimal class scattering, since distant classes may be projected closer to each other in such a DLPP space than before. In order to obtain globally optimal class scattering, we use the between-class scatter matrix to replace the Laplacian matrix of classes, which is the denominator of the DLPP objective. Moreover, inspired by the idea of collaborative representation [18, 19], an $\ell_2$-norm constraint is imposed on the projections, since we believe that not all dimensions of the samples are equally important and that collaboration exists among the dimensions. For example, if we consider face images as the samples in face recognition, each dimension of a sample corresponds to a specific pixel of the face image. Clearly, the pixels in the face area play a more important role than the pixels in the background area, and collaboration naturally exists between adjacent pixels.
We name the proposed improved DLPP algorithm Collaborative Discriminant Locality Preserving Projections (CDLPP) and apply it to face recognition. Three popular face databases, namely ORL, AR and LFWA, are chosen to validate the effectiveness of the proposed algorithm. Extensive experimental results demonstrate that CDLPP remarkably improves the discriminating power of DLPP and outperforms the state-of-the-art subspace learning algorithms with a distinct advantage. Moreover, we also compare CDLPP with four of the most popular recent face recognition approaches, namely Linear Regression Classification (LRC) [20], Sparse Representation Classification (SRC) [21], Collaborative Representation Classification (CRC) [18] and Relaxed Collaborative Representation Classification (RCR) [19] (these four algorithms are not subspace learning algorithms). Even so, CDLPP still outperforms them in all experiments; for instance, CDLPP improves the recognition accuracy of RCR from 75% to 81% on the LFWA database, a very recent and challenging face verification and recognition database. There are three main contributions of our work:
1. The between-class scatter matrix is used to replace the original denominator of DLPP to guarantee the global optimum of class scattering.

2. Based on the fact that collaboration exists among the dimensions of samples, we improve the quality of the projections by imposing a collaboration constraint. To the best of our knowledge, our work is the first to introduce the collaboration of dimensions to subspace learning. Moreover, this idea is generalizable to other subspace learning algorithms.

3. A prominent improvement in the recognition accuracy of DLPP is obtained by our approach. For example, the gains of CDLPP over DLPP are 12% and 23% on subset 1 and subset 2 of the LFWA database, respectively.
The rest of the paper is organized as follows: we introduce related works in Section 2; Section 3 describes the proposed algorithm; experiments are presented in Section 4; the conclusion is finally summarized in Section 5.
2 Related Works
2.1 Discriminant Locality Preserving Projections
Discriminant Locality Preserving Projections (DLPP) [12] is one of the most influential LPP algorithms. It improves the discriminating power of LPP by simultaneously minimizing the original Laplacian objective of LPP and maximizing the Laplacian objective of classes. Let the $m \times n$-dimensional matrix $X = [x_1, x_2, \dots, x_n]$ be the samples and the vector $l = [l_1, l_2, \dots, l_n]$, $l_i \in \{1, \dots, c\}$, be the class labels, where $c$ is the number of classes. The matrix $X_i$ denotes the samples belonging to class $i$. The $m \times c$-dimensional matrix $U = [u_1, u_2, \dots, u_c]$ denotes the mean matrix, where $u_i$ is the mean of the samples belonging to class $i$. The $1 \times c$-dimensional row vector $f = w^T U$ represents the projected mean matrix, where the $m$-dimensional column vector $w$ is a learned projection. Similarly, the projected sample matrix is denoted as the $1 \times n$-dimensional row vector $y = w^T X$. DLPP aims to find a set of projections that maps the $m$-dimensional original sample space into a $d$-dimensional subspace which can preserve the local geometric structures and scatter the classes simultaneously. The $m \times d$-dimensional matrix $P$ denotes the projection matrix, where $P = [w_1, w_2, \dots, w_d]$. The original objective of Discriminant Locality Preserving Projections (DLPP) is as follows:

$$\min_{w} \ \frac{\sum_{i,j} (y_i - y_j)^2 \, W_{ij}}{\sum_{i,j} (f_i - f_j)^2 \, B_{ij}} \qquad (1)$$
where $W_{ij}$ and $B_{ij}$ denote the weights of the distance between two homogeneous (same-class) points and of the distance between two mean points, respectively. They are the entries of the respective adjacency weight matrices $W$ and $B$. These weights are all determined by the distance between two points (either cosine distance or Euclidean distance) in the original space. It is not difficult to formulate the numerator and denominator of the objective function in Equation 1 in terms of Laplacian matrices:
$$\min_{w} \ \frac{w^T X L X^T w}{w^T U H U^T w} \qquad (2)$$
where the matrix $L = D - W$ (with the diagonal matrix $D_{ii} = \sum_j W_{ij}$) is exactly the Laplacian matrix of LPP and the matrix $H = E - B$ (with $E_{ii} = \sum_j B_{ij}$) is the Laplacian matrix of classes.
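To make this construction concrete, the following sketch builds the two adjacency matrices and the numerator and denominator matrices of Equation 2 in Python with NumPy. It is a minimal illustration under our own naming conventions (e.g. `heat_kernel_weights`, `dlpp_matrices`); the heat-kernel weighting and the same-class masking follow the descriptions in this section, not the authors' original implementation.

```python
import numpy as np

def heat_kernel_weights(Z, t=1.0):
    """Adjacency weights A_ij = exp(-||z_i - z_j||^2 / t) between the columns of Z."""
    sq = np.sum(Z ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (Z.T @ Z)   # pairwise squared distances
    return np.exp(-np.maximum(d2, 0.0) / t)

def laplacian(A):
    """Graph Laplacian L = D - A with D_ii = sum_j A_ij."""
    return np.diag(A.sum(axis=1)) - A

def dlpp_matrices(X, labels, t=1.0):
    """Return the numerator matrix X L X^T and denominator matrix U H U^T of Eq. (2).

    X      : m x n matrix whose columns are samples
    labels : length-n array of class labels
    """
    classes = np.unique(labels)
    U = np.stack([X[:, labels == c].mean(axis=1) for c in classes], axis=1)   # class means
    W = heat_kernel_weights(X, t) * (labels[:, None] == labels[None, :])      # homogeneous pairs only
    B = heat_kernel_weights(U, t)                                             # weights between class means
    return X @ laplacian(W) @ X.T, U @ laplacian(B) @ U.T
```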
2.2 Collaborative Representation Classification
In the recent decade, sparse representation has been very popular and extensive works have emphasized the importance of sparsity for classification [2, 3, 21, 22, 23]. However, some researchers argue that collaboration actually plays a more important role in classification than sparsity, since the samples of different classes share similarities and some samples from class $j$ may be very helpful in representing a testing sample with label $i$ [18, 19]. In order to collaboratively represent a query sample $y$ using the training samples $X$ with a low computational burden, the $\ell_2$-norm is used to replace the $\ell_1$-norm in the objective function of sparse representation. The objective function of the collaborative representation model is therefore denoted as follows:
$$\hat{\alpha} = \arg\min_{\alpha} \left\{ \|y - X\alpha\|_2^2 + \lambda \|\alpha\|_2^2 \right\} \qquad (3)$$
where $\lambda$ is the regularization parameter. The role of the regularization term is twofold: first, it makes the least squares solution stable; second, it introduces a certain amount of sparsity to the solution $\hat{\alpha}$, yet this sparsity is much weaker than that induced by the $\ell_1$-norm. The solution of this model is
$$\hat{\alpha} = (X^T X + \lambda I)^{-1} X^T y \qquad (4)$$
After obtaining the solution, we can apply it to classify $y$ in the same way as sparse representation.
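As a minimal sketch of this scheme, the snippet below computes the closed-form solution of Equation 4 and classifies by class-wise reconstruction residual. The residual is normalized by the coding energy $\|\hat{\alpha}_c\|_2$, following the CRC paper [18]; the function name `crc_classify` and the default $\lambda$ are our own choices.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Collaborative representation classification (Eqs. 3-4).

    X      : m x n training matrix (columns are samples)
    labels : length-n array of class labels
    y      : m-dimensional query sample
    lam    : regularization parameter lambda
    """
    n = X.shape[1]
    # Closed-form ridge solution of Eq. (4): alpha = (X^T X + lam I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_score = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # class-wise reconstruction residual, normalized by the coding energy [18]
        score = np.linalg.norm(y - X[:, mask] @ alpha[mask]) / np.linalg.norm(alpha[mask])
        if score < best_score:
            best, best_score = c, score
    return best
```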
3 Collaborative Discriminant Locality Preserving Projections
In this algorithm, we improve the discriminating power of DLPP from two aspects. The first is to use the between-class scatter matrix to replace the Laplacian matrix of classes, which is the denominator of the objective function of DLPP. The second is to impose a dimension collaboration constraint on the model.
The core of LPP algorithms is the construction of the affinity matrix, and the core of the affinity matrix construction is the weighting scheme. Several weighting schemes are available for weighting the distance between two samples; the most commonly used ones include dot-product weighting and heat-kernel weighting [6]. These weighting schemes are all nonlinear, and the assigned weight drops sharply as the distance increases. Thus, closer points receive greater weights, and this strategy means that only the distances between close points can effectively affect the subspace learning. Consequently, if we maximize the denominator of DLPP, which involves the Laplacian matrix of classes, only the closer classes can be scattered, and the distant classes may be projected much closer. From Fisher discriminant analysis, we know that maximizing the between-class scatter matrix achieves globally optimal class scatter [4]. In order to intuitively show the class scattering abilities of the between-class scatter matrix and the Laplacian matrix of classes, we conduct an experiment by maximizing each of them on the Yale database [24]. Figure 1 shows the results, where each point is a class center. The experimental results demonstrate that using the between-class scatter matrix obtains a better performance. Consequently, the discriminating power of DLPP can be further improved by using the between-class scatter matrix instead of the Laplacian matrix of classes, and the new objective function of DLPP is denoted as follows:

$$\min_{w} \ \frac{w^T X L X^T w}{w^T S_b w} \qquad (5)$$

where

$$S_b = \sum_{i=1}^{c} (u_i - \bar{u})(u_i - \bar{u})^T = U C U^T$$

and

$$C = I - \frac{1}{c} e e^T .$$

Here $\bar{u}$ is the mean of the whole samples, $C$ is the $c \times c$-dimensional centering matrix ($e$ denotes the all-ones column vector), and the other notations in this section have already been defined in Section 2.
We also test the effectiveness of this modification on all the face databases in the experiment section. In order to distinguish it from the conventional DLPP, we name this modified DLPP Class Scattering Locality Preserving Projections (CSLPP). As with DLPP, the model of CSLPP can be solved by eigenvalue decomposition: the best CSLPP projection $w$ is the eigenvector corresponding to the minimum nonzero eigenvalue of the generalized eigenvalue problem $X L X^T w = \lambda S_b w$.
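A minimal sketch of this eigen-solution in Python with SciPy is given below, with $X L X^T$ built as in the sketch of Section 2.1. The small ridge added to $S_b$ (to keep it positive definite for the generalized solver) and the function names are our own, not part of the original method.

```python
import numpy as np
from scipy.linalg import eigh

def between_class_scatter(U):
    """S_b = U C U^T with centering matrix C = I - (1/c) e e^T (columns of U are class means)."""
    c = U.shape[1]
    C = np.eye(c) - np.ones((c, c)) / c
    return U @ C @ U.T

def cslpp(XLXt, Sb, d, tol=1e-10):
    """d CSLPP projections: eigenvectors of X L X^T w = lambda S_b w
    associated with the smallest nonzero eigenvalues (Eq. 5)."""
    ridge = Sb + tol * np.trace(Sb) * np.eye(Sb.shape[0])  # keep S_b positive definite
    vals, vecs = eigh(XLXt, ridge)                         # ascending generalized eigenvalues
    nonzero = vals > tol                                   # discard (near-)zero eigenvalues
    return vecs[:, nonzero][:, :d]
```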
As we know, the low-dimensional representation of a sample is obtained by projecting the sample onto the learned projection as $y = w^T x$. From the perspective of numerical computation, $y$ is the sum of the products of the projection and the original sample, $y = \sum_i w_i x_i$. So each element $w_i$ of the projection corresponds to a dimension $x_i$ of the sample. A larger value of a specific element of the projection, no matter whether it is positive or negative, has more impact on $y$; in other words, the dimension of the sample corresponding to an element of $w$ with a larger magnitude is valued more. Clearly, the roles of the dimensions of a sample are not equally important. For example, if the sample is a face image, so that each pixel of the image corresponds to a dimension of the sample, the pixels in the face area should play a more important role than the pixels in the background area. Consequently, a good projection should satisfy the following conditions: the elements of $w$ corresponding to the more important dimensions of the sample should have larger magnitudes, while the values of the elements corresponding to the less important dimensions should tend to zero. Moreover, in subspace learning the dimensionality of the samples often greatly exceeds the number of samples, so a good projection should tend to be sparse. Another fact is that collaboration exists among the dimensions in subspace learning; this is easily verified in the case of face recognition, where, for example, adjacent pixels always collaboratively represent a specific component of the face. An $\ell_2$-norm collaboration constraint is imposed on the projection to address the above issues, since it emphasizes the importance of the collaboration of dimensions in subspace learning and is also a relaxed sparsity constraint. Accordingly, Equation 5 can be further modified as follows:
$$\min_{w} \ \frac{w^T X L X^T w}{w^T S_b w} + \eta \|w\|_2^2 \qquad (6)$$

where $\eta$ controls the amount of additional collaboration required. We name this new Discriminant Locality Preserving Projections algorithm Collaborative Discriminant Locality Preserving Projections (CDLPP). Its objective function can be written in a purely matrix format as follows (this formulation can also be transformed into a collaborative-representation-style formulation):

$$J(w) = \frac{w^T X L X^T w}{w^T S_b w} + \eta\, w^T w \qquad (7)$$
Then the derivative of $J(w)$ is calculated and set equal to zero to obtain the minimum of $J(w)$:

$$\frac{\partial J}{\partial w} = \frac{2\, X L X^T w \,(w^T S_b w) - 2\, (w^T X L X^T w)\, S_b w}{(w^T S_b w)^2} + 2\eta w = 0 \qquad (8)$$
Since the items $w^T S_b w$ and $\frac{w^T X L X^T w}{w^T S_b w}$ can be treated as two unknown scalars, denoted $\theta$ and $\lambda$ respectively, Equation 8 can be formulated as follows:

$$\left( X L X^T + \eta\,\theta\, I \right) w = \lambda\, S_b\, w \qquad (9)$$

where the matrix $I$ is an $m \times m$ identity matrix and $\theta$ is a scalar. Like CSLPP, this problem is a generalized eigenvalue problem, and the best CDLPP projection is the eigenvector corresponding to the minimum nonzero eigenvalue of Equation 9. We can take the first $d$ CDLPP projections for face recognition.
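The following sketch solves Equation 9 in the same way as the CSLPP solver above, folding the product $\eta\theta$ into a single regularization parameter, which is the usual practical treatment of such ridge-regularized trace-ratio problems; this reading, together with the function name `cdlpp`, is our own rather than the authors' stated implementation.

```python
import numpy as np
from scipy.linalg import eigh

def cdlpp(XLXt, Sb, d, eta=1.0, tol=1e-10):
    """d CDLPP projections from Eq. (9): (X L X^T + eta*theta*I) w = lambda S_b w,
    with the scalar product eta*theta folded into the single knob `eta`."""
    m = XLXt.shape[0]
    A = XLXt + eta * np.eye(m)                      # collaboration (l2) regularized numerator
    ridge = Sb + tol * np.trace(Sb) * np.eye(m)     # keep S_b positive definite
    vals, vecs = eigh(A, ridge)                     # ascending generalized eigenvalues
    nonzero = vals > tol
    return vecs[:, nonzero][:, :d]                  # smallest nonzero eigenvalues first
```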
4 Experiments
4.1 Face Databases
Three popular face databases, namely the AR [25], ORL [26] and LFWA [27] databases, are used to evaluate the recognition performance of the proposed methods.
The ORL database contains 400 images of 40 subjects [26]. Each subject has ten images acquired at different times. In this database, the subjects' facial expressions and facial details vary, and the images are taken with a tolerance for some tilting and rotation. The size of the face images in the ORL database is 32×32 pixels.
The AR database consists of more than 4,000 color images of 126 subjects [25]. The database deviates from ideal conditions by incorporating various facial expressions, luminance alterations and occlusion modes. Following [20], a subset containing 1,680 images of 120 subjects is constructed in our experiments. The size of the face images in the AR database is 50×40 pixels.
The LFWA database is an automatically aligned version [27] of the LFW (Labeled Faces in the Wild) database, a very recent database aimed at studying the problem of unconstrained face recognition. It is considered one of the most challenging databases, since it contains 13,233 images with great variations in lighting, pose, age and even image quality. We cropped these images to 120×120 pixels around their centers and resized them to 64×64 pixels. On this database, we follow the experimental configuration of [19], which uses the Local Binary Pattern (LBP) descriptor [28] as the baseline image representation.
4.2 Compared Algorithms
Nine state-of-the-art face recognition algorithms are compared with Class Scattering Locality Preserving Projections (CSLPP) and Collaborative Discriminant Locality Preserving Projections (CDLPP). Among them, Principal Component Analysis (PCA) [1], Linear Discriminant Analysis (LDA) [4], Locality Preserving Projections (LPP) [11], Discriminant Locality Preserving Projections (DLPP) [12] and Neighborhood Preserving Embedding (NPE) [29] are subspace learning algorithms. Relaxed Collaborative Representation (RCR) [19], Linear Regression Classification (LRC) [20], Sparse Representation Classification (SRC) [21] and Collaborative Representation Classification (CRC) [18] are not subspace learning algorithms, but they are four popular recent face recognition algorithms. The experimental results of CRC and SRC are taken directly from those reported in [19], while the results of LRC and RCR are obtained by running the codes ourselves. The codes of LDA, PCA, LPP and NPE are downloaded from Prof. Deng Cai's web page [30]. The code of RCR is provided by Dr. Meng Yang and the code of LRC is provided by Mr. Peng Ma.
Table 1: Recognition rates ± standard deviations on the AR database.

Methods    | Leave-one-out  | 7-fold         | 3-fold         | 2-fold
PCA [1]    | 96.96% ± 2.71% | 93.69% ± 7.29% | 89.17% ± 5.20% | 66.73% ± 0.08%
LDA [4]    | 96.31% ± 4.60% | 96.65% ± 3.38% | 93.04% ± 3.03% | 58.57% ± 1.52%
NPE [29]   | 93.75% ± 6.83% | 92.62% ± 5.15% | 90.83% ± 5.46% | 61.61% ± 0.25%
LPP [11]   | 93.93% ± 6.57% | 92.56% ± 4.84% | 91.25% ± 4.58% | 61.19% ± 0.34%
DLPP [12]  | 95.06% ± 5.46% | 94.29% ± 3.81% | 92.92% ± 4.17% | 65.95% ± 2.69%
CSLPP      | 97.44% ± 3.00% | 97.02% ± 2.61% | 94.93% ± 3.09% | 63.45% ± 3.37%
CDLPP      | 99.70% ± 0.53% | 99.52% ± 0.85% | 99.31% ± 1.20% | 69.23% ± 3.11%
RCR [19]   | 99.40% ± 0.69% | 99.11% ± 1.03% | 98.40% ± 1.39% | 76.96% ± 1.43%
LRC [20]   | 99.58% ± 0.63% | 99.40% ± 0.63% | 98.47% ± 1.77% | 68.75% ± 0.43%
Table 2: Recognition rates ± standard deviations on the ORL database.

Methods    | Leave-one-out  | 5-fold         | 3-fold         | 2-fold
PCA [1]    | 94.25% ± 3.13% | 91.25% ± 3.19% | 89.72% ± 3.19% | 85.25% ± 0.35%
LDA [4]    | 96.75% ± 3.34% | 96.25% ± 1.98% | 95.83% ± 3.00% | 93.00% ± 0.71%
NPE [29]   | 97.50% ± 3.12% | 94.50% ± 1.90% | 92.22% ± 1.73% | 90.00% ± 4.54%
LPP [11]   | 98.00% ± 2.58% | 96.75% ± 1.43% | 94.72% ± 3.37% | 90.75% ± 3.89%
DLPP [12]  | 98.25% ± 2.06% | 97.25% ± 2.05% | 97.22% ± 2.10% | 93.75% ± 3.18%
CSLPP      | 99.00% ± 1.75% | 98.00% ± 1.43% | 97.50% ± 1.44% | 94.50% ± 1.41%
CDLPP      | 99.00% ± 1.75% | 99.00% ± 1.63% | 98.06% ± 2.10% | 95.75% ± 1.77%
RCR [19]   | 98.25% ± 2.06% | 97.50% ± 1.53% | 95.83% ± 1.67% | 93.75% ± 3.89%
LRC [20]   | 97.50% ± 2.75% | 97.75% ± 2.05% | 96.11% ± 2.92% | 88.75% ± 3.18%
Table 3: Top recognition rates (retained dimensions) on the LFWA database. The results of SRC and CRC are taken from [19].

Methods    | subset 1      | subset 2
PCA [1]    | 35.34% (529)  | 34.25% (1261)
LDA [4]    | 58.90% (141)  | 67.17% (125)
NPE [29]   | 61.37% (145)  | 65.83% (191)
LPP [11]   | 58.90% (145)  | 65.12% (169)
DLPP [12]  | 55.34% (289)  | 58.84% (127)
CSLPP      | 63.56% (145)  | 71.44% (127)
CDLPP      | 67.12% (145)  | 81.41% (127)
RCR [19]   | 65.75%        | 75.17%
LRC [20]   | 48.49%        | 51.76%
SRC [21]   | 53.00%        | 72.20%
CRC [18]   | 54.50%        | 73.00%
4.3 Face Recognition
Following the conventional subspace learning based face recognition framework, the Nearest Neighbour (NN) classifier is used for classification and the distance metric is the Euclidean distance. With regard to the choice of weighting scheme for the LPP algorithms, we follow the experimental configuration of LPP [11] and apply dot-product weighting to construct the Laplacian matrices of the LPP algorithms. We use cross validation to evaluate the different algorithms on both the AR and ORL databases, since the number of samples per subject is the same on these two databases. The $k$-fold cross validation is defined as follows: the dataset is evenly divided into $k$ parts; $k-1$ parts are used for training and the remainder is used for testing. With regard to the LFWA database, we cannot directly use cross validation, since its subjects have different numbers of samples. Therefore, we follow the experimental protocol of [19] and divide the LFWA database into two subsets. The first subset (147 subjects, 1,100 samples) is constructed from the subjects whose sample numbers range from 5 to 10, and the second subset (127 subjects, 2,891 samples) is constructed from the subjects with at least 11 samples. In the experiments, the first five samples of each subject in the first subset are used for training and the rest for testing; similarly, the first ten samples of each subject in the second subset are used for training and the remainder for testing.
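As an illustration of this protocol, the sketch below implements the 1-NN classification in the learned subspace and the per-subject $k$-fold split. It assumes the samples are ordered subject by subject with a fixed number of samples per subject; all names (`nn_accuracy`, `kfold_indices`) are our own, not from the original experiments.

```python
import numpy as np

def nn_accuracy(train_X, train_y, test_X, test_y, P):
    """1-NN accuracy in the learned subspace (P holds the d projections as columns)."""
    Ztr, Zte = P.T @ train_X, P.T @ test_X            # project both sets
    d2 = (np.sum(Zte ** 2, axis=0)[:, None]
          + np.sum(Ztr ** 2, axis=0)[None, :]
          - 2.0 * (Zte.T @ Ztr))                      # squared Euclidean distances
    pred = train_y[np.argmin(d2, axis=1)]
    return float(np.mean(pred == test_y))

def kfold_indices(n_subjects, n_per_subject, k):
    """Split each subject's samples into k equal folds, as in the k-fold protocol above."""
    per_fold = n_per_subject // k
    folds = [[] for _ in range(k)]
    for s in range(n_subjects):
        base = s * n_per_subject
        for f in range(k):
            folds[f].extend(range(base + f * per_fold, base + (f + 1) * per_fold))
    return [np.array(f) for f in folds]
```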
As observed from Tables 1, 2 and 3, CSLPP outperforms all the compared subspace learning algorithms and CDLPP outperforms all the compared face recognition approaches on all databases. For example, CDLPP obtains absolute improvements of around 2% and 6% over the second best face recognition approach on the ORL database and the second subset of the LFWA database, respectively. The face recognition experiments also show that CDLPP delivers a prominent improvement over DLPP: more specifically, the gains of CDLPP over DLPP are around 2% and around 5% on the ORL and AR databases, respectively. On the more challenging LFWA database, CDLPP performs even better, with gains over DLPP of 12% and 23% on the first and second subsets, respectively. Moreover, CSLPP also outperforms DLPP in all experiments, which verifies that the between-class scatter matrix has a better class scattering ability than the Laplacian matrix of classes.
Recently, linear regression based face recognition approaches have become very popular and are generally considered more advanced than the conventional nearest-neighbour-classifier-based subspace learning approach. However, another interesting observation from our experimental results is that the proposed subspace learning algorithm, CDLPP, consistently outperforms LRC, CRC, SRC and RCR, four recent representative linear regression based face recognition algorithms. For instance, CDLPP obtains recognition accuracies 6%, 8%, 9% and 30% higher than RCR, CRC, SRC and LRC, respectively, on the second subset of the LFWA database. We believe this phenomenon demonstrates that conventional subspace learning algorithms still have the potential to outperform other categories of face recognition algorithms, such as the linear regression based approaches.
In order to show the sparseness of the bases of CDLPP and the collaboration between the elements of a CDLPP basis, several experiments are conducted on the ORL database. We plot the absolute values of the first bases of CDLPP, CSLPP and DLPP, $|w_1|$, from top to bottom in Figure 3. Clearly, among the three bases, the basis of CDLPP is the sparsest, which verifies that the imposed $\ell_2$-norm constraint acts as a relaxed sparsity constraint. Moreover, we also visualize the first five bases of CDLPP, CSLPP and DLPP from top to bottom. The brighter parts of the visualized bases are the elements with greater magnitudes. Compared with CSLPP and DLPP, such brighter elements of CDLPP always group together to represent a facial component; for example, the brightest part in the first visualized basis of CDLPP is clearly the hair region. This phenomenon verifies the fact that collaboration exists among the dimensions.
4.4 Dimensionality Reduction
In this section, some experiments are conducted to evaluate the dimensionality reduction abilities of the different subspace learning algorithms. According to the experimental results in Figure 4, CDLPP consistently outperforms all the subspace learning algorithms across all dimensions with a distinct advantage on all databases, and the proposed CSLPP achieves the second best recognition accuracy among the subspace learning algorithms. Moreover, CDLPP is a more robust subspace learning algorithm: its recognition accuracy barely drops as the dimension increases after reaching its peak. This is a very desirable property, since it facilitates the determination of the optimal number of retained bases.
4.5 Training Efficiency
We examine the training costs of CSLPP and CDLPP, and compare them with LDA, PCA, NPE, LPP and DLPP. The experimental hardware configuration is a 2.2 GHz CPU with 2 GB RAM. Table 4 shows the CPU time spent on the training phase by these linear methods using MATLAB. In this experiment, we select five samples of each subject for training. According to the results in Table 4, CSLPP has a training time similar to that of LPP, and the training time of CDLPP is about twice that of LPP. Moreover, the training times of all methods on the ORL database are almost identical; this is because the ORL database is so small that program loading accounts for a large proportion of the time.
Table 4: Training efficiency (seconds).

Methods    | AR      | ORL    | LFWA set 1 | LFWA set 2
PCA [1]    | 2.7768  | 0.1560 | 5.3664     | 21.7621
LDA [4]    | 1.8252  | 0.1404 | 3.8064     | 15.3349
NPE [29]   | 3.7752  | 0.2964 | 8.6425     | 35.3498
LPP [11]   | 5.5692  | 0.3432 | 10.0777    | 41.3403
DLPP [12]  | 8.5489  | 0.4212 | 14.6017    | 63.1336
CSLPP      | 5.5068  | 0.2340 | 10.7641    | 45.9267
CDLPP      | 10.1869 | 0.3588 | 19.6717    | 92.2746
4.6 Parameter Selection of CDLPP
$\eta$ is an important parameter that controls the amount of additional collaboration. Figure 5 depicts the effect of $\eta$ on the recognition performance of CDLPP; the curves plot the relationship between the recognition rate and $\eta$ on the ORL, AR and LFWA databases. From Figure 5, we observe that the recognition accuracies slowly increase as more collaboration is added in the beginning. However, after reaching the peak, the accuracies decrease dramatically. This phenomenon verifies that moderate collaboration among dimensions offers a significant contribution to improving the discriminating power of DLPP, while excessive collaboration degrades the model (Equation 6) into merely minimizing the $\ell_2$-norm of the projections, which is meaningless. Another phenomenon is that larger databases seem to benefit more from the collaboration of dimensions. This is very desirable for practical applications, where the number of samples is always very large.
According to the results of these experiments, we fix $\eta$ to one value for the experiments using the ORL database and the first subset of the LFWA database, and to another value for the experiments using the AR database and the second subset of the LFWA database.
5 Conclusion
In this paper, we present a novel DLPP algorithm named Collaborative Discriminant Locality Preserving Projections (CDLPP) and apply it to face recognition. In this algorithm, we use the between-class scatter matrix to replace the original denominator of DLPP to guarantee the global optimum of class scattering. Motivated by the idea of collaborative representation, an $\ell_2$-norm constraint is imposed on the projections as a collaboration constraint to improve the quality of the bases. Three popular face databases, namely the ORL, AR and LFWA databases, are employed for testing the proposed algorithms. CDLPP outperforms all the compared state-of-the-art face recognition approaches. Our future work may focus on utilizing the collaboration of dimensions to solve feature selection and image segmentation tasks.
Acknowledgement
This work has been supported by the Fundamental Research Funds for the Central Universities (No. CDJXS11181162 and CDJZR12098801). The authors would like to thank Dr. Lin Zhong and Dr. Amr Bakry for their useful suggestions.
References
 [1] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of cognitive neuroscience, 3(1):71–86, 1991.
 [2] Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse principal component analysis. Journal of computational and graphical statistics, 15(2):265–286, 2006.
 [3] Ron Zass and Amnon Shashua. Nonnegative sparse PCA. Advances in Neural Information Processing Systems (NIPS), 19:1561, 2007.
 [4] Peter N. Belhumeur, Joao P. Hespanha, and David J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
 [5] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788–791, 1999.
 [6] Deng Cai, Xiaofei He, Jiawei Han, and Thomas S. Huang. Graph regularized nonnegative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1548–1560, 2011.

 [7] Marian Stewart Bartlett, Javier R Movellan, and Terrence J Sejnowski. Face recognition by independent component analysis. IEEE Transactions on Neural Networks, 13(6):1450–1464, 2002.
 [8] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
 [9] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000.
 [10] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
 [11] Xiaofei He, Shuicheng Yan, Yuxiao Hu, Partha Niyogi, and Hong J Zhang. Face recognition using laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):328–340, 2005.
 [12] Weiwei Yu, Xiaolong Teng, and Chongqing Liu. Face recognition using discriminant locality preserving projections. Image and Vision computing, 24(3):239–248, 2006.
 [13] Chong Lu, Xiaodong Liu, and Wanquan Liu. Face recognition based on two dimensional locality preserving projections in frequency domain. Neurocomputing, 98:135–142, 2012.
 [14] Si-Bao Chen, Haifeng Zhao, Min Kong, and Bin Luo. 2DLPP: A two-dimensional extension of locality preserving projections. Neurocomputing, 70(4–6):912–921, 2007.
 [15] Deng Cai, Xiaofei He, Jiawei Han, and HongJiang Zhang. Orthogonal laplacianfaces for face recognition. IEEE Transactions on Image Processing, 15(11):3608–3614, 2006.
 [16] Jiwen Lu and YapPeng Tan. Regularized locality preserving projections and its extensions for face recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 40(3):958–963, 2010.
 [17] Lishan Qiao, Songcan Chen, and Xiaoyang Tan. Sparsity preserving projections with applications to face recognition. Pattern Recognition, 43(1):331–341, 2010.
 [18] Lei Zhang, Meng Yang, and Xiangchu Feng. Sparse representation or collaborative representation: Which helps face recognition? In International Conference on Computer Vision (ICCV), pages 471–478, 2011.
 [19] Meng Yang, Lei Zhang, David Zhang, and Shenlong Wang. Relaxed collaborative representation for pattern classification. In Computer Vision and Pattern Recognition (CVPR), pages 2224–2231, 2012.
 [20] Imran Naseem, Roberto Togneri, and Mohammed Bennamoun. Linear regression for face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11):2106–2112, 2010.
 [21] John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.
 [22] Ron Rubinstein, Alfred M Bruckstein, and Michael Elad. Dictionaries for sparse representation modeling. Proceedings of the IEEE, 98(6):1045–1057, 2010.
 [23] Peng Ma, Dan Yang, Yongxin Ge, Xiaohong Zhang, Ying Qu, Sheng Huang, and Jiwen Lu. Robust face recognition via gradientbased sparse representation. Journal of Electronic Imaging, 22(1):013018–013018, 2013.
 [24] The Yale face database. http://cvc.yale.edu/projects/yalefaces/yalefaces.html.
 [25] Aleix Martínez and Robert Benavente. The AR face database. CVC Technical Report #24, June 1998.
 [26] Ferdinando S Samaria and Andy C Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pages 138–142, 1994.
 [27] Lior Wolf, Tal Hassner, and Yaniv Taigman. Similarity scores based on background samples. In Asian Conference on Computer Vision (ACCV), pages 88–97, 2009.
 [28] Timo Ahonen, Abdenour Hadid, and Matti Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
 [29] Xiaofei He, Deng Cai, Shuicheng Yan, and Hong-Jiang Zhang. Neighborhood preserving embedding. In International Conference on Computer Vision (ICCV), volume 2, pages 1208–1213, 2005.
 [30] Deng Cai. The codes of PCA, LDA, LPP and NPE: http://www.cad.zju.edu.cn/home/dengcai/data/dimensionreduction.html.