1 Introduction
Face recognition is one of the most challenging tasks in computer vision and has attracted vivid enthusiasm over the past decades. Owing to variations in facial expression, pose, and illumination, extracting effective and discriminative features from faces is a difficult problem. Many classical approaches have been proposed to address it; among them, the most popular family may be the appearance-based approach. These methods can be divided into two categories: nonlinear models and linear models. Representative linear methods include Principal Component Analysis (PCA) [pca], Linear Discriminant Analysis (LDA) [lda], Nonnegative Matrix Factorization (NMF) [nmf], Locality Preserving Projection (LPP) [lpp; lap], and their two-dimensional extensions [2dpca; 2dlda; 2dlpp; 2ddlpp], while representative nonlinear methods include Isomap [isomap], Locally Linear Embedding (LLE) [lle], and kernel methods [kernel; kernels; klda]. Generally speaking, linear approaches are simpler and faster than nonlinear approaches. In the following paragraphs, we briefly review the representative linear approaches.
Among the linear appearance-based methods, Principal Component Analysis (PCA) [pca] may be the most classical unsupervised method. It seeks a linear projection that preserves the global variability of the data by maximizing the variance of the projected samples. PCA is an efficient method for dimensionality reduction; however, it ignores the class-specific information that is useful for classification. To overcome this limitation, many researchers have developed algorithms that combine class-specific information with PCA [cipca; dbs].
As another classical linear method, Linear Discriminant Analysis (LDA) [lda] discriminates the data by simultaneously maximizing the between-class scatter and minimizing the within-class scatter. Thus, homogeneous data points are projected close together while inhomogeneous data points are projected far apart. LDA is generally considered superior to PCA for classification. However, LDA suffers from the Small Sample Size (SSS) problem. To alleviate it, extensive approaches have been proposed in the literature [ssslda; mmm; mms], but the fundamental limitation remains unsolved in theory.
Nonnegative Matrix Factorization (NMF) [nmf] is a more recent approach for extracting a part-based linear representation, which has received great attention and has been widely applied to face recognition. NMF decomposes a large nonnegative matrix into the product of two small nonnegative matrices and produces a part-based representation, since only additive combinations of basis vectors are allowed. Previous studies have indicated that the mechanism of NMF is very similar to the visual perception mechanism of the human brain [nmf]. NMF-based methods have developed rapidly [tnmf; onmf; nmff; gnmf]. However, training them is computationally expensive compared with other linear methods.
Some studies show that face images reside on a nonlinear submanifold [isomap; lle]. These studies have inspired many manifold learning methods for face recognition and other computer vision tasks [lpp; lap; lape; nmf; ge; rlpda; mmd]. Among these algorithms, Locality Preserving Projection (LPP) may be the most influential for face recognition and dimensionality reduction. LPP learns a projection by constructing an adjacency weight matrix of the data that preserves local manifold structures. Since its objective function is linear, it can be computed efficiently. Although LPP has been applied in many domains and achieves promising results, there is still room to improve its classification performance. In the recent decade, many researchers have tried to improve LPP from different aspects, such as Discriminant LPP (DLPP) [dlpp], Orthogonal LPP (OLPP) [olpp], Parametric Regularized LPP (PRLPP) [rlpp], and their extensions [dlppm; 2ddlpp; odlpp; udlpp]. More specifically, DLPP follows an approach similar to LDA and emphasizes simultaneously preserving the local manifold structures of homogeneous data and scattering the inhomogeneous data; like LDA, however, DLPP also suffers from the SSS problem. PRLPP regularizes the LPP space in a parametric manner and extracts useful discriminant information from the whole feature space rather than from a reduced PCA subspace; this parametric regularization can also be applied to other LPP-based methods, such as PRDLPP and PRODLPP [rlpp]. Similarly, OLPP adds an orthogonality constraint to the LPP projection, which can also be flexibly combined with other LPP methods. Different from these methods, we try to improve LPP starting from its essential idea.
LPP assumes that there exist many low-dimensional local manifolds of samples residing in the original data space. It intends to learn a subspace that preserves these local manifold structures by constructing an adjacency weight matrix encoding the geometric information of the data. This adjacency weight matrix, regarded as the graph Laplacian in spectral graph theory [spectral], is a discrete approximation to the Laplace-Beltrami operator on the manifold [thesis]. Thus, the construction of this matrix directly determines which local manifold structures are extracted. For face recognition, LPP is supervised and the entries of the weight matrix are determined only by the distances between pairs of homogeneous points. Therefore, LPP can only extract the local manifolds described by within-class variances such as expressions and poses. Apparently, it ignores more global variances between different persons, such as facial shapes, genders, races, and face configurations, since these factors are almost invariant for the same person and correspond to the underlying labels. We believe this kind of information can benefit face recognition, and it is natural to assume that there also exists another kind of local manifold structure related to these underlying person-invariant factors. These factors are much more global than the within-class factors considered by LPP, and the original space is the hybrid result of these two kinds of manifolds. Therefore, it is meaningful to learn a subspace that preserves both kinds of manifold structures. In this paper, we propose a novel method named
Globality-Locality Preserving Projection (GLPP) to address this issue. Our main contributions include:
We propose an LPP-based method that preserves the manifold structures related to both the within-class variances and the person-invariant variances, attaining a more effective subspace with stronger classification ability than LPP in both controlled and uncontrolled environments.

We formulate a 2D version of GLPP as an example to show how other techniques can be combined with GLPP to develop new GLPP-based algorithms.
The rest of the paper is organized as follows: Section 2 reviews LPP and DLPP; our motivation, the GLPP algorithm, and its 2D extension are described in Section 3; in Section 4, several experiments are designed to demonstrate the robustness and effectiveness of GLPP; finally, the conclusion is given in Section 5.
2 Related Work
2.1 Locality Preserving Projections (LPP)
LPP is a linear method for face recognition and dimensionality reduction proposed by He et al. [lpp; lap]. In this section, we briefly describe the LPP model.
Given the sample set $X = [x_{1}, x_{2}, \ldots, x_{n}] \in \mathbb{R}^{d \times n}$, LPP aims at learning a projection vector $a$ that maps the original sample space into a subspace which well preserves the local manifold structures of the data. This optimal projection can be found by minimizing the following objective function
(1) $\min_{a} \sum_{i,j} \left(a^{T}x_{i} - a^{T}x_{j}\right)^{2} W_{ij}$
where $W$ is an adjacency weight matrix whose entry $W_{ij}$ measures the closeness of the two points $x_{i}$ and $x_{j}$. With this choice of $W$, the objective function incurs a heavy penalty if neighboring points $x_{i}$ and $x_{j}$ are mapped far apart. Therefore, the projection ensures that if samples $x_{i}$ and $x_{j}$ are close, then their projections $a^{T}x_{i}$ and $a^{T}x_{j}$ are close as well. When LPP is used for recognition, the objective function is constructed in a supervised way and can be written as follows
(2) $\min_{a} \sum_{c=1}^{C} \sum_{i,j} \left(a^{T}x^{c}_{i} - a^{T}x^{c}_{j}\right)^{2} W^{c}_{ij}$
where $W^{c}$ is the adjacency matrix of the samples belonging to class $c$. Generally speaking, there are three possible ways to define the adjacency matrices $W$ or $W^{c}$:

Dot-product weighting: if nodes $i$ and $j$ are connected, put $W_{ij} = x_{i}^{T}x_{j}$. Note that if $\|x\|$ is normalized to 1, this measurement is equivalent to the cosine similarity measure.

Heat kernel weighting: if nodes $i$ and $j$ are connected, put $W_{ij} = e^{-\|x_{i}-x_{j}\|^{2}/t}$. The heat kernel has an intrinsic connection to the Laplace-Beltrami operator on differentiable functions on a manifold [gnmf].

0-1 weighting: put $W_{ij} = 1$ if nodes $i$ and $j$ are connected by an edge, and $W_{ij} = 0$ otherwise. This is the simplest way to assign weights.
Different similarity measurements are suitable for different situations.
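As an illustration, the three weighting schemes above can be sketched in NumPy as follows. This is a hedged sketch rather than the authors' code: the function name is ours, and the neighborhood selection (k-NN or class membership) that decides which nodes count as "connected" is omitted for brevity, so every pair is treated as connected.

```python
import numpy as np

def adjacency_weights(X, mode="heat", t=1.0):
    """Build a dense adjacency weight matrix W for the samples in X
    (one sample per column), using one of the three schemes above."""
    n = X.shape[1]
    if mode == "dot":
        # Dot-product weighting; equals cosine similarity for unit-norm columns.
        W = X.T @ X
    elif mode == "heat":
        # Heat-kernel weighting: W_ij = exp(-||x_i - x_j||^2 / t).
        sq = np.sum(X ** 2, axis=0)
        d2 = sq[:, None] + sq[None, :] - 2 * (X.T @ X)
        W = np.exp(-np.maximum(d2, 0) / t)
    elif mode == "binary":
        # 0-1 weighting: all pairs "connected" in this simplified sketch.
        W = np.ones((n, n))
    else:
        raise ValueError(mode)
    np.fill_diagonal(W, 0)  # no self-edges
    return W
```

In practice the choice of scheme (and of the heat-kernel parameter $t$) matters, which is exactly the point of the sentence below.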
The objective function of LPP can be derived as:
(3) $\frac{1}{2}\sum_{i,j}\left(a^{T}x_{i}-a^{T}x_{j}\right)^{2}W_{ij} = a^{T}X(D-W)X^{T}a = a^{T}XLX^{T}a$
where $D$ is a diagonal matrix whose entries are the column (or row, since $W$ is symmetric) sums of $W$, $D_{ii} = \sum_{j} W_{ij}$. Thus, $L = D - W$ is a Laplacian matrix. In the supervised case, the matrix $W$ ($D$ can be denoted similarly) takes the block-diagonal form
(4) $W = \mathrm{diag}\left(W^{1}, W^{2}, \ldots, W^{C}\right)$
where $W^{c}$ denotes the adjacency matrix of the $c$-th class. Furthermore, LPP imposes the following constraint
(5) $a^{T}XDX^{T}a = 1$
Finally, this problem is reduced to finding:
(6) $\arg\min_{a:\; a^{T}XDX^{T}a = 1} \; a^{T}XLX^{T}a$
The linear projection $a$ that minimizes the objective function is given by the minimum-eigenvalue solutions to the generalized eigenvalue problem:
(7) $XLX^{T}a = \lambda XDX^{T}a$
Since the matrices $XLX^{T}$ and $XDX^{T}$ are both symmetric and positive semidefinite, the projection that minimizes the objective function can be obtained from the minimum-eigenvalue solutions of this generalized eigenvalue problem. Let the column vectors $a_{1}, a_{2}, \ldots, a_{k}$ be the solutions of Equation 7 corresponding to the $k$ smallest eigenvalues $\lambda_{1} \leq \lambda_{2} \leq \cdots \leq \lambda_{k}$. The embedding is then as follows
(8) $x_{i} \rightarrow y_{i} = A^{T}x_{i}, \quad A = [a_{1}, a_{2}, \ldots, a_{k}]$
where $y_{i}$ is a $k$-dimensional projected feature vector and $A$ is the optimized projection matrix. After obtaining $A$, the samples can be projected via $y = A^{T}x$ to obtain a much lower-dimensional representation.
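Putting the pieces together, the whole LPP pipeline — Laplacian construction, the generalized eigenproblem of Equation 7, and the embedding of Equation 8 — can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name and the small ridge term (which keeps the right-hand matrix positive definite when it is rank-deficient) are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, W, k):
    """Given samples X (one column per sample) and an adjacency weight
    matrix W, solve X L X^T a = lam X D X^T a and return the projection
    matrix A whose columns are the k smallest-eigenvalue eigenvectors."""
    D = np.diag(W.sum(axis=1))
    L = D - W                        # graph Laplacian
    P = X @ L @ X.T                  # left-hand matrix of Eq. 7
    Q = X @ D @ X.T                  # right-hand matrix of Eq. 7
    Q += 1e-8 * np.eye(Q.shape[0])   # assumed ridge: keeps Q positive definite
    _, vecs = eigh(P, Q)             # eigenvalues returned in ascending order
    return vecs[:, :k]               # A = [a_1, ..., a_k]

# Embedding (Eq. 8): y_i = A.T @ x_i for every sample x_i.
```

`scipy.linalg.eigh` solves the symmetric-definite generalized problem directly, which is why the ridge on `Q` matters.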
2.2 Discriminant Locality Preserving Projections (DLPP)
DLPP [dlpp] is an extension of LPP that borrows the idea of LDA to incorporate discriminative information. It aims at simultaneously preserving the local manifold structures of the homogeneous data and scattering the adjacent classes. The overall objective of DLPP is similar to LDA's objective function:
(9) $\min_{a} \; \frac{\sum_{c=1}^{C}\sum_{i,j}\left(a^{T}x^{c}_{i}-a^{T}x^{c}_{j}\right)^{2}W^{c}_{ij}}{\sum_{p,q}\left(a^{T}m_{p}-a^{T}m_{q}\right)^{2}B_{pq}}$
where $m_{c}$ denotes the mean sample of the $c$-th class and the matrix $B$ denotes the adjacency weight matrix of the mean samples. The other notations are defined as in the previous section. The numerator is exactly the original objective function of LPP, and the denominator is the mean-sample version of LPP's objective function. Based on Equations 3 and 2, the numerator of DLPP's objective function can be represented as
(10) $\sum_{c=1}^{C}\sum_{i,j}\left(a^{T}x^{c}_{i}-a^{T}x^{c}_{j}\right)^{2}W^{c}_{ij} = 2a^{T}XLX^{T}a$
Similarly, the denominator can be reduced as follows
(11) $\sum_{p,q}\left(a^{T}m_{p}-a^{T}m_{q}\right)^{2}B_{pq} = 2a^{T}M(E-B)M^{T}a = 2a^{T}ML_{M}M^{T}a$
where the matrix $M = [m_{1}, m_{2}, \ldots, m_{C}]$ denotes the mean sample space, $m_{c}$ being the mean sample of the $c$-th class, and $E$ is the diagonal matrix with $E_{pp} = \sum_{q} B_{pq}$. Similar to the matrix $L$, $L_{M} = E - B$ is also a Laplacian matrix, weighting every pair of mean samples. Substituting Equations 10 and 11 into Equation 9, the objective function of DLPP can be written as
(12) $\min_{a} \; \frac{a^{T}XLX^{T}a}{a^{T}ML_{M}M^{T}a}$
Thus, the DLPP subspace, which is spanned by a set of projection vectors $A = [a_{1}, a_{2}, \ldots, a_{k}]$, can be obtained by solving the programming problem:
(13) $A = \arg\min_{a} \; \frac{a^{T}XLX^{T}a}{a^{T}ML_{M}M^{T}a}$
As with LDA and LPP, this problem can be transformed into a generalized eigenvalue problem, from which the set of projection vectors is obtained.
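A hedged sketch of this last step: treating the ratio of Equation 12 as a generalized eigenproblem, analogous to the standard ratio treatment in LDA. The function name and the ridge on the denominator matrix are ours, not from the DLPP paper.

```python
import numpy as np
from scipy.linalg import eigh

def dlpp(X, L, M, L_M, k):
    """Minimize a^T X L X^T a / a^T M L_M M^T a (Eq. 12) by solving
    X L X^T a = lam (M L_M M^T) a and keeping the eigenvectors of the
    k smallest eigenvalues."""
    P = X @ L @ X.T
    Q = M @ L_M @ M.T
    Q += 1e-8 * np.eye(Q.shape[0])  # assumed ridge: keeps Q invertible
    _, vecs = eigh(P, Q)            # ascending eigenvalues
    return vecs[:, :k]
```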
3 Algorithm of Globality-Locality Preserving Projections
3.1 Motivation
LPP is known as an efficient method for extracting local manifold structure, and many studies have shown that it performs well on recognition problems. In this section, we focus on further improving its recognition performance. Before introducing our improvement of LPP, several important questions should be answered:

How does LPP preserve the local manifold structure?

What kinds of local manifold structures does LPP extract in face recognition, and to what kinds of information are they related?

Do any additional manifold structures reside in the sample space, and can they benefit recognition?
LPP extracts local manifold structures by assigning weights based on the distance between two points: points close to each other receive larger weights than points far apart. In other words, this strategy imposes a very high penalty when relatively close points are projected far apart in the learned subspace, and it originates from the manifold assumption [lape]. This strategy thus allows LPP to keep the relative geometric distances between adjacent points in the learned subspace, and this geometric relationship of locally adjacent points is the so-called manifold (geometric) structure. LPP weights the data through the adjacency weight matrix, so both the capability and the category of local manifold structure extraction are directly determined by this matrix. For face recognition, LPP is supervised and the entries (weights) of the adjacency weight matrix are determined by the distances between pairs of homogeneous samples. Therefore, LPP only extracts the local manifold structures related to the within-class variances (such as expressions and poses). More specifically, taking the heat kernel weighting as an example, the weight between homogeneous points $x^{c}_{i}$ and $x^{c}_{j}$ is computed as $W^{c}_{ij} = e^{-\|x^{c}_{i}-x^{c}_{j}\|^{2}/t}$, where $c$ denotes the class label and $t$ is a positive constant. The weight is thus determined only by the term $x^{c}_{i}-x^{c}_{j}$. Let $m_{c}$ be the mean sample of class $c$; then this term can be rewritten as $(x^{c}_{i}-m_{c})-(x^{c}_{j}-m_{c})$, which exactly measures the within-class variance. From this inference, the mean sample is not considered in conventional LPP. Yet the mean sample contains much meaningful information that is constant across the homogeneous samples; for example, facial shape, the shapes of facial components, and skin color, which are related to underlying labels such as gender, race, and age, are all almost invariant for the same person. These person-invariant factors, which conventional LPP cannot handle, can benefit recognition.
We believe there are manifolds corresponding to these factors, because natural clusters exist at the class level. For example, Asians and Caucasians can easily be distinguished by skin color and the shapes of facial components; intuitively, Asian faces and Caucasian faces fall into different clusters in the whole face space. As shown in Figure 1, the mean faces with glasses clearly cluster together in a 2D subspace learned by minimizing the globality preserving objective term of GLPP. This phenomenon also supports our assumption.
We intend to extract the manifold structure corresponding to the person-invariant factors by adding an additional objective term that constrains the LPP model to take them into consideration. This additional objective term is based on the mean sample of each class, since only the mean part is invariant to the specific person. We follow the same rule to construct its adjacency weight matrix, which weights the distance between each pair of mean samples. Its matrix form can be expressed as follows:
(14) $J_{G}(a) = \frac{1}{2}\sum_{p,q}\left(a^{T}m_{p}-a^{T}m_{q}\right)^{2}B_{pq} = a^{T}ML_{M}M^{T}a$
We name this objective term the globality preserving objective term and its matrix form the globality preserving matrix, to distinguish it from the locality preserving objective term (the original objective function of supervised LPP; for convenience, we call its matrix form the locality preserving matrix in this paper). One key point must be clarified: the globality preserving objective term actually extracts local manifold structures at the class level, and it degenerates to unsupervised LPP when each subject has only one sample. We use the word globality because the local manifold structures it extracts are much more global than the within-class ones extracted by the locality preserving objective term. The reason why the manifolds extracted by either term are both local is that the weighting mechanism of LPP is nonlinear (see Figure 2): the weights drop sharply as the distances (either cosine or Euclidean) increase. Therefore, remote points cannot effectively affect the subspace learning, while the points in the local manifold play the leading role.
The globality preserving objective term is equivalent to the denominator of DLPP. The basic idea of DLPP is to scatter nearby classes by maximizing this term, which seems plausible. But can it really provide a good scattering of classes? According to the interpretation in the previous paragraph, this term must be locality-focused because of the weighting mechanism. Maximizing it can indeed project two nearby classes far apart, but it may also project two remote classes much closer together. We also conducted experiments on the Yale database to illustrate the class scattering ability of DLPP. Figure 3 compares the class scattering abilities of DLPP and LDA: the right subfigure (red points) shows the class scattering obtained by maximizing the denominator of DLPP (the globality preserving matrix), and the left one shows the scattering obtained by maximizing the denominator of LDA (the between-class scatter matrix). Clearly, the scattering performance of DLPP is not good in comparison with classical LDA. In brief, the class scattering ability of DLPP is questionable, and DLPP breaks the natural manifold structures of the person-invariant factors at the class level.
3.2 Globality-Locality Preserving Projections (GLPP)
We aim at improving the recognition performance of LPP by extracting more meaningful information from the data. An additional objective term, the globality preserving objective term, is added to the original objective function of LPP to preserve the local manifold structures corresponding to the person-invariant factors at the class level. Accordingly, this new LPP method is named Globality-Locality Preserving Projection (GLPP).
Before formally introducing GLPP, we first define some notation. As in the previous section, the matrix $X = [x_{1}, x_{2}, \ldots, x_{n}]$ denotes the original sample space and the vector $l$ the class label library. The matrix $X^{c}$ denotes the subset belonging to class $c$, and $M = [m_{1}, m_{2}, \ldots, m_{C}]$ denotes the mean sample space, where $m_{c}$ is the mean sample of class $c$. The projected mean sample space is obtained by projecting $M$ into the optimal subspace; similarly, $Y$ denotes the projected sample space. Our job is to find a projection matrix $A$ which maps the $d$-dimensional original sample space to a $k$-dimensional subspace in which both the global and local geometric structures are well preserved.
Here we give the objective function of GLPP:
(15) $\min_{a} \; J_{G}(a) + \alpha J_{L}(a)$
where $J_{G}(a)$ denotes the globality preserving objective term and $J_{L}(a)$ denotes the locality preserving objective term (the original objective function of LPP). The parameter $\alpha$ balances $J_{G}$ and $J_{L}$: a greater value of $\alpha$ makes the model pay more attention to preserving the local manifold structures. We set $\alpha$ to a large value based on the intuitive and natural assumption that the between-class variance is much greater than the within-class variance in classification problems. The two terms are defined as follows
(16) $J_{G}(a) = \frac{1}{2}\sum_{p,q}\left(a^{T}m_{p}-a^{T}m_{q}\right)^{2}B_{pq}$
(17) $J_{L}(a) = \frac{1}{2}\sum_{c=1}^{C}\sum_{i,j}\left(a^{T}x^{c}_{i}-a^{T}x^{c}_{j}\right)^{2}W^{c}_{ij}$
Substituting these two equations into Equation 15, the objective can be formulated as
(18) $\min_{a} \; \frac{1}{2}\sum_{p,q}\left(a^{T}m_{p}-a^{T}m_{q}\right)^{2}B_{pq} + \frac{\alpha}{2}\sum_{c=1}^{C}\sum_{i,j}\left(a^{T}x^{c}_{i}-a^{T}x^{c}_{j}\right)^{2}W^{c}_{ij}$
where the matrices $B$ and $W^{c}$ are the adjacency weight matrices of the objective terms $J_{G}$ and $J_{L}$ respectively. In this paper, we choose the dot-product weighting to construct each adjacency matrix.
Finally, Equation 18 can be manipulated by some simple algebraic steps into:
(19) $\min_{a} \; a^{T}\left(ML_{M}M^{T} + \alpha XLX^{T}\right)a$
where $L_{M}$ and $L$ are the Laplacian matrices and $ML_{M}M^{T} + \alpha XLX^{T}$ is a positive semidefinite matrix. Therefore, the problem
(20) $\arg\min_{a:\; a^{T}XDX^{T}a = 1} \; a^{T}\left(ML_{M}M^{T} + \alpha XLX^{T}\right)a$
can be transformed into a generalized eigenvalue problem (its solution follows the solving process of LPP in the previous section), denoted as
(21) $\left(ML_{M}M^{T} + \alpha XLX^{T}\right)a = \lambda XDX^{T}a$
The first $k$ best projections $a_{1}, a_{2}, \ldots, a_{k}$ correspond to the $k$ smallest nonzero eigenvalues $\lambda_{1} \leq \cdots \leq \lambda_{k}$. Thus we finally obtain the GLPP projection matrix $A = [a_{1}, a_{2}, \ldots, a_{k}]$. We can then project the data into the optimal subspace via the GLPP projection and employ different classifiers for classification.
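Putting Equations 15 through 21 together, GLPP training can be sketched as below. This is our illustrative reading, not the authors' code: heat-kernel weights stand in for the dot-product weights used in the paper, and the constraint matrix and ridge term are assumptions carried over from the LPP solving process.

```python
import numpy as np
from scipy.linalg import eigh

def glpp(X, labels, k, alpha=1e4, t=1.0):
    """Sketch of GLPP training. Locality: supervised LPP Laplacian over
    within-class pairs. Globality: Laplacian over the class means. Both
    are combined as in Eq. 19 and solved as one generalized eigenproblem
    (Eq. 21) under an assumed LPP-style constraint."""
    d, n = X.shape
    classes = np.unique(labels)
    # M = [m_1, ..., m_C]: one mean sample per class.
    M = np.column_stack([X[:, labels == c].mean(axis=1) for c in classes])

    def heat(Z):
        # Heat-kernel weights between the columns of Z.
        sq = np.sum(Z ** 2, axis=0)
        d2 = sq[:, None] + sq[None, :] - 2 * (Z.T @ Z)
        W = np.exp(-np.maximum(d2, 0) / t)
        np.fill_diagonal(W, 0)
        return W

    # Locality term: weights are zeroed between different classes.
    W = heat(X) * (labels[:, None] == labels[None, :])
    L = np.diag(W.sum(axis=1)) - W
    # Globality term: weights between the class means.
    B = heat(M)
    L_M = np.diag(B.sum(axis=1)) - B

    G = M @ L_M @ M.T + alpha * (X @ L @ X.T)              # Eq. 19
    C = X @ np.diag(W.sum(axis=1)) @ X.T + 1e-8 * np.eye(d)
    _, vecs = eigh(G, C)                                   # Eq. 21
    return vecs[:, :k]                                     # A = [a_1, ..., a_k]
```

As with LPP, classification then happens on the projected features `A.T @ X`.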
3.3 A Two-Dimensional Extension of GLPP (2DGLPP)
In this section, we present an algorithm termed Two-Dimensional Globality-Locality Preserving Projection (2DGLPP) as an example of how other techniques can be combined with GLPP to develop new algorithms.
2DGLPP treats the input data as an image matrix instead of a vector. Consider a set of $n$ images $A_{1}, A_{2}, \ldots, A_{n}$ taken from an $m \times m'$-dimensional image space. For dimensionality reduction, we design a set of linear projections which map the original image matrix into a lower-dimensional feature space:
(22) $y_{i} = A_{i}^{T}a$
where $y_{i}$ is the projected feature vector of image $A_{i}$ and $a$ is a linear projection vector.
As in GLPP, we can compute the between-class and within-class adjacency matrices. However, we cannot employ the resulting Laplacian matrices directly, since the input data are two-dimensional. To solve this problem, the Laplacian matrices are transformed as follows
(23) $\tilde{L} = L \otimes I_{m'}, \quad \tilde{L}_{M} = L_{M} \otimes I_{m'}, \quad \tilde{D} = D \otimes I_{m'}$
where $\otimes$ is the Kronecker product of matrices and $I_{m'}$ is the $m' \times m'$ identity matrix. The objective function of 2DGLPP can then be expressed as follows
(24) $\min_{a} \; a^{T}\left(V\tilde{L}_{M}V^{T} + \alpha U\tilde{L}U^{T}\right)a$
where $U = [A_{1}, A_{2}, \ldots, A_{n}]$ is a matrix generated by arranging all the image matrices side by side (grouped by class) and $V = [\bar{A}_{1}, \ldots, \bar{A}_{C}]$ is a matrix generated by arranging each class's mean image matrix in the same way. As with GLPP, the problem
(25) $\arg\min_{a:\; a^{T}U\tilde{D}U^{T}a = 1} \; a^{T}\left(V\tilde{L}_{M}V^{T} + \alpha U\tilde{L}U^{T}\right)a$
can finally be solved as a generalized eigenvalue problem.
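The Kronecker lift of Equation 23 can be checked numerically: expanding an $n \times n$ Laplacian with an identity block makes it act column-block-wise on the concatenated image matrix. This is our reading of the construction, and the helper name is ours.

```python
import numpy as np

def lift_laplacian(L, m):
    """Expand an n x n graph Laplacian so it acts block-wise on a matrix
    whose columns hold n images of m columns each (cf. Eq. 23)."""
    return np.kron(L, np.eye(m))
```

Right-multiplying the concatenated image matrix $U = [A_{1}, \ldots, A_{n}]$ by this lifted Laplacian combines whole image blocks with the weights of $L$, which is exactly what the 2D objective requires.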
4 Experiments
We evaluate the performance of the proposed GLPP and its 2D extension on four popular face databases, covering both controlled and uncontrolled environments. The databases with controlled environments are ORL, FERET, and Yale, while LFWa is a database collected in an uncontrolled environment.
4.1 Experimental Setting
4.1.1 Datasets

The ORL database contains 400 images of 40 subjects (Figure 4) [orl]. Each subject has ten images acquired at different times. In this database, the subjects have varying facial expressions and facial details, and the images are taken with a tolerance for some tilting and rotation of the face. For simplicity, we aligned and cropped the face images to 32×32 pixels.

The LFWa database is an automatically aligned grayscale version [lfwa] of the LFW database [lfw], which is aimed at studying the problem of unconstrained face recognition (Figure 4). This database is considered one of the most challenging, since it contains 13233 images with great variations in lighting, pose, age, and even image quality. We cropped these images to the central 120×120 pixels and resized them to 64×64 pixels.
4.1.2 Compared methods and their source codes
We compare our method with state-of-the-art methods including LDA, PCA, LPP, and DLPP. The source codes were downloaded from Prof. Deng Cai's homepage [code].
4.2 Face Recognition
We conducted several experiments to evaluate GLPP and compare it with PCA, LDA, LPP, and DLPP in terms of recognition accuracy in controlled and uncontrolled environments. The 2D extension of GLPP is also briefly evaluated in this section by comparison with 2DPCA [2dpca], 2DLDA [2dlda], and 2DLPP [2dlpp]. We applied the nearest neighbor classifier in Euclidean space to perform recognition, and dot-product weighting to construct the adjacency matrices of the LPP-based methods. The recognition accuracy reported in this section is the top recognition rate (the number of correctly recognized testing samples divided by the total number of testing samples). In these experiments, four cross-validation schemes, chosen from leave-one-out, single-sample, two-fold, five-fold, and three-fold according to the number of samples per subject, are applied to each database to evaluate the performance of GLPP.
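As a concrete reading of this evaluation protocol, the nearest-neighbour classification and the top recognition rate can be sketched as follows (the function and variable names are ours):

```python
import numpy as np

def top_recognition_rate(train_Y, train_labels, test_Y, test_labels):
    """Nearest-neighbour classification in Euclidean space on projected
    features (one sample per column); returns correct / total."""
    correct = 0
    for j in range(test_Y.shape[1]):
        # Distance from test sample j to every training sample.
        d = np.linalg.norm(train_Y - test_Y[:, [j]], axis=0)
        if train_labels[np.argmin(d)] == test_labels[j]:
            correct += 1
    return correct / test_Y.shape[1]
```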
4.2.1 Recognition Performance of GLPP in Controlled Environment
Three databases, ORL, Yale, and FERET, are employed in this experiment. The parameters of DLPP follow the experimental section of [dlpp], and the parameter $\alpha$ of GLPP is fixed to 10000; how to learn $\alpha$ is introduced in subsection 4.3. Before applying the face recognition methods to the databases, PCA is utilized to reduce the redundant information of the data, preserving only the dimensions corresponding to nonzero eigenvalues (PCA ratio = 1). Average Recognition Accuracy (ARA) and Standard Deviation (STD) are used to measure recognition performance and robustness respectively.
Methods  Cross-Validation Schemes: Recognition Rate (ARA±STD)
  Leave-one-out  Five-fold  Two-fold  Single sample
PCA  94.25±3.1%  91.25±3.2%  82.25±0.4%  51.19±3.2%
LDA  99.00±1.7%  98.00±1.9%  93.25±1.8%  47.58±4.4%
LPP  98.00±2.6%  96.75±1.4%  90.75±3.9%  54.58±4.2%
DLPP  98.25±2.1%  97.25±2.1%  93.75±3.2%  51.19±3.2%
GLPP  99.50±1.1%  98.75±1.5%  96.00±1.4%  51.86±2.5%
Methods  Cross-Validation Schemes: Recognition Rate (ARA±STD)
  Leave-one-out  Five-fold  Two-fold  Single sample
PCA  89.79±18.5%  89.33±11.9%  88.67±2.8%  65.70±19.4%
LDA  96.97±6.2%  98.00±3.0%  95.33±4.7%  67.27±17.0%
LPP  99.39±2.0%  99.33±1.5%  96.67±4.1%  66.67±16.7%
DLPP  99.39±2.0%  98.67±1.8%  97.33±3.8%  65.70±19.4%
GLPP  100.00±0.0%  100.00±0.0%  98.67±1.9%  66.72±17.5%
Methods  Cross-Validation Schemes: Recognition Rate (ARA±STD)
  Leave-one-out  Three-fold  Two-fold  Single sample
PCA  87.73±6.4%  85.18±8.8%  84.03±1.0%  52.00±11.1%
LDA  94.44±4.2%  92.36±3.2%  90.74±3.3%  48.19±10.9%
LPP  94.44±3.9%  92.82±2.8%  92.82±2.3%  52.08±10.1%
DLPP  95.14±4.0%  92.36±4.2%  90.51±4.4%  51.99±11.1%
GLPP  96.30±4.4%  95.14±3.0%  94.44±3.9%  53.84±10.0%
Tables 1, 2, and 3 tabulate the recognition performance of the different methods on the ORL, Yale, and FERET datasets respectively. The proposed GLPP algorithm outperforms the other methods under different numbers of training samples. Even in the small-sample-size case, our proposed method still ranks first or second among these five classical algorithms.
4.2.2 Recognition Performance of GLPP in Uncontrolled Environment
In the LFWa database, the number of samples per subject varies. The LFWa database is divided into two subsets: each subject in the first subset (1100 images of 147 subjects) has 6–10 samples, while each subject in the second subset (3658 images of 127 subjects) has more than 11 samples. We choose the first 5 samples per subject in the first subset as training samples and the rest as testing samples. Similarly, the first 10 samples of each subject in the second subset are used as training samples and the remaining as testing samples. The parameter settings are the same as in the controlled-environment experiments. Compared to the databases with controlled environments, the images in the LFW database are more challenging; therefore, 59-code LBP features [lbp] are utilized as the baseline features on the LFWa database in this experiment. The block size of LBP is 16×16 pixels and each block has 50% overlap with adjacent blocks.
According to the observations of Table 4, the experimental results demonstrate that GLPP obtains a significant improvement over the LPP-family algorithms while preserving somewhat more dimensions. This is because GLPP not only preserves the local geometric structures related to the within-class variances, but also preserves the global geometric structures related to the between-class (person-invariant) variances; the extra dimensions are the key to the improved performance. Furthermore, GLPP clearly gains more over DLPP and LPP here than in the controlled-environment experiments. This is because the LFWa dataset contains more subjects, which helps GLPP to more accurately preserve the local manifold structures of the person-invariant factors and partly verifies the existence of the manifolds related to them. Besides, the results also show that DLPP and LDA do not perform as well in the uncontrolled environment as in the controlled environment, which indicates that DLPP may inherit the characteristics of LDA and suffer from similar problems.
Subset  Top Recognition Rate (Dimension)  

PCA  LDA  LPP  DLPP  GLPP  
First set  27.42%(625)  54.58%(124)  58.29%(144)  52.27%(131)  63.91%(170) 
Second set  35.62%(556)  56.99%(130)  59.45%(136)  55.34%(196)  65.75%(286) 
4.2.3 Recognition Accuracy versus Dimension
This experiment is conducted in the controlled environment and its configuration is the same as in the experiment of subsection 4.2.1. The first four samples of each subject are used for training and the remaining samples for testing. As shown in Figure 5, we plot recognition accuracy against dimension. The results show that LPP (the blue curve) obtains better recognition accuracy in a relatively low-dimensional space, but GLPP (the red curve) soon outperforms LPP as the dimension increases. Moreover, the dimension corresponding to the top recognition rate of GLPP is still low and acceptable for practical applications.
4.2.4 Training Time of GLPP
We examined the training cost of GLPP and compared it with LDA, PCA, and LPP. The experimental hardware configuration is a 2.2 GHz CPU with 2 GB RAM. Table 5 shows the CPU time spent on the training phase by these linear methods using MATLAB. In this experiment, we select five samples of each subject for training. According to the results in Table 5, the proposed GLPP has a training time similar to that of LPP.
Dataset  Methods (Seconds)  

PCA  LDA  LPP  GLPP  
Yale  0.1248  0.1092  0.2652  0.2496 
ORL  0.1404  0.1248  0.2496  0.2496 
FERET  2.0592  1.0452  3.3696  3.6660 
4.2.5 Recognition Performance of 2DGLPP
This subsection briefly describes the experiment comparing 2DGLPP with 2DLDA [2dlda], 2DPCA [2dpca], and 2DLPP [2dlpp]. A linear regression classifier is used for classification, and three cross-validation schemes are applied to the experiment on the Yale database.
The results in Table 6 indicate that 2DGLPP performs better than the other three methods, with a smaller standard deviation.
Schemes  2D linear methods: recognition rate (ARA±STD)
  2DPCA  2DLDA  2DLPP  2DGLPP
Leave-one-out  99.39±2.0%  95.15±10.4%  98.18±3.1%  99.39±2.0%
Five-fold  98.67±1.8%  94.67±6.9%  98.67±1.8%  99.33±1.5%
Two-fold  96.00±3.8%  90.67±5.7%  97.33±1.9%  99.39±2.0%
4.3 Learning the Parameter $\alpha$
The parameter $\alpha$ of GLPP plays an important role in trading off the preservation of the local manifolds related to within-class factors against the preservation of the local manifolds related to person-invariant factors. It is therefore important to find the optimal value of $\alpha$ to maximize the performance of GLPP. To obtain this value, two experiments are conducted to learn the relationship between $\alpha$ and recognition performance.
Figure 6 plots the relationship between dimension and recognition rate under different values of $\alpha$, where the X axis indicates the preserved dimension and the Y axis the recognition rate. Figure 7 illustrates the influence of $\alpha$ on the top recognition rate, where the X axis indicates $\alpha$ and the Y axis the top recognition rate. From these curves we learn that GLPP achieves the best performance when $\alpha$ is greater than 1000 and poor results when it is smaller than 1, which verifies our assumption in section 3.2. Besides, the recognition accuracy is insensitive to $\alpha$ when $\alpha$ is greater than 1000. We therefore suggest assigning $\alpha$ a value greater than 1000 to achieve good performance.
4.4 Discussion
The following observations can be made from the experimental results listed in Tables 1–6 and Figures 5–7.
The proposed GLPP outperforms LDA, PCA, LPP and DLPP in both controlled and uncontrolled environments. GLPP is better suited to extracting meaningful information from samples for face recognition, since it not only preserves the local manifold structures corresponding to within-class variances like LPP, but also extracts person-invariant (between-class) features like PCA; this is evidenced by the results in Tables 1–4 on the ORL, Yale, FERET and LFW-a databases. The gains of GLPP over the best recognition accuracy of LPP are 1.5%, 2% and 5.25% under the leave-one-out, five-fold and two-fold cross-validation schemes respectively on the ORL database. The gains of GLPP over LPP are 1.9%, 2.3%, 1.6% and 1.8% under the leave-one-out, three-fold, two-fold and one-sample schemes respectively on the FERET database. The improvement in the uncontrolled environment is even more remarkable: the gains of GLPP increase to 5.6% and 6.3% on the first and second subsets of the LFW-a database respectively. The reason GLPP gains more over LPP in the uncontrolled environment is that GLPP can better describe and preserve the local manifold structures of person-invariant factors, since the LFW-a database has more subjects.
Like LPP, GLPP can be extended to other manifold learning methods. In this paper, we proposed 2D-GLPP as an instance showing how to combine other techniques with GLPP to develop new algorithms. Moreover, this 2D extension achieves better performance than other classical 2D linear face recognition methods, including 2D-PCA, 2D-LDA and 2D-LPP, according to the experimental results in Table 6.
The experimental results in Section 4.3 demonstrate that GLPP achieves its best performance, and that this performance is insensitive to the parameter, when the parameter is greater than 1000. The parameter can therefore be fixed, which makes the parameter space smaller.
5 Conclusion
We have proposed a new linear projection method for face recognition. The proposed method refines the original objective of LPP into two parts and exploits more meaningful information from samples, resulting in better recognition performance than LPP. Moreover, our method appears to be the first LPP-based algorithm that formally incorporates the features related to person-invariant factors by simultaneously preserving the local manifold structures of both within-class factors and person-invariant factors. Furthermore, GLPP can be extended to other manifold learning methods, for instance 2D-GLPP, and similar performance improvements have been obtained. Following the development of 2D-GLPP, the proposed method is also likely to be extensible with other statistical techniques such as the maximum margin criterion dlppm , orthogonal basis constraints olpp ; odlpp and parametric regularization rlpp , which we will explore in future work.
Acknowledgement
The work described in this paper was partially supported by the National Natural Science Foundations of China (No. 0975015 and 61173131), the Fundamental Research Funds for the Central Universities (No. CDJXS11181162), and the Key Science and Technology Project of Chongqing (No. CSTC2009AB2230). The authors would like to thank Prof. Ahmed Elgammal for his instructive suggestions, and the anonymous reviewers and editors for their comments.
References
 (1) M. Turk, A. Pentland, Eigenfaces for recognition, J. Cognitive Neuroscience 3 (1) (1991) 71–86.
 (2) P. N. Belhumeur, P. Hespanha, D. J. Kriegman, Eigenfaces vs. fisherfaces: Recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence (1997) 711–720.
 (3) D. D. Lee, H. S. Seung, Learning the parts of objects by nonnegative matrix factorization, Nature 401 (1999) 788–791.
 (4) X. He, P. Niyogi, Locality preserving projections, in: NIPS, MIT Press, 2003.
 (5) X. He, S. Yan, Y. Hu, P. Niyogi, H.-J. Zhang, Face recognition using Laplacianfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (2005) 328–340.
 (6) J. Yang, D. Zhang, A. Frangi, J.-Y. Yang, Two-dimensional PCA: A new approach to appearance-based face representation and recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (1) (2004) 131–137.
 (7) M. Li, B. Yuan, 2D-LDA: A statistical linear discriminant analysis for image matrix, Pattern Recognition Letters 26 (5) (2005) 527–532.
 (8) S. Chen, H. Zhao, M. Kong, B. Luo, 2D-LPP: A two-dimensional extension of locality preserving projections, Neurocomputing 70 (4–6) (2007) 912–921.
 (9) Y. Weiwei, Two-dimensional discriminant locality preserving projections for face recognition, Pattern Recognition Letters 30 (15) (2009) 1378–1383.
 (10) J. B. Tenenbaum, A global geometric framework for nonlinear dimensionality reduction, Science 290 (2000) 2319–2323.
 (11) S. T. Roweis, L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (2000) 2323–2326.
 (12) S. K. Zhou, R. Chellappa, B. Moghaddam, Intrapersonal kernel space for face recognition, in: FG, 2004, pp. 235–240.
 (13) M.H. Yang, Face recognition using kernel methods, in: NIPS, MIT Press, 2001, pp. 1457–1464.
 (14) J. Lu, K. Plataniotis, A. Venetsanopoulos, Face recognition using kernel direct discriminant analysis algorithms, IEEE Transactions on Neural Networks 14 (1) (2003) 117–126.
 (15) S. Chen, T. Sun, Class-information-incorporated principal component analysis, Neurocomputing 69 (1–3) (2005) 216–223.
 (16) K. Das, Z. Nenadic, An efficient discriminantbased solution for small sample size problem, Pattern Recognition 42 (5) (2009) 857–866.
 (17) L.-F. Chen, H.-Y. M. Liao, M.-T. Ko, J.-C. Lin, G.-J. Yu, A new LDA-based face recognition system which can solve the small sample size problem, Pattern Recognition 33 (10) (2000) 1713–1726.
 (18) H. Li, T. Jiang, K. Zhang, Efficient and robust feature extraction by maximum margin criterion, IEEE Transactions on Neural Networks 17 (1) (2006) 157–165.
 (19) F. Song, D. Zhang, D. Mei, Z. Guo, A multiple maximum scatter difference discriminant criterion for facial feature extraction, IEEE Transactions on Systems, Man, and Cybernetics, Part B 37 (6) (2007) 1599–1606.
 (20) T. Zhang, B. Fang, Y. Y. Tang, G. He, J. Wen, Topology preserving nonnegative matrix factorization for face recognition, IEEE Transactions on Image Processing 17 (4) (2008) 574–584.
 (21) Z. Li, X. Wu, H. Peng, Nonnegative matrix factorization on orthogonal subspace, Pattern Recognition Letters 31 (9) (2010) 905–911.
 (22) D. Guillamet, J. Vitrià, Non-negative matrix factorization for face recognition, Lecture Notes in Computer Science 2504 (2002) 336–344.
 (23) D. Cai, X. He, J. Han, T. S. Huang, Graph regularized nonnegative matrix factorization for data representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (8) (2011) 1548–1560.
 (24) M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in: NIPS, Vol. 14, 2001, pp. 585–591.
 (25) S. Yan, D. Xu, B. Zhang, H. Zhang, Graph embedding: A general framework for dimensionality reduction, in: CVPR, 2005, pp. 830–837.
 (26) X. Gu, W. Gong, L. Yang, Regularized locality preserving discriminant analysis for face recognition, Neurocomputing 74 (17) (2011) 3036–3042.
 (27) R. Wang, S. Shan, X. Chen, Q. Dai, W. Gao, Manifoldmanifold distance and its application to face recognition with image sets, IEEE Transactions on Image Processing 21 (10) (2012) 4466–4479.
 (28) W. Yu, X. Teng, C. Liu, Face recognition using discriminant locality preserving projections, Image and Vision Computing 24 (3) (2006) 239–248.
 (29) D. Cai, X. He, J. Han, H. Zhang, Orthogonal laplacianfaces for face recognition, IEEE Transactions on Image Processing 15 (11) (2006) 3608–3614.
 (30) J. Lu, Y.P. Tan, Regularized locality preserving projections and its extensions for face recognition, IEEE Transactions on Systems, Man, and Cybernetics, Part B 40 (3) (2010) 958–963.
 (31) G.F. Lu, Z. Lin, Z. Jin, Face recognition using discriminant locality preserving projections based on maximum margin criterion, Pattern Recognition 43 (10) (2010) 3572–3579.
 (32) L. Zhu, S. Zhu, Face recognition based on orthogonal discriminant locality preserving projections, Neurocomputing 70 (7–9) (2007) 1543–1546.
 (33) X. Yu, X. Wang, Uncorrelated discriminant locality preserving projections, IEEE Signal Processing Letters 15 (2008) 361–364.
 (34) F. R. K. Chung, Spectral Graph Theory, CBMS Regional Conference Series in Mathematics (1996).
 (35) M. Belkin, Problems of Learning on Manifolds, Ph.D. thesis, University of Chicago (2003).
 (36) F. S. Samaria, A. Harter, Parameterisation of a stochastic model for human face identification (1994).
 (37) P. J. Phillips, H. Wechsler, J. Huang, P. Rauss, The FERET database and evaluation procedure for face recognition algorithms, Image and Vision Computing 16 (5) (1998) 295–306.
 (38) Yale Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html.
 (39) L. Wolf, T. Hassner, Y. Taigman, Similarity scores based on background samples, in: ACCV, 2009, pp. 88–97.
 (40) G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments (2007).
 (41) D. Cai, http://www.cad.zju.edu.cn/home/dengcai/data/data.html.
 (42) T. Ahonen, A. Hadid, M. Pietikainen, Face description with local binary patterns: Application to face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (12) (2006) 2037–2041.