Feature extraction by dimensionality reduction is a critical step in pattern recognition. Principal component analysis (PCA), proposed by Turk and Pentland, is a classic dimensionality reduction method in the field of face recognition. Yang et al. presented two-dimensional PCA (2DPCA) to improve the efficiency of feature extraction by using image matrices directly. Two-dimensional weighted PCA (2DWPCA) was developed by Nhat and Lee to improve the performance of 2DPCA. The complete 2DPCA method was presented by Xu et al. to reduce the number of feature coefficients needed for face recognition compared to 2DPCA. In kernel PCA (KPCA), samples are mapped into a high-dimensional, linearly separable kernel space, and PCA is then employed for feature extraction. Chen et al. presented a pattern classification method based on PCA and KPCA, in which within-class auxiliary training samples are used to improve performance. Liu et al. proposed the 2DECA method, in which features are selected in the 2DPCA subspace based on the Rényi entropy contribution instead of the cumulative variance contribution. Moreover, some approaches based on linear discriminant analysis (LDA) have been explored [8, 12, 13].
In contrast to the above L2-norm based methods, Kwak developed L1-PCA using the L1 norm, and Ding et al. proposed a rotational invariant L1-norm PCA (R1-PCA). These non-L2-norm based algorithms are less sensitive to the presence of outliers.
In this paper we propose the 2DR1-PCA and 2DL1-PCA algorithms for face recognition, combining the advantages of the L1-norm based methods and 2DPCA. Instead of the image vectors used in R1-PCA and L1-PCA, 2DR1-PCA and 2DL1-PCA use image matrices directly for feature extraction. Compared to the 1-D methods, the corresponding 2-D methods have two main advantages: higher efficiency and higher recognition accuracy.
This paper is organized as follows: Section 2 gives a brief introduction to the R1-PCA and L1-PCA algorithms. In Section 3, the 2DR1-PCA and 2DL1-PCA algorithms are proposed. In Section 4, the mentioned methods are compared through experiments. Finally, conclusions are drawn in Section 5.
2 Fundamentals of subspace methods based on non-L2 norms
In this paper, we use {x_1, x_2, …, x_N} to denote the training set of the 1-D methods, where each x_i is a d-dimensional column vector.
The R1-PCA algorithm tries to find a subspace by minimizing the following error function: E(W) = ||X − W W^T X||_R1, where W is the d×k projection matrix (W^T W = I), X = [x_1, …, x_N] is the data matrix, and ||·||_R1 denotes the R1 norm, which is defined as ||X||_R1 = Σ_{i=1}^{N} (Σ_{j=1}^{d} X_{ji}^2)^{1/2}.
In the R1-PCA algorithm, the training set should be centered, i.e., Σ_{i=1}^{N} x_i = 0, which is achieved by subtracting the mean vector x̄ = (1/N) Σ_{i=1}^{N} x_i from each x_i.
The principal eigenvectors of the R1-covariance matrix give the solution to the R1-PCA algorithm. The weighted version of the R1-covariance matrix is defined as C_w = Σ_{i=1}^{N} w_i x_i x_i^T.
The weight w_i can be defined in many forms. For the Cauchy robust function, the weight is w_i = 1 / (1 + r_i^2 / c^2), where r_i = ||x_i − W W^T x_i|| is the residual of x_i and c is a scale constant.
The basic idea of R1-PCA is to start with an initial guess W_0 and iterate the following steps until convergence: compute the residuals r_i and the Cauchy weights w_i under the current W, form the weighted R1-covariance matrix C_w, and update W with the k principal eigenvectors of C_w.
The concrete algorithm is given in Algorithm 1.
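As a concrete illustration, the iteration above can be sketched in NumPy as follows. The function name `r1_pca`, the Cauchy scale `c`, the PCA initialization, and the projector-based stopping test are our own illustrative choices, not taken from Algorithm 1.

```python
import numpy as np

def r1_pca(X, k, c=1.0, n_iter=100, tol=1e-6):
    """R1-PCA sketch: alternate Cauchy weights and a weighted covariance.
    X is an (N, d) centered data matrix; k is the subspace dimension."""
    N, d = X.shape
    # Initialize W with the top-k ordinary PCA eigenvectors.
    _, vecs = np.linalg.eigh(X.T @ X / N)
    W = vecs[:, -k:]
    for _ in range(n_iter):
        # Residual norms r_i = ||x_i - W W^T x_i||
        R = X - X @ W @ W.T
        r = np.linalg.norm(R, axis=1)
        # Cauchy robust weights w_i = 1 / (1 + r_i^2 / c^2)
        w = 1.0 / (1.0 + (r / c) ** 2)
        # Weighted R1-covariance C_w = sum_i w_i x_i x_i^T
        Cw = (X * w[:, None]).T @ X
        _, vecs = np.linalg.eigh(Cw)
        W_new = vecs[:, -k:]
        # Stop when the projector W W^T stabilizes (robust to sign flips).
        if np.linalg.norm(W_new @ W_new.T - W @ W.T) < tol:
            W = W_new
            break
        W = W_new
    return W

# Toy usage on synthetic 2-D data with most variance along the first axis
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) * np.array([3.0, 0.3])
X -= X.mean(axis=0)
W = r1_pca(X, k=1)
```

The iteration typically converges quickly here because the weighted covariance changes little between steps once the dominant direction is found.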
The L1 norm is used in L1-PCA to minimize the following error function: E(W) = Σ_{i=1}^{N} ||x_i − W W^T x_i||_1, where W is the projection matrix and ||·||_1 denotes the L1 norm, defined for a vector x as ||x||_1 = Σ_j |x_j|.
In order to obtain a subspace that is robust to outliers, the L1 norm is adopted to maximize the following dispersion: W* = argmax_W Σ_{i=1}^{N} ||W^T x_i||_1, subject to W^T W = I.
It is difficult to solve this maximization in its multidimensional form directly. Instead of the projection matrix W, a single column vector w is used in equation (8), and the following problem is obtained: w* = argmax_w Σ_{i=1}^{N} |w^T x_i|, subject to ||w||_2 = 1.
The above procedure extracts one best feature. In order to obtain a k-dimensional projection matrix instead of a single vector, an algorithm based on greedy search is used: after w_j is found, deflate the samples by x_i ← x_i − w_j (w_j^T x_i), then apply the L1-PCA procedure to the deflated samples to find w_{j+1}.
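The single-direction maximization and the greedy deflation can be sketched as below. The polarity-flipping update w ← Σ_i sign(w^T x_i) x_i (then normalized) follows Kwak's published PCA-L1 iteration; the function names and the max-norm initialization are illustrative assumptions.

```python
import numpy as np

def pca_l1_component(X, n_iter=200):
    """One L1 direction by the polarity-flipping iteration:
    maximize sum_i |w^T x_i| subject to ||w|| = 1."""
    # Initialize with the sample of largest norm (a common heuristic).
    w = X[np.argmax(np.linalg.norm(X, axis=1))].copy()
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        p = np.sign(X @ w)            # polarities p_i = sign(w^T x_i)
        p[p == 0] = 1.0
        w_new = X.T @ p
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

def pca_l1(X, k):
    """Greedy search: extract k directions, deflating X after each one."""
    W = []
    Xr = X.copy()
    for _ in range(k):
        w = pca_l1_component(Xr)
        W.append(w)
        # Deflation step: x_i <- x_i - (w^T x_i) w
        Xr = Xr - np.outer(Xr @ w, w)
    return np.column_stack(W)

# Toy usage
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
X -= X.mean(axis=0)
W = pca_l1(X, k=2)
```

Note that the greedy columns are unit vectors but are not guaranteed to be exactly orthogonal, which is a known property of this deflation scheme.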
3 2DR1-PCA and 2DL1-PCA algorithms
In the 2-D methods, {A_1, A_2, …, A_N} is used to denote the training set, where each A_i is an m×n image matrix.
In this paper we propose the 2DR1-PCA algorithm, in which the projection matrix W is iterated from an initial matrix W_0 until convergence.
First, the training set is centered, i.e., A_i ← A_i − Ā, where Ā is the mean matrix of the training set, defined as Ā = (1/N) Σ_{i=1}^{N} A_i.
The weighted R1 covariance matrix is defined as G_w = Σ_{i=1}^{N} w_i A_i^T A_i. The Cauchy weight is defined as w_i = 1 / (1 + r_i^2 / c^2), where c is a scale constant. The residual is defined as r_i = ||A_i − A_i W W^T||, the norm of the reconstruction error of A_i under the current projection matrix W.
After the eigenvectors of G_w are obtained, the iterative formula is similar to that used in the R1-PCA algorithm: W is updated with the k principal eigenvectors of G_w and the weights are recomputed, until convergence.
The 2DR-PCA algorithm is outlined in Algorithm 3.
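A minimal sketch of the 2DR1-PCA iteration is given below, assuming the residual is measured by the Frobenius norm of A_i − A_i W W^T and the initial W comes from plain 2DPCA; both choices are our assumptions rather than details fixed by Algorithm 3.

```python
import numpy as np

def twodr1_pca(A, k, c=1.0, n_iter=50):
    """2DR1-PCA sketch. A is an (N, m, n) stack of centered image
    matrices; returns an n x k projection matrix W (features: A_i @ W)."""
    N, m, n = A.shape
    # Initialize with 2DPCA: eigenvectors of G = (1/N) sum_i A_i^T A_i
    G = np.einsum('imn,imk->nk', A, A) / N
    _, vecs = np.linalg.eigh(G)
    W = vecs[:, -k:]
    for _ in range(n_iter):
        # Frobenius residuals r_i = ||A_i - A_i W W^T||
        R = A - A @ W @ W.T
        r = np.linalg.norm(R, axis=(1, 2))
        w = 1.0 / (1.0 + (r / c) ** 2)           # Cauchy weights
        # Weighted covariance G_w = sum_i w_i A_i^T A_i
        Gw = np.einsum('i,imn,imk->nk', w, A, A)
        _, vecs = np.linalg.eigh(Gw)
        W = vecs[:, -k:]
    return W

# Toy usage on random "images"
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 8, 6))
A -= A.mean(axis=0)
W = twodr1_pca(A, k=2)
```

Because G_w is only n×n (image width) rather than mn×mn, each eigendecomposition is far cheaper than in the 1-D case, which matches the efficiency advantage discussed in the experiments.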
Analogously to L1-PCA, in the two-dimensional case we want to find a column vector w that solves the following problem: w* = argmax_w Σ_{i=1}^{N} ||w^T A_i||_1, subject to ||w||_2 = 1. In fact, w^T A_i is a row vector. The entry with the maximum absolute value in a vector contributes most to its L1 norm. Assuming the column index of the maximum absolute value in w^T A_i is j_i, we can carry out the update of w using the j_i-th column of A_i. The 2DL1-PCA algorithm is given in Algorithm 4.
Then we can obtain a k-dimensional projection matrix from the following greedy algorithm: after w_j is found, deflate each A_i by removing the component along w_j and apply the 2DL1-PCA procedure to the deflated matrices to find w_{j+1}.
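The single-direction step can be sketched as below under our reading of the column-selection rule above: each image contributes the column where |w^T A_i| peaks, and those columns feed the 1-D polarity update. The initialization and stopping rule are illustrative.

```python
import numpy as np

def twodl1_pca_component(A, n_iter=100):
    """One 2DL1-PCA direction: maximize sum_i ||w^T A_i||_1, updating w
    from the column of each A_i where |w^T A_i| is largest."""
    N, m, n = A.shape
    w = np.ones(m) / np.sqrt(m)
    for _ in range(n_iter):
        proj = np.einsum('m,imn->in', w, A)   # rows w^T A_i, shape (N, n)
        j = np.argmax(np.abs(proj), axis=1)   # dominant column index j_i
        cols = A[np.arange(N), :, j]          # j_i-th column of each A_i
        s = np.sign(cols @ w)                 # 1-D L1-PCA polarity update
        s[s == 0] = 1.0
        w_new = s @ cols
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Toy usage
rng = np.random.default_rng(3)
A = rng.normal(size=(15, 8, 6))
A -= A.mean(axis=0)
w = twodl1_pca_component(A)
```

The greedy multi-direction version proceeds as in the 1-D case: deflate each A_i along the found direction and call the routine again.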
4 Experimental results and analysis
Three databases, ORL, Yale and XM2VTS, are used to test the methods mentioned above. The recognition accuracy and the running time of feature extraction are recorded.
The ORL database contains face images of 40 different people, with 10 images per person at a resolution of 92×112. The ORL images contain variations in expression (smiling or not) and facial details (wearing glasses or not). In the following experiments, 5 images per person are selected as training samples and the rest as test samples.
The Yale database is provided by Yale University. It contains face images of 15 different people, with 11 images each at a resolution of 160×121. In the following experiments, 6 images per person are selected as training samples and the rest as test samples.
The XM2VTS database offers synchronized video and speech data as well as image sequences allowing multiple views of the face. It contains frontal face images of 295 subjects taken at one-month intervals over a period of several months. The resolution of the XM2VTS images is 55×51. In the following experiments, 4 images per subject are selected as training samples and the rest as test samples.
4.1 R1-PCA and 2DR1-PCA
The experimental results of R1-PCA and 2DR1-PCA are shown in Table 1; the number of iterations of both R1-PCA and 2DR1-PCA is set to 120.
The initial projection matrix W_0 is obtained by PCA (respectively 2DPCA) at the beginning of R1-PCA (respectively 2DR1-PCA). The final projection matrix is obtained by the iterative method starting from W_0. Because of the iteration, the computational cost is high. Meanwhile, the two methods have nearly the same recognition accuracy.
In the experiment of the R1-PCA algorithm on the ORL database, the convergence process is shown in Fig. 1(a), in which the vertical axis denotes the norm of the projection matrix and the horizontal axis denotes the number of iterations; the norm of the projection matrix is used to observe the convergence process. The projection matrix converges only after at least 100 iterations. In comparison, 2DR1-PCA needs fewer than 30 iterations to obtain a convergent projection matrix, as shown in Fig. 1(b). The use of image matrices in 2DR1-PCA leads to faster convergence.
The convergence on the Yale database is illustrated in Fig. 2, where the convergence speed of R1-PCA is similar to that of 2DR1-PCA. In the experiment on the XM2VTS database, shown in Fig. 3, the convergence speed of 2DR1-PCA is much faster than that of R1-PCA. In other words, the efficiency of 2DR1-PCA is higher than that of R1-PCA.
4.2 L1-PCA and 2DL1-PCA
The experimental results of L1-PCA and 2DL1-PCA are shown in Table 2.
From Table 2 we can see that the performance of 2DL1-PCA is better than that of L1-PCA and PCA. In 2DL1-PCA, image matrices are used directly for feature extraction, and fewer features are needed by 2DL1-PCA than by L1-PCA.
We carry out another experiment on the ORL database, in which different numbers of features are extracted by PCA, L1-PCA and 2DL1-PCA, respectively, and then used for face recognition. The results are shown in Fig. 4, from which we can see that 2DL1-PCA achieves higher recognition accuracy with fewer features.
5 Conclusions
In this paper we proposed 2DR1-PCA and 2DL1-PCA for face recognition. We extended R1-PCA and L1-PCA to their 2-D cases so that image matrices can be used directly for feature extraction. Compared to the L2-norm based methods, these L1-norm based methods are less sensitive to outliers. We analyzed the performance of 2DR1-PCA and 2DL1-PCA against the R1-PCA and L1-PCA algorithms experimentally. The experimental results show that 2DR1-PCA and 2DL1-PCA outperform R1-PCA and L1-PCA, respectively.
Acknowledgements. This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61672265 and U1836218) and the 111 Project of the Ministry of Education of China (Grant No. B12018).
- (2017) KPCA method based on within-class auxiliary training samples and its application to pattern classification. Pattern Analysis and Applications 20 (3), pp. 749–767.
- (2006) R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, pp. 281–288.
- (2008) Principal component analysis based on L1-norm maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (9), pp. 1672–1680.
- (2011) ECA and 2DECA: entropy contribution based methods for face recognition inspired by KECA. In 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), pp. 544–549.
- (1999) XM2VTSDB: the extended M2VTS database. In Second International Conference on Audio- and Video-based Biometric Person Authentication, Vol. 964, pp. 965–966.
- (2005) Two-dimensional weighted PCA algorithm for face recognition. In 2005 International Symposium on Computational Intelligence in Robotics and Automation, pp. 219–223.
- (1991) Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1), pp. 71–86.
- (2004) A new direct LDA (D-LDA) algorithm for feature extraction in face recognition. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 4, pp. 545–548.
- (2006) Complete two-dimensional PCA for face recognition. In 18th International Conference on Pattern Recognition (ICPR'06), Vol. 3, pp. 481–484.
- (2004) Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (1), pp. 131–137.
- (2000) Face recognition using kernel eigenfaces. In Proceedings 2000 International Conference on Image Processing, Vol. 1, pp. 37–40.
- (2006) Nearest neighbour line nonparametric discriminant analysis for feature extraction. Electronics Letters 42 (12), pp. 679–680.
- (2006) A reformative kernel Fisher discriminant algorithm and its application to face recognition. Neurocomputing 69 (13-15), pp. 1806–1810.