2DR1-PCA and 2DL1-PCA: two variant 2DPCA algorithms based on non-L2 norms

12/23/2019 · by Xing Liu, et al.

In this paper, two novel methods, 2DR1-PCA and 2DL1-PCA, are proposed for face recognition. In contrast to the traditional 2DPCA algorithm, 2DR1-PCA and 2DL1-PCA are based on the R1 norm and the L1 norm, respectively. The advantage of the proposed methods is that they are less sensitive to outliers. The proposed methods are tested on the ORL, Yale, and XM2VTS databases, and the performance of the related methods is compared experimentally.


1 Introduction

Feature extraction by dimensionality reduction is a critical step in pattern recognition. Principal component analysis (PCA), proposed by Turk and Pentland [7], is a classic dimensionality reduction method in the field of face recognition. Yang et al. [10] presented two-dimensional PCA (2DPCA) to improve the efficiency of feature extraction by using image matrices directly. Two-dimensional weighted PCA (2DWPCA) was developed by Nhat and Lee [6] to improve the performance of 2DPCA. The complete 2DPCA method was presented by Xu et al. [9] to reduce the number of feature coefficients needed for face recognition compared to 2DPCA. In kernel PCA (KPCA) [11], samples are mapped into a high-dimensional, linearly separable kernel space, and PCA is then employed for feature extraction. Chen et al. [1] presented a pattern classification method based on PCA and KPCA in which within-class auxiliary training samples are used to improve performance. Liu et al. [4] proposed the 2DECA method, in which features are selected in the 2DPCA subspace based on the Rényi entropy contribution instead of the cumulative variance contribution. Moreover, several approaches based on linear discriminant analysis (LDA) have been explored [8, 12, 13].

In contrast to the above L2-norm-based methods, Kwak [3] developed L1-PCA using the L1 norm, and Ding et al. [2] proposed a rotational invariant L1-norm PCA (R1-PCA). These non-L2-norm-based algorithms are less sensitive to the presence of outliers.

In this paper we propose the 2DR1-PCA and 2DL1-PCA algorithms for face recognition, which combine the advantages of the non-L2-norm methods and 2DPCA. That is, we extend R1-PCA and L1-PCA to their two-dimensional cases: instead of the image vectors used in R1-PCA and L1-PCA, image matrices are used directly for feature extraction in 2DR1-PCA and 2DL1-PCA. Compared to the 1-D methods, the corresponding 2-D methods have two main advantages: higher efficiency and higher recognition accuracy.

This paper is organized as follows. Section 2 gives a brief introduction to the R1-PCA and L1-PCA algorithms. In Section 3, the 2DR1-PCA and 2DL1-PCA algorithms are proposed. In Section 4, the methods are compared experimentally. Finally, conclusions are drawn in Section 5.

2 Fundamentals of subspace methods based on non-L2 norms

In this paper, we use $X = [x_1, x_2, \ldots, x_N]$ to denote the training set of the 1-D methods, where each $x_i$ is a $d$-dimensional vector.

2.1 R1-PCA

The R1-PCA algorithm tries to find a subspace by minimizing the following error function:

$$E(U) = \| X - U U^\top X \|_{R_1}, \quad \text{subject to } U^\top U = I \qquad (1)$$

where $U \in \mathbb{R}^{d \times m}$ is the projection matrix, $X$ is defined as $X = [x_1, x_2, \ldots, x_N]$, and $\|\cdot\|_{R_1}$ denotes the R1 norm, which is defined as

$$\| X \|_{R_1} = \sum_{i=1}^{N} \Big( \sum_{j=1}^{d} x_{ji}^2 \Big)^{1/2} \qquad (2)$$
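To make the definition concrete, the following minimal NumPy snippet (our illustration, not from the paper) computes the R1 norm of a small matrix and contrasts it with the L1 and Frobenius (L2) norms; columns are samples, as in (2).

```python
import numpy as np

# Columns are samples: X is d x N, matching the definition in (2).
X = np.array([[1.0, -2.0, 0.0],
              [2.0,  1.0, 3.0]])

r1 = np.sum(np.linalg.norm(X, axis=0))  # R1 norm: sum of Euclidean column norms
l1 = np.sum(np.abs(X))                  # L1 norm: sum of absolute entries
l2 = np.linalg.norm(X)                  # Frobenius (L2) norm

print(r1, l1, l2)  # approx. 7.472, 9.0, 4.359
```

Unlike the squared L2 criterion, each sample enters the R1 norm through an unsquared Euclidean distance, so a single outlying sample contributes linearly rather than quadratically to the error.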

In the R1-PCA algorithm, the training set should be centered, i.e., $x_i \leftarrow x_i - \bar{x}$, where $\bar{x}$ is the mean vector of the training set, given by $\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i$.

The principal eigenvectors of the R1-covariance matrix are the solution to the R1-PCA problem. The weighted version of the R1-covariance matrix is defined as

$$C_W = \sum_{i=1}^{N} w_i \, x_i x_i^\top \qquad (3)$$

The weight $w_i$ can be defined in many forms. For the Cauchy robust function, the weight is

$$w_i = \frac{1}{1 + r_i^2 / c^2} \qquad (4)$$

where $r_i$ is the residue of the $i$-th sample and $c$ is a scale constant.

The basic idea of R1-PCA is to start with an initial guess $U_0$ and then iterate the following update until convergence:

$$U_{t+1} = \operatorname{orth}(C_W U_t) \qquad (5)$$

where $\operatorname{orth}(\cdot)$ denotes orthonormalization of the columns.

The concrete algorithm is given in Algorithm 1.

Input: the training set $X = [x_1, \ldots, x_N]$ and the subspace dimension $m$; the training set is then centered.
1:  Initialization: compute standard PCA to obtain $U_0$; set $t = 0$.
2:  Calculate the Cauchy weight: compute the residue $r_i = \| x_i - U_t U_t^\top x_i \|_2$, the scale $c$, and the weight $w_i = 1 / (1 + r_i^2 / c^2)$.
3:  Calculate the covariance matrix: $C_W = \sum_i w_i x_i x_i^\top$.
4:  Update $U$: $U_{t+1} = \operatorname{orthonormalize}(C_W U_t)$; set $t \leftarrow t + 1$.
5:  Convergence check: if $U_t$ has not converged, go to Step 2; else go to Step 6.
6:  Restore the uncentered data: $x_i \leftarrow x_i + \bar{x}$.
7:  Projection: $y_i = U^\top x_i$.
Output: $U$ and $Y = [y_1, \ldots, y_N]$.
Algorithm 1: The R1-PCA algorithm
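The iteration may be easier to follow in code. Below is a minimal NumPy sketch of Algorithm 1; the function name, the use of QR for orthonormalization, the projector-based convergence test, and the choice of the median residue for the Cauchy scale $c$ are our assumptions, since the paper does not fix these details.

```python
import numpy as np

def r1_pca(X, m, n_iter=120, tol=1e-6):
    """R1-PCA sketch. X is d x N with centered samples as columns;
    m is the subspace dimension. Returns the d x m projection U."""
    # Step 1: initialize U with standard PCA (top-m principal directions).
    _, _, Vt = np.linalg.svd(X.T, full_matrices=False)
    U = Vt[:m].T                                  # d x m
    for _ in range(n_iter):
        # Step 2: Cauchy weights from the reconstruction residues, eq. (4).
        R = X - U @ (U.T @ X)                     # residual matrix
        r = np.linalg.norm(R, axis=0)             # per-sample residue r_i
        c = np.median(r) + 1e-12                  # scale c (assumed: median residue)
        w = 1.0 / (1.0 + (r / c) ** 2)
        # Step 3: weighted R1-covariance matrix, eq. (3).
        C = (w * X) @ X.T                         # scales each column x_i by w_i
        # Step 4: subspace iteration with orthonormalization, eq. (5).
        U_new, _ = np.linalg.qr(C @ U)
        # Step 5: convergence check on the projector U U^T.
        if np.linalg.norm(U_new @ U_new.T - U @ U.T) < tol:
            return U_new
        U = U_new
    return U

# Usage: Y = U.T @ X projects the centered samples onto the subspace.
```

The convergence test compares the projectors $U_t U_t^\top$ rather than the matrices $U_t$ themselves, since the subspace is only defined up to a rotation of its basis.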

2.2 L1-PCA

The L1 norm is used in L1-PCA to minimize the following error function:

$$E(W) = \| X - W W^\top X \|_1 \qquad (6)$$

where $W \in \mathbb{R}^{d \times m}$ is the projection matrix, $X$ is defined as $X = [x_1, x_2, \ldots, x_N]$, and $\|\cdot\|_1$ denotes the L1 norm, which is defined as

$$\| X \|_1 = \sum_{i=1}^{N} \sum_{j=1}^{d} | x_{ji} | \qquad (7)$$

In order to obtain a subspace that is robust to outliers, the L1 norm is instead adopted in the following maximization problem:

$$W^* = \arg\max_{W^\top W = I} \sum_{i=1}^{N} \| W^\top x_i \|_1 \qquad (8)$$

It is difficult to solve the multidimensional problem (8) directly. Instead of the projection matrix $W$, a single column vector $w$ is used in (8), which yields

$$w^* = \arg\max_{\| w \|_2 = 1} \sum_{i=1}^{N} | w^\top x_i | \qquad (9)$$

Then a greedy search method is used to solve (9); it is summarized in Algorithm 2.

Input: the training set $X = [x_1, \ldots, x_N]$.
1:  Initialization: initialize $w(0)$ with random numbers, then set $w(0) \leftarrow w(0) / \| w(0) \|_2$ and $t = 0$.
2:  Polarity check: for all $i$, $p_i(t) = 1$ if $w(t)^\top x_i \geq 0$; otherwise, $p_i(t) = -1$.
3:  Flipping and maximization: set $t \leftarrow t + 1$, $w(t) = \sum_i p_i(t-1)\, x_i$, and $w(t) \leftarrow w(t) / \| w(t) \|_2$.
4:  Convergence check: if $w(t) \neq w(t-1)$, go to Step 2. Else, if there exists $i$ such that $w(t)^\top x_i = 0$, set $w(t) \leftarrow (w(t) + \Delta w) / \| w(t) + \Delta w \|_2$, where $\Delta w$ is a small nonzero random vector, and go to Step 2. Otherwise, set $w^* = w(t)$ and stop.
Output: the projection vector $w^*$.
Algorithm 2: The L1-PCA algorithm

The above algorithm extracts a single best feature. In order to obtain an $m$-dimensional projection matrix $W = [w_1, \ldots, w_m]$ instead of a single vector, a greedy procedure based on the above search is used, as sketched in the code after this loop:

$w_0 = 0$, $x_i^0 = x_i$.
For $j = 1$ to $m$:
    $x_i^j = x_i^{j-1} - w_{j-1} (w_{j-1}^\top x_i^{j-1})$.
    Apply the L1-PCA procedure (Algorithm 2) to $X^j = [x_1^j, \ldots, x_N^j]$ to find $w_j$.
End
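Here is a minimal NumPy sketch of Algorithm 2 together with the greedy deflation loop above; the function names, the iteration cap, and the tolerances are our assumptions.

```python
import numpy as np

def l1_pca_single(X, max_iter=200, rng=None):
    """One L1-PCA component via Algorithm 2. X is d x N with centered columns."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)                     # Step 1: random unit initialization
    for _ in range(max_iter):
        p = np.where(w @ X >= 0.0, 1.0, -1.0)  # Step 2: polarity check
        w_new = X @ p                          # Step 3: flipping and maximization
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):              # Step 4: convergence check
            if np.any(np.isclose(w_new @ X, 0.0)):
                # Perturb to escape samples exactly orthogonal to w.
                w_new += 1e-6 * rng.standard_normal(w_new.shape)
                w_new /= np.linalg.norm(w_new)
            else:
                return w_new
        w = w_new
    return w

def l1_pca(X, m):
    """Greedy m-component L1-PCA using the deflation loop above."""
    X = X.copy()
    W = []
    for _ in range(m):
        w = l1_pca_single(X)
        X = X - np.outer(w, w @ X)             # deflation: x_i <- x_i - w (w^T x_i)
        W.append(w)
    return np.stack(W, axis=1)                 # d x m projection matrix W
```

Each deflation step removes the component along the current direction, so successive calls to the single-component routine extract approximately orthogonal features.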

3 2DR1-PCA and 2DL1-PCA algorithms

In the 2-D methods, $\{A_1, A_2, \ldots, A_N\}$ is used to denote the training set, where each $A_i$ is an $h \times n$ image matrix.

3.1 2DR1-PCA

We first propose the 2DR1-PCA algorithm, in which the projection matrix $V$ is iterated from an initial matrix $V_0$ until convergence.

First, the training set is centered, i.e., $A_i \leftarrow A_i - \bar{A}$, where $\bar{A}$ is the mean matrix of the training set, defined as $\bar{A} = \frac{1}{N} \sum_{i=1}^{N} A_i$.

The weighted R1 covariance matrix is defined as

$$C_W = \sum_{i=1}^{N} w_i \, A_i^\top A_i \qquad (10)$$

The Cauchy weight is defined as

$$w_i = \frac{1}{1 + r_i^2 / c^2} \qquad (11)$$

The residue is defined as

$$r_i = \| A_i - A_i V V^\top \|_F \qquad (12)$$

where $\|\cdot\|_F$ denotes the Frobenius norm.

After computing $C_W$, the iterative formula is similar to the one used in the R1-PCA algorithm:

$$V_{t+1} = \operatorname{orth}(C_W V_t) \qquad (13)$$

The 2DR1-PCA algorithm is outlined in Algorithm 3.

Input: the training set $\{A_1, \ldots, A_N\}$ and the subspace dimension $m$; the training set is then centered.
1:  Initialization: compute standard 2DPCA to obtain $V_0$; set $t = 0$.
2:  Calculate the Cauchy weight: compute the residue $r_i = \| A_i - A_i V_t V_t^\top \|_F$, the scale $c$, and the weight $w_i = 1 / (1 + r_i^2 / c^2)$.
3:  Calculate the covariance matrix: $C_W = \sum_i w_i A_i^\top A_i$.
4:  Update $V$: $V_{t+1} = \operatorname{orthonormalize}(C_W V_t)$; set $t \leftarrow t + 1$.
5:  Convergence check: if $V_t$ has not converged, go to Step 2; else go to Step 6.
6:  Restore the uncentered data: $A_i \leftarrow A_i + \bar{A}$.
7:  Projection: $Y_i = A_i V$.
Output: $V$ and $\{Y_1, \ldots, Y_N\}$.
Algorithm 3: The 2DR1-PCA algorithm
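Below is a minimal NumPy sketch of Algorithm 3, under the same assumptions as the R1-PCA sketch in Section 2.1 (QR for orthonormalization, median residue as the Cauchy scale).

```python
import numpy as np

def two_d_r1_pca(A, m, n_iter=120, tol=1e-6):
    """2DR1-PCA sketch. A has shape (N, h, n): N centered image matrices.
    Returns the n x m projection matrix V."""
    # Step 1: initialize V with standard 2DPCA (eigenvectors of sum A_i^T A_i).
    G = sum(Ai.T @ Ai for Ai in A)
    _, vecs = np.linalg.eigh(G)                 # eigenvalues ascending
    V = vecs[:, -m:]                            # top-m eigenvectors
    for _ in range(n_iter):
        # Step 2: Cauchy weights from the Frobenius residues, eqs. (11)-(12).
        r = np.array([np.linalg.norm(Ai - Ai @ V @ V.T) for Ai in A])
        c = np.median(r) + 1e-12                # scale c (assumed: median residue)
        wts = 1.0 / (1.0 + (r / c) ** 2)
        # Step 3: weighted R1 covariance matrix, eq. (10).
        C = sum(wi * Ai.T @ Ai for wi, Ai in zip(wts, A))
        # Step 4: update and orthonormalize, eq. (13).
        V_new, _ = np.linalg.qr(C @ V)
        # Step 5: convergence check on the projector V V^T.
        if np.linalg.norm(V_new @ V_new.T - V @ V.T) < tol:
            return V_new
        V = V_new
    return V

# Usage: each image is projected as Y_i = A_i @ V (an h x m feature matrix).
```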

3.2 2DL1-PCA

Analogous to L1-PCA, in the two-dimensional case we want to find a column vector $w$ that solves the following problem:

$$w^* = \arg\max_{\| w \|_2 = 1} \sum_{i=1}^{N} \| w^\top A_i \|_1 \qquad (14)$$

Note that $w^\top A_i$ is a row vector, and the entry with the maximum absolute value in a vector contributes most to its L1 norm. Assuming the column index of the maximum absolute value in $w^\top A_i$ is $k_i$, we can update $w$ using the $k_i$-th column of $A_i$, denoted $a_i^{k_i}$. The 2DL1-PCA algorithm is given in Algorithm 4.

Input: the training set $\{A_1, \ldots, A_N\}$.
1:  Initialization: pick any $w(0)$; set $w(0) \leftarrow w(0) / \| w(0) \|_2$, $t = 0$, and $k_i = \arg\max_j | (w(0)^\top A_i)_j |$ for all $i$.
2:  Polarity check: for all $i$, $p_i(t) = 1$ if $w(t)^\top a_i^{k_i} \geq 0$; else $p_i(t) = -1$.
3:  Flipping and maximization: set $t \leftarrow t + 1$ and $k_i = \arg\max_j | (w(t-1)^\top A_i)_j |$; set $w(t) = \sum_i p_i(t-1)\, a_i^{k_i}$ and $w(t) \leftarrow w(t) / \| w(t) \|_2$.
4:  Convergence check: if $w(t) \neq w(t-1)$, go to Step 2. Else, if there exists $i$ such that $w(t)^\top a_i^{k_i} = 0$, set $w(t) \leftarrow (w(t) + \Delta w) / \| w(t) + \Delta w \|_2$ and go to Step 2, where $\Delta w$ is a small nonzero random vector. Otherwise, set $w^* = w(t)$ and stop.
Output: the projection vector $w^*$.
Algorithm 4: The 2DL1-PCA algorithm

Then we can obtain an $m$-dimensional projection matrix $W = [w_1, \ldots, w_m]$ from the following greedy procedure, sketched in code after the loop:

$w_0 = 0$, $A_i^0 = A_i$.
For $j = 1$ to $m$:
    $A_i^j = A_i^{j-1} - w_{j-1} (w_{j-1}^\top A_i^{j-1})$.
    Apply the 2DL1-PCA procedure (Algorithm 4) to $\{A_i^j\}$ to find $w_j$.
End
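Here is a minimal NumPy sketch of Algorithm 4 with the greedy deflation above; the max-column selection implements the $k_i$ rule described after (14), while the variable names and the omission of the random-perturbation branch are our simplifications.

```python
import numpy as np

def two_d_l1_pca_single(A, max_iter=200, rng=None):
    """One 2DL1-PCA component (Algorithm 4). A has shape (N, h, n)."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.standard_normal(A.shape[1])         # h-dimensional left projection
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        rows = np.einsum('j,ijk->ik', w, A)     # row vectors w^T A_i, shape (N, n)
        k = np.argmax(np.abs(rows), axis=1)     # k_i: max-|entry| column index
        cols = A[np.arange(A.shape[0]), :, k]   # k_i-th column of each A_i, (N, h)
        p = np.where(cols @ w >= 0.0, 1.0, -1.0)  # polarity check
        w_new = p @ cols                        # flip and sum the chosen columns
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):               # convergence check
            return w_new                        # (perturbation branch omitted)
        w = w_new
    return w

def two_d_l1_pca(A, m):
    """Greedy m-component 2DL1-PCA via deflation of the image matrices."""
    A = A.astype(float)
    W = []
    for _ in range(m):
        w = two_d_l1_pca_single(A)
        # Deflate: A_i <- A_i - w (w^T A_i) for every image matrix.
        A = A - np.einsum('j,ijk->ik', w, A)[:, None, :] * w[None, :, None]
        W.append(w)
    return np.stack(W, axis=1)                  # h x m projection matrix W
```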

4 Experimental results and analysis

Three databases, ORL, Yale, and XM2VTS, are used to test the methods described above. The recognition accuracy and the running time of feature extraction are recorded.

The ORL database contains face images of 40 different people, with 10 images per person at a resolution of 92×112. The ORL images contain variations in expression (smiling or not) and facial details (glasses or no glasses). In the following experiments, 5 images per person are selected as training samples and the rest are used as test samples.

The Yale database is provided by Yale University. It contains face images of 15 different people, with 11 images per person at a resolution of 160×121. In the following experiments, 6 images per person are selected as training samples and the rest are used as test samples.

The XM2VTS database [5] offers synchronized video and speech data as well as image sequences allowing multiple views of the face. It contains frontal face images of 295 subjects taken at one-month intervals over a period of a few months. The resolution of the XM2VTS images is 55×51. In the following experiments, 4 images per person are selected as training samples and the rest are used as test samples.

4.1 R1-PCA and 2DR1-PCA

The experimental results of R1-PCA and 2DR1-PCA are shown in Table 1; the number of iterations of R1-PCA and 2DR1-PCA is fixed at 120.

                      ORL       Yale      XM2VTS
Recognition accuracy
  PCA                 0.90      0.77      0.71
  R1-PCA              0.88      0.77      0.71
  2DR1-PCA            0.90      0.80      0.78
Running time
  PCA                 1.37      0.36      17.27
  R1-PCA              914.21    411.06    1409.30
  2DR1-PCA            403.90    372.76    619.78

Table 1: Experimental results of R1-PCA and 2DR1-PCA

The initial projection matrix of R1-PCA (2DR1-PCA) is obtained by PCA (2DPCA), and the final projection matrix is obtained by iterating from this initial matrix. Because of the iteration, the computational cost of both methods is high. Meanwhile, the two methods achieve nearly the same recognition accuracy.

For the R1-PCA algorithm tested on the ORL database, the convergence process is shown in Fig. 1(a), in which the y-coordinate denotes the norm of the projection matrix and the x-coordinate denotes the number of iterations; the norm of the projection matrix is used to monitor convergence. The projection matrix converges only after at least 100 iterations. In comparison, 2DR1-PCA needs fewer than 30 iterations to obtain a convergent projection matrix, as shown in Fig. 1(b). Using image matrices directly leads to the faster convergence of 2DR1-PCA.

Figure 1: The convergence illustration over 120 iterations on the ORL database. (a) R1-PCA. (b) 2DR1-PCA.

The convergence behavior on the Yale database is shown in Fig. 2, where the convergence speed of R1-PCA is similar to that of 2DR1-PCA. On the XM2VTS database, the convergence of 2DR1-PCA is much faster than that of R1-PCA, as shown in Fig. 3. In other words, the efficiency of 2DR1-PCA is higher than that of R1-PCA.

Figure 2: The convergence illustration over 120 iterations on the Yale database. (a) R1-PCA. (b) 2DR1-PCA.
Figure 3: The convergence illustration over 120 iterations on the XM2VTS database. (a) R1-PCA. (b) 2DR1-PCA.

4.2 L1-PCA and 2DL1-PCA

The experimental results of L1-PCA and 2DL1-PCA are shown in Table 2.

                      ORL       Yale      XM2VTS
Recognition accuracy
  PCA                 0.90      0.77      0.71
  L1-PCA              0.90      0.76      0.71
  2DL1-PCA            0.91      0.80      0.76
Running time
  PCA                 1.37      0.36      17.27
  L1-PCA              15.96     5.15      83.52
  2DL1-PCA            3.52      3.62      40.43

Table 2: Experimental results of L1-PCA and 2DL1-PCA

From Table 2 we can see that the performance of 2DL1-PCA is better than that of L1-PCA and PCA. In 2DL1-PCA, image matrices are used directly for feature extraction, and fewer features are needed than in L1-PCA.

We conduct another experiment on the ORL database, in which different numbers of features are extracted by PCA, L1-PCA, and 2DL1-PCA and then used for face recognition. The results are shown in Fig. 4: 2DL1-PCA achieves a higher recognition accuracy with fewer features.

Figure 4: Recognition accuracy versus the number of extracted features on the ORL database.

5 Conclusions

In this paper we proposed the 2DR1-PCA and 2DL1-PCA algorithms for face recognition, extending R1-PCA and L1-PCA to their 2-D cases so that image matrices can be used directly for feature extraction. Compared to L2-norm-based methods, these non-L2-norm-based methods are less sensitive to outliers. We compared the performance of 2DR1-PCA and 2DL1-PCA against the R1-PCA and L1-PCA algorithms experimentally; the results show that 2DR1-PCA and 2DL1-PCA outperform R1-PCA and L1-PCA, respectively.

Acknowledgements.
This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61672265 and U1836218) and the 111 Project of the Ministry of Education of China (Grant No. B12018).

References

  • [1] S. Chen, X. Wu, and H. Yin (2017) KPCA method based on within-class auxiliary training samples and its application to pattern classification. Pattern Analysis and Applications 20 (3), pp. 749–767.
  • [2] C. Ding, D. Zhou, X. He, and H. Zha (2006) R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, pp. 281–288.
  • [3] N. Kwak (2008) Principal component analysis based on L1-norm maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (9), pp. 1672–1680.
  • [4] X. Liu and X. Wu (2011) ECA and 2DECA: entropy contribution based methods for face recognition inspired by KECA. In 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), pp. 544–549.
  • [5] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre (1999) XM2VTSDB: the extended M2VTS database. In Second International Conference on Audio and Video-based Biometric Person Authentication, Vol. 964, pp. 965–966.
  • [6] V. D. M. Nhat and S. Lee (2005) Two-dimensional weighted PCA algorithm for face recognition. In 2005 International Symposium on Computational Intelligence in Robotics and Automation, pp. 219–223.
  • [7] M. Turk and A. Pentland (1991) Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1), pp. 71–86.
  • [8] W. Xiao-Jun, J. Kittler, Y. Jing-Yu, K. Messer, and W. Shitong (2004) A new direct LDA (D-LDA) algorithm for feature extraction in face recognition. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 4, pp. 545–548.
  • [9] A. Xu, X. Jin, Y. Jiang, and P. Guo (2006) Complete two-dimensional PCA for face recognition. In 18th International Conference on Pattern Recognition (ICPR'06), Vol. 3, pp. 481–484.
  • [10] J. Yang, D. Zhang, A. F. Frangi, and J. Yang (2004) Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (1), pp. 131–137.
  • [11] M. Yang, N. Ahuja, and D. Kriegman (2000) Face recognition using kernel eigenfaces. In Proceedings of the 2000 International Conference on Image Processing, Vol. 1, pp. 37–40.
  • [12] Y. Zheng, J. Yang, J. Yang, X. Wu, and Z. Jin (2006) Nearest neighbour line nonparametric discriminant analysis for feature extraction. Electronics Letters 42 (12), pp. 679–680.
  • [13] Y. Zheng, J. Yang, J. Yang, and X. Wu (2006) A reformative kernel Fisher discriminant algorithm and its application to face recognition. Neurocomputing 69 (13–15), pp. 1806–1810.