Advanced Variations of Two-Dimensional Principal Component Analysis for Face Recognition

12/19/2019 ∙ by Meixiang Zhao, et al.

The two-dimensional principal component analysis (2DPCA) has become one of the most powerful tools in artificial intelligence. In this paper, we review 2DPCA and its variations, and propose a general ridge regression model that extracts features in both the row and column directions. To enhance the generalization ability of the extracted features, a novel relaxed 2DPCA (R2DPCA) is proposed under a new ridge regression model. R2DPCA generates a weighting vector by utilizing the label information, and maximizes a relaxed criterion with an optimization algorithm to obtain the essential features. R2DPCA-based approaches for face recognition and image reconstruction are also proposed, in which the selected principal components are weighted to enhance the role of the main components. Numerical experiments on well-known standard databases indicate that R2DPCA has high generalization ability and achieves a higher recognition rate than state-of-the-art methods, including deep learning methods such as CNNs, DBNs, and DNNs.

1 Introduction

Two-Dimensional Principal Component Analysis (2DPCA) yzfy04 and its variations (e.g., zhzh05 ; lpy10 ; whwj13 ; wangj16 ; gxcdgl19 ) are playing an increasingly important role in recently proposed deep learning frameworks such as 2DPCANet yuwu17 ; lwk19 . 2DPCA is expected to extract the spatial information and the best features of 2-D samples, which improves the performance of dimensionality reduction. From the viewpoint of numerical linear algebra, the principle of 2DPCA is to find a subspace (spanned by the so-called eigenfaces or features) on which the projected samples have the largest variance. The reconstruction from such a projection, or extraction of lower dimension, is in fact the optimal low-rank approximation of the original sample. When applying 2DPCA to face recognition, the eigenfaces or features are computed from the training set and then used to compress both the training and testing samples before classification. An implicit and natural assumption is that the projected samples from the testing set still have a large variance on the computed subspace; this depends exactly on the generalization ability of 2DPCA. In this paper, we review 2DPCA and its variations, and present a new relaxed 2DPCA (R2DPCA) with improvements in three aspects: extracting the features of matrix samples in both the row and column directions, being innovatively armed with generalization ability, and weighting the main components by the corresponding eigenvalues. In particular, R2DPCA utilizes the label information of the training data and does not merely aim to enlarge the variance of the projections of the training samples.

The principal component analysis (PCA) Jolliffe04 ; TP91 has become one of the most powerful approaches to face recognition siki87 ; kisi90 ; tupe91 ; zhya99 ; pent00 . Recently, many robust PCA (RPCA) algorithms have been proposed by replacing the quadratic formulation, which renders PCA vulnerable to noise, with an L1-norm in the objective function, e.g., L1-PCA keka05 , R1-PCA dzhz06 , and PCA-L1 kwak08 . Meanwhile, sparsity has also been introduced into PCA algorithms, resulting in a series of sparse PCA (SPCA) algorithms zht06 ; agjl07 ; shhu08 ; wth09 . A newly proposed robust SPCA (RSPCA) mzx12 further applies the L1-norm in both the objective and constraint functions of PCA, inheriting the merits of robustness and sparsity. Observing that the L1- and L2-norms are special cases of the Lp-norm, it is natural to impose the Lp-norm on the objective or/and constraint functions; see PCA-Lp kwak14 and generalized PCA (GPCA) lxzzl13 for instance.

To preserve the spatial structure of face images, two-dimensional PCA (2DPCA), proposed by Yang et al. yzfy04 , represents face images as two-dimensional matrices rather than one-dimensional vectors. The computational problems based on 2DPCA are of much smaller scale than those based on PCA, and the difficulties caused by rank deficiency are also avoided in general. This image-as-matrix method offers insights for improving the above RSPCA, PCA-Lp, GPCA, etc. As typical examples, the L1-norm-based 2DPCA (2DPCA-L1) lpy10 and 2DPCA-L1 with sparsity (2DPCAL1-S) whwj13 are improvements of PCA-L1 and RSPCA, respectively, and the generalized 2DPCA (G2DPCA) wangj16 imposes the Lp-norm on both the objective and constraint functions of 2DPCA. Recently, the quaternion 2DPCA was proposed in jlz17 and applied to color face recognition, where the red, green, and blue channels of a color image are encoded as the three imaginary parts of a pure quaternion matrix. To arm the quaternion 2DPCA with generalization ability, Zhao, Jia and Gong zjg17 proposed the sample-relaxed quaternion 2DPCA, which applies the label information (if known) of the training samples. Structure-preserving algorithms for the quaternion eigenvalue decomposition and singular value decomposition can be found in jwl13 ; mjb18 ; jmz17 ; jwzc18 ; jns18b ; jns18a .

Linear Discriminant Analysis (LDA) is another powerful feature extraction algorithm in pattern recognition and computer vision. Since LDA often suffers from the small sample size (3S) problem, some effective approaches have been proposed, such as PCA + LDA bhk97 , orthogonal LDA ye05 , LDA/GSVD hopa04 , and LDA/QR yeli05 . Because of their advantages regarding the singularity problem and the computational cost, 2DLDA and its variants have recently attracted much attention from researchers (e.g., lls08 ; kora05 ; xsa05 ; yyfz03 ; liyu05 ; cckkl06 ). By applying the label information, the LDA-like methods intend to compute the discriminant vectors that maximize the ratio of the between-class distance to the within-class distance.

PCA, 2DPCA and their variations are unsupervised methods that do not apply the potential or known label information of the samples. Their features are calculated based on the training set and thus maximize the scatter of the projected training samples. The scatter of the projected testing samples is not necessarily optimal, and neither, certainly, is that of the whole set of (training and testing) projected samples. Inspired by this observation, we present a new relaxed 2DPCA (R2DPCA). This approach is a generalization of G2DPCA wangj16 and reduces to G2DPCA if the label information is unknown or unused. Remark that the projection of R2DPCA does not aim to maximize the variance of the training samples as 2DPCA does, but intends to avoid overfitting and to enhance the generalization ability. R2DPCA sufficiently utilizes the labels (if known) of the training samples and can enhance the total scatter of the whole set of projected samples (see Example 4.3 for an indication). Different from the idea of LDA, R2DPCA applies the label information to generate a weighting vector and to construct a weighted covariance matrix in the newly proposed face recognition approach. Thus R2DPCA never suffers from the small sample size (3S) problem.

Our contributions are in three aspects. (1) We present a new ridge regression model for 2DPCA and its variations by the Lp-norm. This model is general and extracts features of face images in both the row and column directions. With this model, 2DPCA and its variations can be combined with additional regularization on the solution to fit various real-world applications, with great flexibility. (2) A novel relaxed 2DPCA (R2DPCA) is proposed with a new ridge regression model. R2DPCA has stronger generalization ability than 2DPCA, 2DPCA-L1, 2DPCAL1-S and G2DPCA. To the best of our knowledge, we are the first to introduce the label information into 2DPCA-based algorithms. We also weight the selected principal components by the corresponding eigenvalues to enhance the role of the main components. (3) R2DPCA-based approaches are presented for face recognition and image reconstruction, and their effectiveness is verified by applying them to practical face image databases. In the numerical examples they are shown to perform better than deep learning methods such as DNNs, DBNs and CNNs.

The rest of this paper is organized as follows. In Section 2, we recall 2DPCA, 2DPCA-L1, 2DPCAL1-S and G2DPCA, and present a ridge regression model that gathers them together. Their improved versions are also proposed. In Section 3, we present a new relaxed two-dimensional principal component analysis (R2DPCA) and the corresponding optimization algorithms. We also present the R2DPCA-based approaches for face recognition and image reconstruction. In Section 4, we compare R2DPCA with the state-of-the-art approaches and indicate its efficiency. In Section 5, we sum up the contributions of this paper.

2 The General Ridge Regression Model of 2DPCA and Variations

The two-dimensional principal component analysis (2DPCA) has become one of the most popular and powerful methods in data science, especially in image recognition. Several deep learning frameworks, which rely heavily on the properties of 2DPCA, have achieved high-level performance in data analysis. This motivates us to develop a general model of 2DPCA and its variations, providing a feasible way to embed them into artificial intelligence algorithms. In this section, we first present a ridge regression model of the improved 2DPCA, and then analyze the relationships among the state-of-the-art variations of 2DPCA.

2.1 A New Ridge Regression Model of Improved 2DPCA

The objective of 2DPCA is to find left and/or right orthonormal basis vectors such that the projected matrix samples have the largest scatter after projection. Suppose that there are n training matrix samples X_1, …, X_n of size h × w, where h and w denote the height and width of the images, respectively. Their mean value is X̄ = (1/n) ∑_{i=1}^{n} X_i. Let U (of size h × r) and V (of size w × c) gather the left and right optimal basis vectors as columns, respectively. Then the i-th projected matrix sample is defined by Y_i = U^T X_i V. The improved 2DPCA seeks optimal U and V that maximize the scatter of the projected matrix samples. This scatter is characterized as

tr(U^T G_c U) + tr(V^T G_r V),   (1)

where

G_c = (1/n) ∑_{i=1}^{n} (X_i − X̄)(X_i − X̄)^T,   G_r = (1/n) ∑_{i=1}^{n} (X_i − X̄)^T (X_i − X̄),   (2)

denote the covariance matrices of the input samples in the column and row directions, respectively. Both G_c and G_r are symmetric and positive semi-definite matrices. Since U and V are of full column rank, tr(U^T G_c U) and tr(V^T G_r V) are nonnegative. Here, tr(·) represents the trace of a matrix. Thus, a new ridge regression model for the improved 2DPCA is proposed as

(3a)   max tr(U^T G_c U)   subject to   U^T U = I_r,
(3b)   max tr(V^T G_r V)   subject to   V^T V = I_c.

To solve the optimization problem (3), we need to compute the eigenvalue decompositions of G_c and G_r. See Algorithm 2.1 for details.

Algorithm 2.1 (Improved 2DPCA).
Input: the matrix samples X_1, …, X_n and the two dimensions r and c. Output: the left and right optimal bases U and V. Compute the covariance matrices G_c and G_r of the training samples in the column and row directions as in (2). Compute the r largest eigenvalues of G_c and the corresponding eigenvectors, denoted as u_1, …, u_r; let U = [u_1, …, u_r]. Compute the c largest eigenvalues of G_r and the corresponding eigenvectors, denoted as v_1, …, v_c; let V = [v_1, …, v_c].
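As an illustration, the following is a minimal NumPy sketch of Algorithm 2.1 under the notation above; the function name and the use of eigh for the symmetric eigenvalue problems are our own choices, not part of the paper.

import numpy as np

def improved_2dpca(X, r, c):
    """Sketch of Algorithm 2.1: bilateral 2DPCA bases from the
    column- and row-direction covariance matrices in (2).
    X: array of shape (n, h, w); returns U (h x r) and V (w x c)."""
    n, h, w = X.shape
    Xc = X - X.mean(axis=0)                      # center the samples
    Gc = sum(A @ A.T for A in Xc) / n            # column-direction covariance, h x h
    Gr = sum(A.T @ A for A in Xc) / n            # row-direction covariance, w x w
    # eigh returns eigenvalues in ascending order; take the largest r (resp. c)
    _, Uc = np.linalg.eigh(Gc)
    _, Vr = np.linalg.eigh(Gr)
    U = Uc[:, ::-1][:, :r]
    V = Vr[:, ::-1][:, :c]
    return U, V

# usage: project a sample X_i to its feature matrix Y_i = U^T X_i V, e.g.
# X = np.random.rand(20, 80, 80); U, V = improved_2dpca(X, 10, 10)
# Y0 = U.T @ (X[0] - X.mean(axis=0)) @ V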

2.2 Improved Variations of 2DPCA and Optimal Algorithms

Based on the idea in Section 2.1, we present the improved versions of 2DPCA yzfy04 , 2DPCA-L1 lpy10 , 2DPCAL1-S whwj13 , and G2DPCA wangj16 , whose ridge regression models are proposed in the form of computing the first projection vectors.

Without loss of generality, we assume that the training samples are mean-centered, i.e., X̄ = 0; otherwise, we replace each X_i by X_i − X̄. After the first k − 1 left and right projection vectors u_1, …, u_{k−1} and v_1, …, v_{k−1} have been obtained, the k-th left and right projection vectors u_k and v_k can be calculated similarly on deflated samples:

(4)

where the deflation removes from each sample the components along the projection vectors already extracted. The ridge regression models of the improved 2DPCA and its variations find the first left and right projection vectors u_1 and v_1 by solving optimization problems with equality constraints as follows.

  • The improved 2DPCA:

    (5)
  • The improved 2DPCA-L1:

    (6)
  • The improved 2DPCAL1-S:

    (7)

    where the sparsity parameter is a positive constant.

  • The improved G2DPCA:

    (8)

    where the norm parameters of the objective and constraint functions are positive constants.

Since the two independent variables in models (5)-(8) are separated, it is appropriate to solve for u_1 and v_1 separately by optimization algorithms. Taking the improved G2DPCA (8) for instance, the first projection vector w (either u_1 or v_1) is computed by solving the optimization problem with equality constraints:

(9)

where the training samples are stacked into a single data matrix along the row or column direction, and w denotes the corresponding first projection vector. Depending on the value of the constraint norm parameter, the projection vector w can be updated in two different ways. In the first case,

(10a)
(10b)

where the normalizing scalar is chosen so that the equality constraint in (9) is satisfied, and ⊙ denotes the Hadamard product, i.e., the element-wise product of two vectors. In the second case,

(11a)
(11b)

Notice that if the terms containing one of the two projection variables are omitted, the models (5)-(8) reduce exactly to the ridge regression models of the well-known 2DPCA yzfy04 , 2DPCA-L1 lpy10 , 2DPCAL1-S whwj13 , and G2DPCA wangj16 algorithms.
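To illustrate how such models are typically solved, the following Python sketch gives a generalized power iteration for a problem of the form max ||Xw||_s^s subject to ||w||_p = 1, i.e., the kind of alternation described above: a gradient-like step on the objective followed by a mapping back onto the constraint (written here for p > 1). The function name, the specific update formulas, and the stopping rule are our own assumptions, not a transcription of (9)-(11).

import numpy as np

def g2dpca_first_vector(X, s=2.0, p=2.0, n_iter=100, tol=1e-8):
    """Hedged sketch of a generalized power iteration for
    max ||X w||_s^s  subject to  ||w||_p = 1  (assuming s >= 1 and p > 1).
    X: (m, d) matrix of stacked, centered image rows."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w, p)
    for _ in range(n_iter):
        z = X @ w
        # gradient-like direction of the Ls objective
        beta = X.T @ (np.sign(z) * np.abs(z) ** (s - 1))
        # map back onto the Lp unit sphere (valid for p > 1)
        w_new = np.sign(beta) * np.abs(beta) ** (1.0 / (p - 1))
        w_new /= np.linalg.norm(w_new, p)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

With s = p = 2 this reduces to the ordinary power iteration on X^T X, i.e., to 2DPCA.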

3 Relaxed Two-Dimensional Principal Component Analysis

2DPCA is an unsupervised method and overlooks the potential or known label information of the samples. The extracted features maximize the scatter of the projected training samples, and are implicitly expected (but not guaranteed) to maximize the scatter of the projected testing samples as well. In this section, we present a new relaxed two-dimensional principal component analysis (R2DPCA) method based on the Lp-norm to avoid overfitting and to enhance the generalization ability. In a large number of experiments, R2DPCA sufficiently utilizes the labels (if known) of the training samples and enhances the total scatter of the whole set of projected samples. Interestingly, R2DPCA never suffers from the small sample size (3S) problem as supervised methods such as LDA do. We now introduce R2DPCA in two parts: the weighting vector and the objective function relaxation.

3.1 Weighting vector

Suppose that the training samples can be partitioned into a number of classes, each containing a known number of samples:

(12)

where each sample is indexed by its class and its position within the class. Define the mean of the training samples from the i-th class and the i-th within-class covariance matrix of the training set accordingly. The within-class covariance matrix is a symmetric and positive semi-definite matrix. Its maximal eigenvalue, denoted by λ_i^max, represents the variance of the training samples of the i-th class along the principal component. The larger λ_i^max is, the better scattered the training samples of the i-th class are. If λ_i^max = 0, then all training samples of the i-th class are identical, and the contribution of the i-th class to the covariance matrix of the training set should be reduced. To this end, we define a weighting vector of the training classes,

(13)

where the i-th entry is a weighting factor of the i-th class, computed from λ_i^max by a fixed function. The computation of the weighting vector is presented in Algorithm 3.2.

Algorithm 3.2 (Weighting Vector).
Input: the training samples grouped by class as in (12), the number of classes, and the number of samples in each class. Output: the weighting vector. For each class, center its training samples, form the within-class covariance matrix, and compute its maximal eigenvalue; then assemble the weighting vector defined in (13) from these maximal eigenvalues.
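The following NumPy sketch illustrates one way to realize Algorithm 3.2. Since the exact weighting function in (13) is not reproduced above, the sketch simply normalizes the per-class maximal eigenvalues, and it computes the within-class covariance in the row direction; both choices are assumptions for illustration only.

import numpy as np

def class_weighting_vector(samples_by_class):
    """Hedged sketch of Algorithm 3.2.
    samples_by_class: list of arrays, each of shape (n_j, h, w).
    Returns one weight per class, built from the largest eigenvalue of
    the within-class (row-direction) covariance matrix."""
    lam = []
    for S in samples_by_class:
        Sc = S - S.mean(axis=0)                       # center within the class
        G = sum(A.T @ A for A in Sc) / S.shape[0]     # within-class covariance
        lam.append(np.linalg.eigvalsh(G)[-1])         # maximal eigenvalue
    lam = np.array(lam)
    # assumed weighting function: normalize so that the weights sum to 1;
    # a class whose samples are all identical (lam = 0) gets weight 0
    return lam / lam.sum() if lam.sum() > 0 else np.full(len(lam), 1.0 / len(lam))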

3.2 Objective function relaxation

With the computed weighting vector in hand, we define a relaxed criterion as

(14)

where the relaxation parameter controls the strength of the relaxation, the projection vectors are unit vectors under the Lp-norm, and the covariance matrix and the relaxed covariance matrix are given by

(15a)
(15b)

The R2DPCA finds its first projection vectors by solving the optimization problem with equality constraints:

(16)

where the criterion is defined as in (14). Notice that the relaxed criterion (16) reduces to (8) when no relaxation is applied, and thus the first projection vectors of R2DPCA and G2DPCA are then the same. If the first k − 1 projection vectors have been obtained, the k-th projection vectors can be calculated similarly on the deflated samples, defined as in (4). At each iterative step, we also obtain the maximal value of the objective function.

More precisely, the first optimal projection vectors of R2DPCA solve the optimization problem with equality constraints:

(17)

Algorithm 3.3 is presented to compute the first optimal left and right projection vectors.

Now we describe the relationships among the improved 2DPCA, 2DPCA, 2DPCAL1-S, G2DPCA, and R2DPCA. It is obvious that 2DPCA and 2DPCA-L1 are two special cases of G2DPCA. 2DPCAL1-S originates from G2DPCA with the objective and constraint norms both set to the L1-norm, which leads to a projection vector with only one nonzero element; an additional norm constraint is then employed to fix this problem, resulting in 2DPCAL1-S. On the other hand, G2DPCA with an L1-norm objective and a constraint norm slightly larger than 1 behaves like 2DPCAL1-S, since such an Lp-norm constraint behaves like the mixed-norm constraint in 2DPCAL1-S. R2DPCA is a generalization of G2DPCA.

Algorithm 3.3 (Relaxed Two-Dimensional Principal Component Analysis (R2DPCA)).
Input: the training samples grouped by class as in (12) and the parameters of the relaxed criterion. (1) Compute the weighting vector by Algorithm 3.2. (2) Compute the covariance matrix and the relaxed covariance matrix as in (15). (3) Compute the left features, the right features, and the corresponding variances according to the relaxed criterion (14): initialize each projection vector with unit Lp-norm and repeat the update until convergence, where the update is split into four cases according to the values of the norm parameters; the left and right projection vectors are computed in the same way.
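To make the construction concrete, here is a minimal sketch of the quadratic special case (objective and constraint norms both equal to 2), in which the relaxed criterion can be maximized by an eigenvalue decomposition of a weighted (relaxed) covariance matrix. The way the class weights and the relaxation parameter enter the covariance is our assumption, intended only to illustrate the idea of Sections 3.1-3.2, not a transcription of (14)-(17).

import numpy as np

def r2dpca_right_basis(samples_by_class, weights, gamma, c):
    """Hedged sketch of R2DPCA for the quadratic case (right projections only).
    Each class's contribution to the covariance is scaled by its weight from
    Algorithm 3.2, interpolated by the relaxation parameter gamma in [0, 1];
    gamma = 0 recovers the ordinary (unrelaxed) 2DPCA covariance."""
    X = np.concatenate(samples_by_class, axis=0)   # all samples, shape (n, h, w)
    n, _, w = X.shape
    mean = X.mean(axis=0)
    G = np.zeros((w, w))
    for w_j, S in zip(weights, samples_by_class):
        scale = (1.0 - gamma) + gamma * w_j        # assumed relaxation of the class weight
        for A in S:
            D = A - mean
            G += scale * (D.T @ D) / n
    vals, vecs = np.linalg.eigh(G)
    V = vecs[:, ::-1][:, :c]                       # c leading eigenvectors (right features)
    lam = vals[::-1][:c]                           # corresponding variances, usable as
    return V, lam                                  # weights for the principal components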

3.3 Face recognition

Suppose that we have computed the optimal projections U and V, and the diagonal weighting matrix, by R2DPCA. The R2DPCA approach for face recognition is proposed in Algorithm 3.4.

Algorithm 3.4 (R2DPCA approach for face recognition).
Input: the training set, the optimal projections U and V, and the set of face images to be recognized. Output: the identities of the images to be recognized. Compute the features of the training face images under U and V; compute the feature of each face image to be recognized in the same way; and determine the identity of each test image by minimizing the distance between its (weighted) feature and the features of the training images.
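A minimal sketch of this classification step, assuming a nearest-neighbour rule on projected features weighted by the component variances; the weighting scheme is our reading of Section 3, not a verbatim transcription of Algorithm 3.4.

import numpy as np

def r2dpca_classify(train_imgs, train_labels, test_imgs, U, V, lam):
    """Project all images with Y = U^T X V, weight the feature columns by the
    component variances lam, and label each test image by its nearest
    training feature in Frobenius distance."""
    W = np.diag(lam)                                  # diagonal weighting matrix
    feat = lambda X: (U.T @ X @ V) @ W                # weighted feature of one image
    train_feats = [feat(X) for X in train_imgs]
    labels = []
    for X in test_imgs:
        F = feat(X)
        d = [np.linalg.norm(F - T, 'fro') for T in train_feats]
        labels.append(train_labels[int(np.argmin(d))])
    return labels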

3.4 Image reconstruction

The original digital image X_i can be optimally approximated by a low-rank reconstruction from its feature Y_i = U^T X_i V. Suppose that U_⊥ and V_⊥ are the unitary complements of U and V, respectively. For r ≤ h and c ≤ w, the reconstructions are defined as

X̃_i = U Y_i V^T,   (18)

with Y_i = U^T X_i V. Here, the mean value of the samples is assumed to be zero for simplicity. The image reconstruction rate of X̃_i is defined as follows:

(19)

Note that X̃_i is always a good approximation of X_i. If r = h and c = w, then U and V are unitary matrices and hence X̃_i = U U^T X_i V V^T = X_i, which means the reconstruction is exact.
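A short sketch of the reconstruction, together with a plausible reconstruction-rate measure (here one minus the relative Frobenius error, which is an assumption since (19) is not reproduced above).

import numpy as np

def reconstruct(X, U, V):
    """Low-rank reconstruction of a (mean-centered) image from its 2DPCA feature."""
    Y = U.T @ X @ V            # feature matrix, shape (r, c)
    return U @ Y @ V.T         # reconstruction, shape (h, w)

def reconstruction_rate(X, X_rec):
    """Assumed quality measure: 1 minus the relative Frobenius error."""
    return 1.0 - np.linalg.norm(X - X_rec, 'fro') / np.linalg.norm(X, 'fro')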

4 Experiments

In this section, we present numerical experiments to compare all the advanced variations of 2DPCA, including the relaxed two-dimensional principal component analysis (R2DPCA), with the state-of-the-art algorithms. The numerical experiments are performed with MATLAB R2016 on a personal computer with an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (dual processor) and 32GB of RAM.

Example 4.1.

In this experiment, we compare R2DPCA with 2DPCA, 2DPCA-L1, 2DPCAL1-S, and G2DPCA on face recognition, using three well-known databases as follows:

  • Faces95 database (Collection of Facial Images: Faces95, http://cswww.essex.ac.uk/mv/allfaces/faces95.html): 1440 images from 72 subjects, twenty images per subject;

  • Color FERET database (the color Face Recognition Technology (FERET) database, https://www.nist.gov/itl/iad/image-group/color-feret-database): 3025 images from 275 subjects, eleven images per subject;

  • Grey FERET database (the widely used cropped version of the FERET database): 1400 images from 200 subjects, seven images per subject.

All face images are cropped and resized so that each image is of size 80 × 80. The basic setting is that a fixed number of face images of each person from the Faces95 and (color or grey) FERET face databases are randomly selected as training samples, and the remaining ones are left for testing.

We test the effect of the number of chosen features on the recognition accuracy. The norm parameters are fixed at the optimal values reported in Table 1. The recognition accuracies of 2DPCA, 2DPCA-L1, 2DPCAL1-S, G2DPCA and R2DPCA with different feature numbers on the grey FERET database are shown in Figure 1. The recognition accuracies of G2DPCA and R2DPCA on the Faces95 and Color FERET databases are shown in Figure 2.

From the numerical results in Table 1 and Figures 1 and 2, we can conclude that the classification accuracies of R2DPCA are higher and more stable than those of 2DPCA, 2DPCA-L1, 2DPCAL1-S and G2DPCA when the number of chosen features is large. The recognition accuracies of G2DPCA and R2DPCA coincide in the case where neither relaxation nor weighting takes effect.

Algorithms    Faces95    color FERET
2DPCA         –          –
2DPCA-L1      –          –
2DPCAL1-S     –          –
G2DPCA        –          –
R2DPCA        0.9493     0.7085
Table 1: Recognition accuracies of five algorithms.
Figure 1: Recognition accuracies of 2DPCA, 2DPCA-L1, 2DPCAL1-S, G2DPCA and R2DPCA with varying numbers of features on the grey FERET database.
Figure 2: Recognition accuracies of R2DPCA and G2DPCA with varying numbers of features on the Faces95 (top) and Color FERET (bottom) databases.
Example 4.2.

In this experiment, we compare R2DPCA with three of the most prominent deep learning primitives: Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs) and Deep Neural Networks (DNNs). These methods are applied to a partial MNIST database of handwritten digits, consisting of a training set and a test set of digit images. The size of each image is 28 × 28 pixels. The codes of the CNNs, DBNs and DNNs follow palm12 and nielsen13 , and the settings are as follows.

Deep Neural Networks (DNNs) are implemented by stacking layers of neural networks along the depth and width of smaller architectures. A four-layer neural network is used in our tests. The input layer of the network contains 784 neurons (one per pixel) and the output layer contains 10 neurons. The numbers of neurons in the first and second hidden layers are varied in the tests, with the first increasing over a range of values while the second is fixed. All weights and biases are initialized randomly and updated by the error back-propagation algorithm. The iteration stops once the convergence condition is reached; in our tests, this condition is that the current accuracy on the test samples is lower than that of the previous iteration more than three times.

Deep Belief Networks (DBNs) consist of a number of layers of Restricted Boltzmann Machines (RBMs) which are trained in a greedy layer-wise fashion. The lowest layer is the same as the input layer in the DNNs, and the top layer acts as the hidden layer. In our experiment, a four-layer network consisting of two RBMs is constructed. The number of hidden neurons in the first RBM is varied over a range of values, and the number of hidden neurons in the second RBM is fixed. Each RBM is trained in a layer-wise greedy manner with contrastive divergence. All weights and biases are initialized to zero. Each RBM is trained on the full training set of images, using mini-batches with a fixed learning rate for one epoch, where one epoch is one full sweep of the data. Having trained the first RBM, the entire training dataset is transformed through the first RBM, resulting in a new dataset on which the second RBM is trained. The trained weights and biases are then used to initialize a feed-forward neural network, the last 10 neurons being the output label units. The feed-forward neural network is trained with the sigmoid activation function using backpropagation, again with mini-batches and a fixed learning rate for one epoch. Finally, the test samples are passed through the feed-forward network and the maximum output unit gives their labels.

Convolutional Neural Networks (CNNs) are feed-forward, back-propagating neural networks with a special architecture inspired by the visual system, consisting of alternating convolution layers and sub-sampling layers. What is different is that CNNs work on the two-dimensional data directly. In our experiment, we set two convolution layers and two sub-sampling layers. The first layer has k feature maps, where k is varied over a range of values, connected to the single input layer through convolution kernels. The second layer is a mean-pooling layer. The third layer has a fixed number of feature maps, all connected to all k mean-pooling maps below through convolution kernels. The fourth layer is again a mean-pooling layer. After the above steps, the feature maps are concatenated into a feature vector which feeds into the final layer of 10 output neurons, corresponding to the 10 class labels. The CNNs are trained with stochastic gradient descent on the training set, using mini-batches with a fixed learning rate for one epoch. Finally, the test samples are put through the trained networks and the outputs are compared with their true labels to obtain the recognition rate.

Figure 3: Recognition accuracies on the MNIST database.

The numerical results are shown in Figure 3. We can see that R2DPCA performs better than the CNNs, DBNs and DNNs in recognition accuracy. It should be noticed that the recognition rates of the CNNs, DBNs and DNNs cannot reach high levels with small sample sizes, but increase as the amount of training samples becomes larger.

Example 4.3.

In this experiment, we indicate the generalization ability of R2DPCA. A set of randomly generated points is equally separated into two classes. A number of points from each class are chosen as training samples (plotted in magenta) and the rest as testing samples (plotted in blue). The principal component of the training points is computed by 2DPCA and by R2DPCA. In three random cases, the principal components computed by the two methods are plotted as black lines, and the weighting vector of R2DPCA is computed for each case. The variances (the larger the better) of the training set and of the whole set of points, under the projections of 2DPCA and R2DPCA, are shown in Table 2.

Table 2: Variances of the training points, the testing points, and the whole set of points under the projections of 2DPCA and R2DPCA in three random cases.
Figure 4: 2DPCA (top) and R2DPCA (below) in a random case: randomly generated points are equally separated into two classes; a number of points from each class are chosen as training samples (magenta) and the rest as testing samples (blue). The principal components of the training points are plotted with the blue lines.

5 Conclusion

This paper surveys recent developments of 2DPCA. We present a general ridge regression model for 2DPCA and its variations based on the Lp-norm, with improved feature extraction in both the row and column directions. To enhance the generalization ability, the relaxed 2DPCA (R2DPCA) is proposed under this general ridge regression model. R2DPCA is a generalization of 2DPCA, 2DPCA-L1 and G2DPCA, and has higher generalization ability. Since it utilizes the label information, R2DPCA can be seen as a new supervised projection method, but it is totally different from the two-dimensional linear discriminant analysis (2DLDA) ye05 ; lls08 . R2DPCA-based approaches for face recognition and image reconstruction are also proposed, and the selected principal components are weighted to enhance the role of the main components. The properties and the effectiveness of the proposed methods are verified on practical face image databases. In the numerical experiments, R2DPCA performs better than 2DPCA, 2DPCA-L1, G2DPCA, CNNs, DBNs, and DNNs.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under grant 11771188 and by a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

References

  • (1) J. Yang, D. Zhang, A. F. Frangi, J. Y. Yang (2004) Two-dimensional PCA: A new approach to appearance-based face representation and recognition, IEEE Trans. Pattern Anal. Mach. Intell., 26 (1), pp. 131-137.
  • (2) D. Zhang, Z.-H. Zhou (2005) (2D)²PCA: Two-directional two-dimensional PCA for efficient face representation and recognition, Neurocomputing, 69(1-3), pp. 224-231.
  • (3) X. Li, Y. Pang, and Y. Yuan (2010) L1-norm-based 2DPCA, IEEE Trans. Syst., Man, Cybern. B, Cybern., 40 (4), pp. 1170-1175.
  • (4) H. Wang and J. Wang (2013) 2DPCA with L1-norm for simultaneously robust and sparse modelling, Neural Netw., 46, pp. 190-198.
  • (5) J. Wang (2016) Generalized 2-D Principal Component Analysis by Lp-Norm for Image Analysis, IEEE Trans. Cybern., 46 (3), pp. 792-803.
  • (6) Q. Gao et al. (2019) R1-2-DPCA and face recognition, IEEE Trans. Cybern., 49(4), pp. 1212-1223.
  • (7) D. Yu, X.-J. Wu (2017) 2DPCANet: a deep leaning network for face recognition, Multimed. Tools Appl., 4, pp. 1-16.
  • (8) Y.-K. Li, X.-J. Wu, and J. Kittler (2019) L1-2D2PCANet: a deep learning network for face recognition, J. of Electronic Imaging, 28(2), pp. 023016 (20 March 2019). https://doi.org/10.1117/1.JEI.28.2.023016
  • (9) I. Jolliffe (2004) Principal Component Analysis, New York, NY, USA: Springer.
  • (10) M. Turk, A. Pentland (1991) Eigenfaces for recognition, J. Cogn. Neurosci., 3 (1), pp. 71-86.
  • (11) L. Sirovich, M. Kirby (1987) Low-dimensional procedure for characterization of human faces, J. Optical Soc. Am. 4, pp. 519-524.
  • (12) M. Kirby, L. Sirovich (1990) Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Trans. Pattern Anal. Mach. Intell., 12 (1), pp. 103-108.
  • (13) M. Turk, A. Pentland (1991) Eigenfaces for recognition. J. Cognitive Neurosci, 3(1), pp. 71-76.
  • (14) L. Zhao, Y. Yang (1999) Theoretical analysis of illumination in PCA-based vision systems, Pattern Recogn., 32(4), pp. 547-564.
  • (15) A. Pentland (2000) Looking at people: sensing for ubiquitous and wearable computing, IEEE Trans. Pattern Anal. Mach. Intell., 22 (1), pp. 107-119.
  • (16) Q. Ke and T. Kanade (2005) Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming, Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1, San Diego, CA, USA, pp. 739-746.
  • (17) C. Ding, D. Zhou, X. He, and H. Zha (2006) R1-PCA: Rotational invariant R1-norm principal component analysis for robust subspace factorization, Proc. 23rd Int. Conf. Mach. Learn., Pittsburgh, PA, USA, pp. 281-288.
  • (18) N. Kwak (2008) Principal component analysis based on L1-norm maximization, IEEE Trans. Pattern Anal. Mach. Intell., 30 (9), pp. 1672-1680.
  • (19) H. Zou, T. Hastie, and R. Tibshirani (2006) Sparse principal component analysis, J. Comput. Graph. Stat., 15 (2), pp. 265-286.
  • (20) A. d’Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. Lanckriet (2007) A direct formulation for sparse PCA using semidefinite programming, SIAM Rev., 49 (3), pp. 434-448.
  • (21) H. Shen and J. Z. Huang (2008) Sparse principal component analysis via regularized low rank matrix approximation, J. Multivar. Anal., 99 (6), pp. 1015-1034.
  • (22) D. M. Witten, R. Tibshirani, and T. Hastie (2009) A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis, Biostatistics, 10 (3), pp. 515-534.
  • (23) D. Meng, Q. Zhao, and Z. Xu (2012) Improve robustness of sparse PCA by L1-norm maximization, Pattern Recognit., 45 (1), pp. 487-497.
  • (24) N. Kwak (2014) Principal component analysis by Lp-norm maximization, IEEE Trans. Cybern., 44 (5), pp. 594-609.
  • (25) Z. Liang, S. Xia, Y. Zhou, L. Zhang, and Y. Li (2013) Feature extraction based on Lp-norm generalized principal component analysis, Pattern Recognit. Lett., 34 (9), pp. 1037-1045.
  • (26) Z. Jia, S. Ling, M. Zhao (2017) Color two-dimensional principal component analysis for face recognition based on quaternion model, LNCS, vol. 10361, pp. 177-189.
  • (27) M. Zhao, Z. Jia, D. Gong (2018) Sample-relaxed two-dimensional color principal component analysis for face recognition and image reconstruction, arXiv:1803.03837v1, 10 Mar 2018.
  • (28) Z. Jia, M. Wei, S. Ling (2013) A new structure-preserving method for quaternion Hermitian eigenvalue problems, J. Comput. Appl. Math. 239, pp. 12-24.
  • (29) R. Ma, Z. Jia, Z. Bai (2018) A structure-preserving Jacobi algorithm for quaternion Hermitian eigenvalue problems, Comput. Math. Appl., 75(3), pp. 809-820.
  • (30) Z. Jia, R. Ma, M. Zhao (2017) A New Structure-Preserving Method for Recognition of Color Face Images, Computer Science and Artificial Intelligence , pp. 427-432.
  • (31) Z. Jia, M. Wei, M. Zhao, Y. Chen (2018) A new real structure-preserving quaternion QR algorithm, J. Comput. Appl. Math. 343, pp. 26-48.
  • (32) Z. Jia, M.K. Ng, G. Song (2018) Lanczos method for large-scale quaternion singular value decomposition, Numer. Algorithms, 08 November 2018. https://doi.org/10.1007/s11075-018-0621-0
  • (33) Z. Jia, M.K. Ng, and G. Song (2019) Robust quaternion matrix completion with applications to image inpainting, Numer. Linear Algebra Appl., DOI:10.1002/nla.2245. http://www.math.hkbu.edu.hk/~mng/quaternion.html
  • (34) L. Mackey (2008) Deflation methods for sparse PCA, Proc. Adv. Neural Inf. Process. Syst., 21, Whistler, BC, Canada., pp. 1017-1024.
  • (35) P.N. Belhumeur, J. Hespanda, D. Kriegeman (1997) Eigenfaces vs Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Machine Intell. 19(7), pp. 711-720.
  • (36) J. Ye (2005) Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems, J. Mach. Learn. Res., 6, pp. 483-502.

  • (37) P. Howland, H. Park (2004) Generalized discriminant analysis using the generalized singular value decomposition, IEEE Trans. Pattern Anal. Machine Intell. 8, pp. 995-1006.
  • (38) J. Ye, Q. Li (2005) A two-stage discriminant analysis via QR decomposition, IEEE Trans. Pattern Anal. Machine Intell., 27(6), pp. 929-941.

  • (39) Z.Z. Liang, Y.F. Li, P.F. Shi (2008) A note on two-dimensional linear discriminant analysis, Pattern Recognit. Lett. 29, pp. 2122-2128.
  • (40) S. Kongsontana, Y. Rangsanseri (2005) Face recognition using 2DLDA algorithm, In: Proc. 8th Internat. Symp. Signal Process. Appl., pp. 675-678.
  • (41) H. Xiong, M.N.S. Swamy, M.O. Ahmad (2005) Two-dimensional FLD for face recognition. Pattern Recognition 38 (7), 1121-1124.
  • (42) J. Yang, J.Y. Yang, A.F. Frangi, D. Zhang (2003) Uncorrelated projection discriminant analysis and its application to face image feature extraction, Internat. J. Pattern Recognition Artificial Intell. 17 (8), pp. 1325-1347.
  • (43) M. Li, B. Yuan (2005) 2D-LDA: A novel statistical linear discriminant analysis for image matrix. Pattern Recognition Lett. 26 (55), pp. 527-532.
  • (44) D. Cho, U. Chang, K. Kim, B. Kim, S. Lee (2006) (2D)2DLDA for efficient face recognition. LNCS 4319, pp. 314-321.
  • (45) R. B. Palm (2012) Prediction as a candidate for learning deep hierarchical models of data, Technical University of Denmark.
  • (46) M. Nielsen (2013) Neural Networks and Deep Learning [online]. http://neuralnetworksanddeeplearning.com