1 Introduction
In this paper, we present a sample-relaxed color two-dimensional principal component analysis (SR2DCPCA) approach for face recognition and image reconstruction based on quaternion models. Different from 2DPCA [23] and 2DCPCA [8], SR2DCPCA utilizes the variances of the training samples with the same label, aims to maximize the variance of all (training and testing) projected images, and is comparable in feasibility and effectiveness to state-of-the-art methods.
Color information is one of the most important characteristics reflecting the structural information of a face image. It can help humans to accurately identify, segment or watermark color images (see [1], [3]-[7], [13, 17, 20, 21, 24, 25] for example). Various traditional methods of (grey) face recognition have been improved by fully exploiting color cues. Torres et al. [18] applied the traditional PCA to the R, G and B color channels, respectively, and fused the recognition results from the three color channels. Xiang, Yang and Chen [22]
proposed a CPCA approach for color face recognition. They utilized a color image matrix-representation model based on the framework of PCA and applied 2DPCA to compute the optimal projection for feature extraction. Recently, the quaternion PCA (QPCA)
[2], the two-dimensional QPCA (2DQPCA) and bidirectional 2DQPCA [16], and the kernel QPCA (KQPCA) and two-dimensional KQPCA [3] have been proposed to generalize the conventional PCA and 2DPCA to color images, using the quaternion representation. These approaches have achieved significant success in improving the robustness and the recognition rate by utilizing color information.

The PCA-like methods for face recognition are based on linear dimension-reducing projections. The projection directions are exactly the orthonormal eigenvectors of the covariance matrix of the training samples, with the aim of maximizing the total scatter of the projected training images. Sirovich and Kirby [14, 10] applied the PCA approach to the representation of (grey) face images, and asserted that any face image can be approximately reconstructed from a facial basis and a mean face image. Based on this assertion, Turk and Pentland [19] proposed the well-known eigenface method for face recognition. Following that, many properties of PCA have been studied, and PCA has gradually become one of the most powerful approaches for face recognition [12]. As a breakthrough development, Yang et al. [23] proposed the 2DPCA approach, which constructs the covariance matrix directly from 2D face image matrices. Generally, the covariance matrix in 2DPCA is of a smaller dimension than that in PCA, which reduces storage and computational operations. Moreover, the 2DPCA approach conveniently exploits the spatial information of face images, and achieves a higher face recognition rate than PCA in most cases.
Retaining the advantages of 2DPCA, the 2DCPCA approach recently proposed in [8] is based on quaternion matrices rather than quaternion vectors, and hence can fully utilize the color information and spatial structure of face images. In this method, the generated covariance matrix is Hermitian and low-dimensional. The projection directions are the eigenvectors of the covariance matrix corresponding to the largest eigenvalues. We find that 2DCPCA ignores label information (if provided) and the scatter of training samples with the same label. If this information is considered when constructing a covariance matrix, the recognition rate of 2DCPCA can be improved. To the best of our knowledge, there has been no face recognition approach based on the framework of 2DCPCA that uses this information.
The paper is organized as follows. In Section 2, we recall the two-dimensional color principal component analysis approach proposed in [8]. In Section 3, we present a sample-relaxed two-dimensional color principal component analysis approach based on quaternion models. In Section 4, we provide the theory of the new approach. In Section 5, numerical experiments are conducted on the Georgia Tech face database and the color FERET face database. Finally, the conclusion is drawn in Section 6.
2 2DCPCA
We first recall the two-dimensional color principal component analysis (2DCPCA) from [8].
Suppose that a quaternion is of the form q = q0 + q1 i + q2 j + q3 k, where i, j, k are three imaginary units satisfying

(2.1) i^2 = j^2 = k^2 = ijk = -1,

and q0, q1, q2, q3 are real numbers. A quaternion with q0 = 0 is called a pure quaternion, and a pure quaternion matrix is a matrix whose elements are pure quaternions or zero. In the RGB color space, a pixel can be represented by a pure quaternion, q = r i + g j + b k, where r, g and b stand for the values of the Red, Green and Blue components, respectively. An m × n color image can be saved as an m × n pure quaternion matrix, F = (f_st), in which each element, f_st = r_st i + g_st j + b_st k, denotes one color pixel, and r_st, g_st and b_st are nonnegative integers [11].
Suppose that there are s training color image samples in total, denoted as pure quaternion matrices F_1, F_2, ..., F_s. The mean image of all the training color images can be defined as

(2.2) Ψ = (1/s) Σ_{l=1}^{s} F_l.
Model 1 (2DCPCA).
Compute the color image covariance matrix of the training samples,

(2.3) G = (1/s) Σ_{l=1}^{s} (F_l − Ψ)^H (F_l − Ψ),

and project the face images onto the eigenvectors of G corresponding to its largest eigenvalues.

2DCPCA based on quaternion models can preserve the color and spatial information of face images, and the computational complexity of its quaternion operations is similar to the computational complexity of the real operations of 2DPCA (proposed in [23]).
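A practical way to compute the leading eigenpairs of the Hermitian quaternion covariance matrix (a simple alternative to the structure-preserving algorithm of [9]) is through the complex adjoint representation chi(Q). The sketch below is ours; the image encoding and function names are illustrative assumptions:

```python
import numpy as np

def chi(C1, C2):
    """Complex adjoint of the quaternion matrix Q = C1 + C2*j, where
    C1 = A0 + i*A1 and C2 = A2 + i*A3 for Q = A0 + A1 i + A2 j + A3 k."""
    return np.block([[C1, C2], [-C2.conj(), C1.conj()]])

def covariance_eigenfaces(images, k):
    """2DCPCA-style sketch: `images` is a list of (R, G, B) real m x n arrays,
    each viewed as the pure quaternion matrix R*i + G*j + B*k.  Returns the k
    largest eigenvalues of the covariance matrix and the corresponding
    quaternion eigenvectors, via the complex adjoint representation."""
    # pure quaternion F = R i + G j + B k  ->  C1 = i*R, C2 = G + i*B
    cs = [(1j * R, G + 1j * B) for (R, G, B) in images]
    s = len(cs)
    mean1 = sum(c1 for c1, _ in cs) / s          # C1 part of the mean image
    mean2 = sum(c2 for _, c2 in cs) / s          # C2 part of the mean image
    n = cs[0][0].shape[1]
    Gcov = np.zeros((2 * n, 2 * n), dtype=complex)
    for c1, c2 in cs:
        D = chi(c1 - mean1, c2 - mean2)
        Gcov += D.conj().T @ D / s               # chi(G) = (1/s) sum chi(D)^H chi(D)
    vals, vecs = np.linalg.eigh(Gcov)            # ascending; eigenvalues doubled
    vals, vecs = vals[::-1], vecs[:, ::-1]
    idx = np.arange(0, 2 * k, 2)                 # one representative per pair
    w1, w2 = vecs[:n, idx], vecs[n:, idx]
    # adjoint eigenvector [w1; w2] <-> quaternion eigenvector w1 - conj(w2)*j
    return vals[idx], (w1, -w2.conj())
```

Each eigenvalue of the adjoint appears with doubled multiplicity, so every second eigenvalue of the 2n × 2n complex matrix is kept.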
3 Sample-relaxed 2DCPCA
In this section, we present a sample-relaxed 2DCPCA approach based on quaternion models for face recognition.
Suppose that all the color image samples of the training set can be partitioned into c classes, the i-th of which contains s_i samples:

F_1^(i), F_2^(i), ..., F_{s_i}^(i), i = 1, 2, ..., c,

where F_j^(i) represents the j-th sample of the i-th class. Now, we define a relaxation vector using the label and the variance within each class, and generate a sample-relaxed covariance matrix of the training set in the following.
3.1 The relaxation vector
Define the mean image of the training samples from the i-th class by

Ψ_i = (1/s_i) Σ_{j=1}^{s_i} F_j^(i),

and the i-th within-class covariance matrix by

(3.1) C_i = (1/s_i) Σ_{j=1}^{s_i} (F_j^(i) − Ψ_i)^H (F_j^(i) − Ψ_i),

where i = 1, 2, ..., c and j = 1, 2, ..., s_i.
The within-class covariance matrix, C_i, is a Hermitian quaternion matrix and is positive semidefinite, so its eigenvalues are nonnegative. The maximal eigenvalue of C_i, denoted as λ_i, represents the variance of the training samples of the i-th class in the principal component. Generally, the larger λ_i is, the better scattered the training samples of the i-th class are. If λ_i = 0, all the training samples in the i-th class are the same, and the contribution of the i-th class to the covariance matrix of the training set should be controlled by a small relaxation factor.
Now, we define a relaxation vector for the training classes.
Definition 3.1.
Suppose that the training set has c classes and the covariance matrix of the i-th class is C_i. Then the relaxation vector can be defined as

(3.2) t = (t_1, t_2, ..., t_c),

where t_i, determined by the maximal eigenvalue λ_i of C_i, is the relaxation factor of the i-th class.
If the i-th class contains s_i training samples, the relaxation factor of each sample in it can be calculated as t_i / s_i. If each training class has only one sample, i.e., s_1 = s_2 = ... = s_c = 1, all the within-class covariance matrices will be zero matrices, and λ_1 = λ_2 = ... = λ_c = 0. In this case, the relaxation factor of each class will be 1/c, and so will that of its unique sample.
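Since the display defining the relaxation factor is not reproduced above, the following sketch only illustrates the behaviour described in the text: the factor grows with the within-class variance λ_i, and degenerates to 1/c when every within-class covariance matrix is zero. The normalization below is our assumption, not necessarily the exact formula of Definition 3.1:

```python
import numpy as np

def relaxation_vector(class_lambdas):
    """Hypothetical relaxation vector: class_lambdas[i] is the maximal
    eigenvalue of the i-th within-class covariance matrix C_i.  A class with
    larger within-class variance receives a larger factor; if every class has
    zero variance (e.g. one sample per class), all factors become 1/c."""
    lam = np.asarray(class_lambdas, dtype=float)
    c = lam.size
    if np.allclose(lam, 0.0):      # degenerate case: one sample per class
        return np.full(c, 1.0 / c)
    return lam / lam.sum()         # assumed normalization (sums to 1)
```

Under this assumed normalization, classes whose samples are all identical contribute a factor of zero, while well-scattered classes dominate the relaxed covariance matrix.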
3.2 The relaxed covariance matrix and eigenface subspace
Now, we define the relaxed covariance matrix of the training set.
Definition 3.2.
If there is no label information, or one is unwilling to use such information, the training set can be regarded as containing c classes with each having one sample, i.e., c = s and s_i = 1. In this case, the relaxed covariance matrix, G_t, defined by (3.3) is exactly the covariance matrix, G, defined by (2.3).
Definition 3.3.
Suppose that G_t is the relaxed covariance matrix of the training set. The generalized total scatter criterion can be defined as

(3.4) J(V) = tr(V^H G_t V),

where V is a quaternion matrix with orthonormal column vectors.
Note that J(V) is real and nonnegative since the quaternion matrix, G_t, is Hermitian and positive semidefinite. Our aim is to select the orthogonal projection axes, v_1, ..., v_k, to maximize the criterion, J(V), i.e.,

(3.5) {v_1, ..., v_k} = arg max J(V).
These optimal projection axes are in fact the orthonormal eigenvectors of G_t corresponding to the k largest eigenvalues.
Once the relaxed covariance matrix, G_t, is built, we can compute its k largest (positive) eigenvalues, μ_1 ≥ μ_2 ≥ ... ≥ μ_k, and their corresponding eigenvectors (called eigenfaces), denoted as v_1, v_2, ..., v_k. Let the optimal projection be

(3.6) V = [v_1, v_2, ..., v_k],

and the diagonal matrix be

(3.7) Λ = diag(μ_1, μ_2, ..., μ_k).

Then G_t V = V Λ and V^H V = I. Following that, for any quaternion matrix norm ||·||, a weighted norm based on Λ can be defined for classification.
3.3 Color face recognition
In this section, we apply the optimal projection, V, and the positive definite matrix, Λ, to color face recognition.
The projections of the training color face images onto the subspace spanned by the eigenfaces are

(3.8) Y_l = (F_l − Ψ) V, l = 1, 2, ..., s,

where Ψ is defined by (2.2). The columns of Y_l are called the principal component (vectors), and Y_l is called the feature matrix or feature image of the sample image, F_l. Each principal component of the sample-relaxed 2DCPCA is a quaternion vector. With the feature matrices in hand, we use a nearest neighbour classifier for color face recognition.
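The nearest neighbour step can be sketched as follows; feature matrices are stored as 4-tuples of real component arrays, and the plain quaternion Frobenius distance is used here (the weighted norm of Section 3.2 could be substituted). The names are illustrative assumptions:

```python
import numpy as np

def nearest_neighbour(train_feats, train_labels, test_feat):
    """Nearest-neighbour classification on quaternion feature matrices.
    Each feature is stored as a 4-tuple (A0, A1, A2, A3) of real arrays;
    the distance is the quaternion Frobenius norm of the difference,
    i.e. the root of the summed squares of all real components."""
    def dist(f, g):
        return np.sqrt(sum(np.sum((a - b) ** 2) for a, b in zip(f, g)))
    dists = [dist(f, test_feat) for f in train_feats]
    return train_labels[int(np.argmin(dists))]
```

The label of the training feature matrix closest to the test feature matrix is returned as the predicted identity.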
Now, we present the sample-relaxed two-dimensional color principal component analysis (SR2DCPCA) approach.
Model 2 (SR2DCPCA).
Compute the relaxed color image covariance matrix, G_t, of the training samples by (3.3). Compute the k largest eigenvalues of G_t and their corresponding eigenvectors (called eigenfaces), denoted as v_1, ..., v_k. Let the eigenface subspace be V = [v_1, ..., v_k] and the weighted matrix be Λ. Compute the projections of the training color face images onto the subspace by (3.8).

SR2DCPCA can preserve the color and spatial information of color face images, as 2DCPCA (proposed in [8]) does. Compared to 2DCPCA, SR2DCPCA has the additional cost of computing the relaxation vector, and generally provides better discrimination for classification. The aim of the projection of SR2DCPCA is to maximize the variance of the whole set of samples, while that of 2DCPCA is to maximize the variance of the training samples only. Note that the eigenvalue problems of Hermitian quaternion matrices in Model 2 can be solved by the fast structure-preserving algorithm proposed in [9].
Now we provide a toy example.
Example 3.1.
As shown in Figure 3.1, randomly generated points are equally separated into two classes. We choose 100 points from each class as training samples and the rest as testing samples. The principal component of the training points is computed by 2DCPCA and SR2DCPCA. In three random cases, the relaxation vectors are computed, and the resulting principal components are plotted as blue lines. The variances of the training set and of the whole set of points, under the projections of 2DCPCA and SR2DCPCA, are shown in the following table.
Case | Variance of training points (2DCPCA / SR2DCPCA) | Variance of the whole points (2DCPCA / SR2DCPCA)
3.4 Color image reconstruction
Now, we consider color image reconstruction based on SR2DCPCA. Without loss of generality, we consider a training sample, F_l, with the projected color image Y_l = (F_l − Ψ) V.
Theorem 3.1.
Suppose that W is the unitary complement of V such that [V, W] is a unitary quaternion matrix. Then the reconstruction of a training sample, F_l, is F̂_l = Y_l V^H + Ψ, and

(3.9) F_l − F̂_l = (F_l − Ψ) W W^H.
Proof.
We only need to prove (3.9). Since the quaternion matrix [V, W] is unitary,

V V^H + W W^H = I,

where I is an identity matrix. The projected image, Y_l, of the sample image, F_l, satisfies Y_l = (F_l − Ψ) V. Therefore, we have

F_l − F̂_l = (F_l − Ψ)(V V^H + W W^H) − Y_l V^H = (F_l − Ψ) W W^H.
∎
According to [8], the image reconstruction rate of F̂_l can be defined as

(3.10) ratio_l = ||F̂_l|| / ||F_l||.
Since V is generated from the eigenvectors corresponding to the largest eigenvalues of G_t, F̂_l is always a good approximation of the color image, F_l. If the number of chosen principal components equals n, V is a unitary matrix and F̂_l = F_l, which means the reconstruction rate equals one. From the above analysis, SR2DCPCA can conveniently reconstruct color face images from their projections, as 2DCPCA proposed in [8] does.
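Under the assumption that the reconstruction takes the form F̂ = Y V^H + Ψ as in Theorem 3.1, the step can be sketched with quaternion matrices stored as 4-tuples of real component arrays (the helper names are ours):

```python
import numpy as np

def qmatmul(P, Q):
    """Product of quaternion matrices stored as 4-tuples (A0, A1, A2, A3)
    of real arrays, following the Hamilton product rules."""
    a0, a1, a2, a3 = P
    b0, b1, b2, b3 = Q
    return (a0 @ b0 - a1 @ b1 - a2 @ b2 - a3 @ b3,
            a0 @ b1 + a1 @ b0 + a2 @ b3 - a3 @ b2,
            a0 @ b2 - a1 @ b3 + a2 @ b0 + a3 @ b1,
            a0 @ b3 + a1 @ b2 - a2 @ b1 + a3 @ b0)

def qconj_t(Q):
    """Conjugate transpose of a quaternion matrix."""
    a0, a1, a2, a3 = Q
    return (a0.T, -a1.T, -a2.T, -a3.T)

def reconstruct(Y, V, Psi):
    """Reconstruct a face image from its projection Y = (F - Psi) V as
    F_hat = Y V^H + Psi; exact when V is square unitary, an approximation
    when only the leading k eigenfaces are kept."""
    YVh = qmatmul(Y, qconj_t(V))
    return tuple(a + b for a, b in zip(YVh, Psi))
```

When V is the identity (all eigenfaces kept), the reconstruction returns the original image exactly, mirroring the unit reconstruction rate discussed above.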
4 Variance updating
In this section we analyze the contribution of the training samples from each class to the variance of the projected samples.
For the th class with training samples, let
According to the definition of the relaxed covariance matrix (3.3), we can rewrite G_t as
(4.1) 
Denote the eigenvalues of an n × n Hermitian quaternion matrix, A, in descending order as λ_1(A) ≥ λ_2(A) ≥ ... ≥ λ_n(A).
Lemma 4.1.
Suppose that , where is Hermitian, and . Then
Proof.
The properties of the eigenvalues of Hermitian (complex) matrices [15, Theorem 3.8, page 42] still hold for Hermitian quaternion matrices. Suppose that A and B are n × n Hermitian matrices; then

(4.2) λ_{i+j−1}(A + B) ≤ λ_i(A) + λ_j(B)

for i + j − 1 ≤ n. Consequently,

(4.3) λ_1(A + B) ≤ λ_1(A) + λ_1(B).
Since the matrix under consideration is Hermitian and positive semidefinite, its maximal singular value is exactly its maximal eigenvalue, which equals the product of the maximal singular values of its factors. From equation (4.3), we obtain the claim. ∎
Since the Hermitian quaternion matrices involved are positive semidefinite, we obtain that
Let the variance of projections of all the training samples on be
The following theorem characterizes the contribution of training samples from one class to the variance of all the projected training samples.
Theorem 4.2.
With the above notations and a fixed relaxation vector , the variance of projections of all the training samples satisfies that
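The eigenvalue subadditivity underlying Lemma 4.1 can be checked numerically in the complex Hermitian case; the quaternion case reduces to it via the complex adjoint representation. This snippet is illustrative only:

```python
import numpy as np

# Check lambda_1(A + B) <= lambda_1(A) + lambda_1(B) for random Hermitian A, B,
# the special case (4.3) of the eigenvalue inequality used in Lemma 4.1.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
Y = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A, B = X + X.conj().T, Y + Y.conj().T          # Hermitian matrices
lmax = lambda M: np.linalg.eigvalsh(M)[-1]     # eigvalsh: ascending eigenvalues
assert lmax(A + B) <= lmax(A) + lmax(B) + 1e-10
```

The same bound applied to the class-wise terms of the relaxed covariance matrix gives the variance estimate of Theorem 4.2.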
5 Experiments
In this section we compare the efficiency of five approaches to color face recognition:

PCA: the Principal Component Analysis (with color images converted to grayscale),

CPCA: the Color Principal Component Analysis proposed in [22],

2DPCA: the Two-Dimensional Principal Component Analysis proposed in [23] (with color images converted to grayscale),

2DCPCA: the Two-Dimensional Color Principal Component Analysis proposed in [8],

SR2DCPCA: the Sample-Relaxed Two-Dimensional Color Principal Component Analysis proposed in Section 3.
All the numerical experiments are performed with MATLAB R2016 on a personal computer with an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (dual processor) and 32GB RAM.
Example 5.1.
In this experiment, we compare SR2DCPCA with PCA, CPCA, 2DPCA and 2DCPCA using the well-known Georgia Tech face database [26]. The database contains faces in various poses with various expressions on cluttered backgrounds. All the images are manually cropped and then resized. Some cropped images are shown in Fig. 5.1.
The first ( or ) images of each individual are chosen as the training set and the remaining as the testing set. The numbers of chosen eigenfaces range from to . The face recognition rates of the five PCA-like methods are shown in Fig. 5.2. The top and bottom figures show the results when the first and face images per individual, respectively, are chosen for training.
The maximal recognition rate (MR) and the average CPU time for recognizing one face image are listed in Table 1. We can see that SR2DCPCA reaches the highest face recognition rate, and costs less CPU time than CPCA.
       CPCA          2DPCA         2DCPCA        SR2DCPCA
       MR    CPU     MR    CPU     MR    CPU     MR    CPU
       --    7.58    --    0.02    --    0.09    --    0.85
       --   23.80    --    0.06    --    0.30    --    2.16
Example 5.2.
In this experiment, we compare three twodimensional approaches: 2DPCA, 2DCPCA and SR2DCPCA on face recognition and image reconstruction, by using the color Face Recognition Technology (FERET) database [27]. The database (version 2, DVD2, thumbnails) contains 269 persons, 3528 color face images, and each person has various numbers of face images with various backgrounds. The minimal number of face images for one person is 6, and the maximal one is 44. The size of each cropped color face image is pixels, and some samples are shown in Fig. 5.3.
Taking the first 6 images of each individual as an example, we randomly choose image(s) as the training set and the remaining as the testing set. This process is repeated 5 times, and the average value is reported. For a fixed , we consider ten cases in which the number of chosen eigenfaces increases from to . The average face recognition rate over these ten cases is shown in the top panel of Fig. 5.4. When , the face recognition rate with the number of eigenfaces increasing from 1 to 10 is shown in the bottom panel of Fig. 5.4.
With , we also employ 2DCPCA and SR2DCPCA to reconstruct the color face images () in the training set from their projections (). Fig. 5.5 shows the reconstructions of the first four persons with 2, 10, 20, 38 and 128 eigenfaces: the top and bottom pictures are reconstructed by 2DCPCA and SR2DCPCA, respectively. In Fig. 5.6, we plot the ratios of image reconstruction by 2DCPCA (top) and SR2DCPCA (bottom) with the number of eigenfaces increasing from 1 to 128. The difference in the ratio between the two methods is shown in Fig. 5.7 (the ratio of SR2DCPCA minus that of 2DCPCA). These results indicate that both 2DCPCA and SR2DCPCA can conveniently reconstruct color face images from projections, and can reconstruct the original color face images exactly when all the eigenvectors are chosen to span the eigenface subspace.
6 Conclusion
In this paper, SR2DCPCA is presented for color face recognition and image reconstruction based on quaternion models. This novel approach is the first to apply the label information of training samples within the 2DCPCA framework, and it emphasizes the role of training color face images with the same label that have a large variance. The numerical experiments indicate that SR2DCPCA has a higher face recognition rate than state-of-the-art methods and is effective in image reconstruction.
References

[1] J. Bai, Y. Wu, J.M. Zhang, F.Q. Chen, Subset based deep learning for RGB-D object recognition, Neurocomputing 165 (2015) 280-292.
[2] N.L. Bihan, S.J. Sangwine, Quaternion principal component analysis of color images, in: Proc. 2003 Tenth IEEE Int. Conf. Image Processing (ICIP 2003), vol. 1, 2003, pp. 809-812.
[3] B.J. Chen, J.H. Yang, B. Jeon, X.P. Zhang, Kernel quaternion principal component analysis and its application in RGB-D object recognition, Neurocomputing 266(29) (2017) 293-303.
[4] B.J. Chen, H.Z. Shu, G. Coatrieux, G. Chen, X.M. Sun, J.L. Coatrieux, Color image analysis by quaternion-type moments, J. Math. Imaging Vision 51(1) (2015) 124-144.
[5] Y. Cheng, X. Zhao, K. Huang, T.N. Tan, Semi-supervised learning and feature evaluation for RGB-D object recognition, Comput. Vision Image Understanding 139 (2015) 149-160.
[6] T.A. Ell, L.B. Nicolas, S.J. Sangwine, Quaternion Fourier Transforms for Signal and Image Processing, John Wiley & Sons, 2014.
[7] K.M. Hosny, M.M. Darwish, Highly accurate and numerically stable higher order QPCET moments for color image representation, Pattern Recogn. Lett. 97 (2017) 29-36.
[8] Z.G. Jia, S.T. Ling, M.X. Zhao, Color two-dimensional principal component analysis for face recognition based on quaternion model, LNCS, vol. 10361, 2017, pp. 177-189.
[9] Z.G. Jia, M. Wei, S.T. Ling, A new structure-preserving method for quaternion Hermitian eigenvalue problems, J. Comput. Appl. Math. 239 (2013) 12-24.
[10] M. Kirby, L. Sirovich, Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Trans. Pattern Anal. Mach. Intell. 12(1) (1990) 103-108.
[11] S.C. Pei, C.M. Cheng, Quaternion matrix singular value decomposition and its applications for color image processing, in: Proc. 2003 Int. Conf. Image Processing (ICIP 2003), vol. 1, 2003, pp. 805-808.
[12] A. Pentland, Looking at people: sensing for ubiquitous and wearable computing, IEEE Trans. Pattern Anal. Mach. Intell. 22(1) (2000) 107-119.
[13] L. Shi, B. Funt, Quaternion color texture segmentation, Comput. Visual Image Understanding 107(1) (2007) 88-96.
[14] L. Sirovich, M. Kirby, Low-dimensional procedure for the characterization of human faces, J. Optical Soc. Am. 4 (1987) 519-524.
[15] G.W. Stewart, Matrix Algorithms Volume II: Eigensystems, SIAM, 2001.
[16] Y.F. Sun, S.Y. Chen, B.C. Yin, Color face recognition based on quaternion matrix representation, Pattern Recognit. Lett. 32(4) (2011) 597-605.
[17] J.H. Tang, L. Jin, Z.C. Li, S.H. Gao, RGB-D object recognition via incorporating latent data structure and prior knowledge, IEEE Trans. Multimedia 17(11) (2015) 1899-1908.
[18] L. Torres, J.Y. Reutter, L. Lorente, The importance of the color information in face recognition, IEEE Int. Conf. Image Process. 3 (1999) 627-631.
[19] M. Turk, A. Pentland, Eigenfaces for recognition, J. Cognitive Neurosci. 3(1) (1991) 71-76.
[20] A.R. Wang, J.W. Lu, J.F. Cai, T.J. Cham, G. Wang, Large-margin multi-modal deep learning for RGB-D object recognition, IEEE Trans. Multimedia 17(11) (2015) 1887-1898.
[21] H. Xue, Y. Liu, D. Cai, X. He, Tracking people in RGBD videos using deep learning and motion clues, Neurocomputing 204 (2016) 70-76.
[22] X. Xiang, J. Yang, Q. Chen, Color face recognition by PCA-like approach, Neurocomputing 152 (2015) 231-235.
[23] J. Yang, D. Zhang, A.F. Frangi, J.Y. Yang, Two-dimensional PCA: A new approach to appearance-based face representation and recognition, IEEE Trans. Pattern Anal. Mach. Intell. 26(1) (2004) 131-137.
[24] R. Zeng, J.S. Wu, Z.H. Shao, Y. Chen, B.J. Chen, L. Senhadji, H.Z. Shu, Color image classification via quaternion principal component analysis network, Neurocomputing 216 (2016) 416-428.
[25] Z. Zhang, M.B. Zhao, B. Li, P. Tang, F.Z. Li, Simple yet effective color principal and discriminant feature extraction for representing and recognizing color images, Neurocomputing 149 (2015) 1058-1073.
[26] The Georgia Tech face database. http://www.anefian.com/research/facereco.htm.
[27] The color FERET database: https://www.nist.gov/itl/iad/image-group/color-feret-database.