Robust Face Recognition via Adaptive Sparse Representation

04/18/2014 · by Jing Wang, et al. · National University of Singapore and Hefei University of Technology

Sparse Representation (or coding) based Classification (SRC) has gained great success in face recognition in recent years. However, SRC emphasizes sparsity too much and overlooks the correlation information, which has been demonstrated to be critical in real-world face recognition problems. Conversely, some work considers the correlation but overlooks the discriminative ability of sparsity. Different from these existing techniques, in this paper we propose a framework called Adaptive Sparse Representation based Classification (ASRC) in which sparsity and correlation are jointly considered. Specifically, when the samples are of low correlation, ASRC selects the most discriminative samples for representation, like SRC; when the training samples are highly correlated, ASRC selects most of the correlated and discriminative samples for representation, rather than choosing some related samples at random. In general, the representation model is adaptive to the correlation structure and benefits from both the ℓ_1-norm and the ℓ_2-norm. Extensive experiments conducted on publicly available data sets verify the effectiveness and robustness of the proposed algorithm in comparison with state-of-the-art methods.


I Introduction

Face recognition has drawn intensive interest in pattern recognition for decades due to its wide real-world applications, such as video surveillance, person tracking, and access control [1][2][3]. However, the images taken by devices in unconstrained environments are usually of limited quality. Various facial expressions, poses, and illumination conditions affect the quality of face images, causing occlusion, translation and scale errors, etc. in the normalized face images, as shown in Figure 1. Furthermore, handling these problems in a high-dimensional feature space or on an under-sampled dataset makes the task of face recognition even more challenging. Therefore, although many recognition methods have been proposed and have obtained great success during the past few years, robust face recognition methods with higher recognition performance are still desired.

One major type of face recognition method adopts the holistic model, which identifies the label of the image by a global representation [4]. These methods are also called appearance-based methods. The other type utilizes the component-based model, which first divides the face image into patches and then extracts features from each patch [5]. The classification decision is then made based on the similarity between patches of different images. These methods usually adopt manifold learning [6] and are suitable for under-sampled problems [7]. Some other researchers address the problem by feature extraction. Besides classical features such as Eigenfaces [8] and Gabor [9], some robust features have also been proposed. For instance, Gradientfaces is an illumination-insensitive measure that is robust to various illuminations [10]. Subspace learning from gradient orientations has been demonstrated to be robust to different noises for object recognition [11]. Chan et al. [12] proposed a novel descriptor based on Local Phase Quantization (LPQ) which is insensitive to blur, and extended it to a multiscale framework combined with the Multiscale Local Binary Pattern to increase the insensitivity to illumination. Based on the feature space, existing classical classifiers can be used, such as NN (Nearest Neighbor), Support Vector Machines (SVM) [13] and boosting [14]. NN belongs to the Nearest Feature based Classifiers (NFCs), which are based on distance measures and have attracted much attention. However, the performance of NFCs may degrade sharply when the images of different classes are quite similar.

To overcome these limitations, sparse representation is employed in face recognition. Sparse Representation based Classification (SRC) [15] seeks a representation of the query image in terms of an over-complete dictionary, and then performs recognition by checking which class yields the least representation error. Thus SRC can be considered a generalization of NN and NFS [16], but it is more robust to occlusions and variations: [15] reported that SRC maintains a high recognition rate even when pixels are randomly corrupted. The striking performance of SRC has boosted research on sparse representation based face recognition and opened a promising research direction. For example, Elhamifar and Vidal [17] proposed a structured sparse representation, while Gao et al. [18] introduced a kernel sparse representation. Cheng et al. [19] introduced the ℓ_1-graph for image analysis, and Yang et al. [20] integrated sparse coding with linear spatial pyramid matching for image classification. Deng et al. [21] introduced an intraclass variant dictionary into SRC for undersampled face recognition, and Ortiz [2] proposed mean sequence sparse representation based classification for face recognition in movie trailers. Tzimiropoulos et al. [22] proposed sparse representation based on image gradient orientations for visual recognition and tracking. Unfortunately, SRC places too much emphasis on sparsity and overlooks the correlation within the dictionary, which leads to information loss. In particular, SRC produces unstable results when the training samples are highly correlated. Several works have demonstrated the importance of the correlation structure [23][24][25][26]. Specifically, Zhang [27] proposed Collaborative Representation based Classification (CRC) in consideration of the collaborative representation. CRC employs the ℓ_2-norm to obtain a denser coding vector. However, the ℓ_2-norm does not perform sample selection: it tends to include training samples from various classes, which can disturb the recognition results.

To address the above problems, we propose an Adaptive Sparse Representation based Classification (ASRC) method inspired by work on the trace norm [28]. The trace norm captures the correlations among multiple variables and has been employed in many applications. Yang et al. [29] used the trace norm to find information shared among multiple tasks for feature selection. In [30], concept adaptation was employed to assist event detection. Similarly, ASRC employs the trace norm on the dictionary and the representation vector to form a correlation adapter, which considers both correlation and sparsity. Our model is built to find a representation vector that minimizes the correlation regularizer. We also give a theoretical analysis to prove that the regularizer balances the ℓ_1-norm and the ℓ_2-norm with adaptive consideration of the data structure. Thus our scheme can obtain a more accurate representation with the optimal discriminative and correlated training samples. To sum up, our major contributions are as follows:

  • In this paper, we propose an adaptive sparse linear model for face recognition with joint consideration of correlation and sparsity. Based on the model, a face recognition method ASRC is presented and it is adaptive to the exact structure of the dictionary.

    When the training samples are barely correlated, ASRC acts like SRC. When the training samples are highly correlated, ASRC is equivalent to CRC. In general, the sparsity of the representation vector obtained by ASRC is between the ones obtained by SRC and CRC.

  • We perform extensive experiments on three publicly available face data sets and fifteen UCI data sets. The experimental results demonstrate that ASRC is superior to existing state-of-the-art face recognition methods, such as NN, NFS, SRC, CRC and LSRC.

Fig. 1: Face image samples. (a) Face images with various expressions from the Yale database. (b) Face images with different poses from the ORL database. (c) Face images with different illuminations from the AR database.

The remainder of this paper is organized as follows. Section II reviews some related work, including nearest feature classifiers and sparse coding based algorithms. Section III introduces our algorithm and provides the solution for optimization. Extensive experimental results are presented and analyzed in Section IV. Finally, we conclude this paper in Section V.

II Related Work

In this section, we briefly introduce some works on the representation of images. We review the most popular face recognition methods, including Nearest Feature based Classifiers (NFCs) and sparse coding based methods. We mainly discuss the original SRC approach. Before introducing these methods, we first describe the problem and explain the notations used in this work.

Problem Description. Denote the set of training samples of the i-th class as A_i = [a_{i,1}, a_{i,2}, ..., a_{i,n_i}] ∈ R^{m×n_i}, where each column is a training sample, m is the dimension of the feature space and n_i is the total number of samples in A_i. Suppose the dictionary has k classes of samples, denoted as A = [A_1, A_2, ..., A_k] ∈ R^{m×n}, with n = n_1 + n_2 + ... + n_k. Given a query sample y ∈ R^m, the pattern recognition task is to determine which class y belongs to.

Notations. Scalars are denoted by lowercase letters, e.g. a. Vectors are denoted by boldface lowercase letters, e.g. x. Matrices are denoted by boldface capital letters, e.g. A. In particular, I denotes the identity matrix. For a vector x, diag(x) indicates converting the vector into a diagonal matrix in which the i-th diagonal entry is x_i. The various vector norms we use in this paper are as follows:

  • ||x||_0 is the ℓ_0-norm, i.e., the number of non-zero entries in the vector x.

  • ||x||_1 is the ℓ_1-norm, the sum of the absolute values of all the entries, defined as ||x||_1 = Σ_i |x_i|.

  • ||x||_2 is the ℓ_2-norm, defined as ||x||_2 = (Σ_i x_i^2)^(1/2).

  • ||x||_∞ is the infinity norm, defined as ||x||_∞ = max_i |x_i|.

For a matrix A, diag(A) indicates converting the matrix into a vector in which the i-th entry is A_ii, i.e., the diagonal entries of A. The Frobenius norm of A is defined as ||A||_F = (Σ_{i,j} A_ij^2)^(1/2), and ||A||_* is the trace norm, i.e., the sum of all the singular values of the matrix A.
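As a concrete check of these definitions, a short NumPy sketch follows; the vector and matrix values are made up purely for illustration:

```python
import numpy as np

# Illustrative vector and matrix; the values are made up for this check.
x = np.array([3.0, 0.0, -4.0, 0.0])
A = np.array([[2.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

l0 = np.count_nonzero(x)                 # ell_0: number of non-zero entries
l1 = np.sum(np.abs(x))                   # ell_1: sum of absolute values
l2 = np.sqrt(np.sum(x ** 2))             # ell_2: Euclidean length
linf = np.max(np.abs(x))                 # infinity norm: largest magnitude
trace = np.linalg.norm(A, 'nuc')         # trace norm: sum of singular values

print(l0, l1, l2, linf, trace)           # 2 7.0 5.0 4.0 3.0
```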

Generally speaking, Nearest Feature based Classifiers aim to find a representation of the query image and classify it to the best representer. According to the mechanism of representing the query image, NFCs include Nearest Neighbor (NN), Nearest Feature Line (NFL) [31], Nearest Feature Plane (NFP) [32] and Nearest Feature Subspace (NFS). More specifically, NN is the simplest one, with no parameters, classifying the query image to its nearest neighbor. As NN adopts only one sample to represent the query image, its performance is easily affected by noise. Li and Lu proposed the NFL classifier, which forms a line through every two training samples of the same class and classifies the query image to its nearest line. Chien and Wu proposed the NFP classifier, which uses at least three training samples of the same class to form a plane rather than a line to determine the label of the query image. NN, NFL and NFP all use a subset of the training samples with the same label to represent the query image, while NFS represents the query image by all the training samples of the same class. In general, the more samples are used for representation, the more stable a method is supposed to be; hence, NFS is assumed to perform better than the other NFCs. However, NFCs are not robust in real-world face recognition applications because of various occlusions.

To overcome these limitations, Wright et al. introduced the Sparse Representation based Classification (SRC) method, which represents the query image over the over-complete dictionary with sparse coefficients:

min_x ||x||_0   s.t.   y = Ax        (1)

The above ℓ_0-norm minimization problem is non-convex and actually NP-hard. It has been proved in [33][34] that problem (1) is equivalent to the following ℓ_1-minimization problem under certain conditions:

min_x ||x||_1   s.t.   y = Ax        (2)

To deal with noise, the ℓ_1-minimization problem is extended to the following formulation:

min_x ||x||_1   s.t.   ||y − Ax||_2 ≤ ε        (3)

where ε is a given tolerance. It is generally regarded that SRC is an extension of NN and NFS; the difference is that the coding of y is performed over the over-complete dictionary A instead of a subset of it. With a sufficient number of training samples, SRC with random projection-based features can outperform NFCs on conventional features. The sparse model is also more robust and effective for object recognition when objects are corrupted by outliers.
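The ℓ_1 coding step can be sketched with a simple iterative soft-thresholding (ISTA) solver for an unconstrained Lagrangian form of problem (3). This is a minimal illustration under made-up data, not the solver used in the paper:

```python
import numpy as np

def ista_l1(A, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||y - A x||_2^2 + lam*||x||_1 by iterative
    soft-thresholding -- an unconstrained proxy for problem (3)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Toy dictionary: y is an exact sparse combination of two columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns, as in SRC
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, 0.5
y = A @ x_true

x_hat = ista_l1(A, y)
print(np.argmax(np.abs(x_hat)))            # dominant coefficient at index 2
```

The recovered vector concentrates its mass on the two columns that actually generated y, which is the sample-selection behavior SRC relies on.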

However, SRC rests on the assumption that the query image and the training images are well aligned. It has been indicated that, with sufficient training samples that cover nearly all possible variations, y can be correctly represented and robust to variations [35]. Thus, SRC may fail when the query image is misaligned or the dictionary has a small number of samples. Meanwhile, due to the sparsity, SRC may suffer from instability when samples are highly correlated: if the subjects correlated to the query image look similar, SRC tends to select one of them at random for representation rather than selecting them all. This indicates that SRC fails to capture the correlation structure of the dictionary, which is critical in face recognition [36].

Some works have focused on the above problem and provided several solutions. Wang et al. [23] proposed locality constraints in spatial sparse representation. Li et al. [37] maximized the intra-individual correlations to address pose differences in face recognition. Zhang et al. imposed the ℓ_2-norm constraint on the coefficients and proposed CRC (Collaborative Representation based Classification with regularized least squares). In [38], Zhang pointed out that the success of SRC comes from the collaborative representation of y over all training samples. The ℓ_2-norm is supposed to take advantage of data correlation [39]. Thus, in CRC the query image is represented over the over-complete dictionary with the ℓ_2-norm rather than the ℓ_1-norm constraining the coding vector. The objective function is defined as:

min_x ||y − Ax||_2^2 + λ||x||_2^2        (4)

The ℓ_2-minimization guarantees that CRC gets a stable result with a much denser representation vector. However, CRC does not perform sample selection for representation. It may not perform well when the training data are not highly correlated.
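The coding step of (4) has a well-known ridge-regression closed form; a minimal sketch with made-up matrix sizes illustrates both the solution and the density of the resulting coefficients:

```python
import numpy as np

# Closed form of the ell_2-regularized coding step (4):
# x = (A^T A + lam*I)^{-1} A^T y.  Sizes are illustrative only.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
y = rng.standard_normal(30)
lam = 0.5

x_crc = np.linalg.solve(A.T @ A + lam * np.eye(12), A.T @ y)

# The ell_2 solution is dense: every training sample receives a nonzero
# weight, i.e. no sample selection is performed.
print(np.count_nonzero(np.abs(x_crc) > 1e-10))   # 12
```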

To overcome the drawbacks of the above works, we propose a novel sparse representation method called Adaptive Sparse Representation based Classification (ASRC). Rather than adopting the ℓ_1-norm or ℓ_2-norm alone, ASRC imposes the trace norm on the representation vector combined with the structure of the data matrix. We also impose the ℓ_1-norm on the noise part, which makes our method robust to occlusions in the images. The details of ASRC are presented in the following section.

III Adaptive Sparse Representation based Classification

In this section, we detail our adaptive representation model and provide the solution to the optimization. Then we describe our face recognition algorithm ASRC. By analyzing the properties of the ASRC, we show how to improve the performances of SRC and CRC with adaptive incorporation of both correlation and sparsity.

III-A Our Model

Variations in facial expressions or views, as well as occlusions in human face images, make it challenging to build a robust representation model for the recognition of the query image y. Sparsity is effective in sample selection for representation, and the correlation structure helps to find the relationship between the query image and the training samples. Therefore, to benefit from both sparsity and correlation, we consider the structure of A as well as the sparsity of the coding coefficient x through a correlation adapter, denoted as A diag(x). The main difference between the trace norm and the existing norms is the inclusion of the data matrix A. To guarantee the discriminative nature of the training samples selected for representation, we impose the trace norm on the adapter, inspired by [28]. Then our linear representation model is:

min_x ||A diag(x)||_*   s.t.   y = Ax        (5)

where ||A diag(x)||_* is the correlation regularizer. Here we discuss the adaptive reformulation of our model according to the exact structure of the dictionary. In the case that the subjects are distinct from each other, the columns of the dictionary matrix (normalized to unit length) are orthogonal, i.e., A^T A = I. Then we get the decomposition:

||A diag(x)||_* = Σ_i ||x_i a_i||_2 = Σ_i |x_i| = ||x||_1        (6)

Thus, the correlation regularizer is equal to the ℓ_1-norm, and problem (5) is the same as the sparse coding model:

min_x ||x||_1   s.t.   y = Ax        (7)
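The orthogonal case can be verified numerically: when the columns of the dictionary are orthonormal, the singular values of A diag(x) are exactly |x_i|, so the trace norm collapses to the ℓ_1-norm. A small sketch with arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 5)))   # orthonormal columns
x = np.array([1.0, -2.0, 0.0, 0.5, 3.0])

# For orthonormal columns, ||Q diag(x)||_* equals ||x||_1.
nuc = np.linalg.norm(Q @ np.diag(x), 'nuc')
print(nuc, np.sum(np.abs(x)))                      # both 6.5
```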

In the case that the images of different subjects all look similar, that is, every column of A equals a vector a so that A = a·1^T (where 1 is a vector of size n with all elements equal to one), we can reformulate the regularizer as:

||A diag(x)||_* = ||a x^T||_* = ||a||_2 ||x||_2        (8)

and (5) can be formulated as:

min_x ||x||_2   s.t.   y = Ax        (9)

We can see that (9) is essentially equal to CRC when dealing with highly correlated images.

Generally, the images in the dictionary are neither fully independent of each other nor all alike. Our model can capture the correlation structure of the training images. In other words, the sparsity of the coefficient x obtained by Eqn. (5) efficiently balances the ℓ_1-norm and the ℓ_2-norm, as expressed by:

||x||_2 ≤ ||A diag(x)||_* ≤ ||x||_1        (10)

The above inequality indicates that the x obtained with the regularizer ||A diag(x)||_* is sparser than the one obtained by ℓ_2-minimization, but not as sparse as the one obtained by ℓ_1-minimization. This means that our model can benefit from both ℓ_1-minimization and ℓ_2-minimization according to the correlation in the dictionary.
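This sandwich relation is easy to confirm numerically for a dictionary with unit-norm columns (the setting of Algorithm 2); the data here are random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8))
A /= np.linalg.norm(A, axis=0)      # unit ell_2-norm columns
x = rng.standard_normal(8)

# ||x||_2 <= ||A diag(x)||_* <= ||x||_1 for unit-norm columns.
reg = np.linalg.norm(A @ np.diag(x), 'nuc')
l2, l1 = np.linalg.norm(x), np.sum(np.abs(x))
print(l2 <= reg + 1e-9 and reg <= l1 + 1e-9)   # True
```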

Problem (5) is designed for the case where the images have no noise. However, in real-world applications, the pixels of the images may be contaminated with occlusion and corruption. If we have the prior knowledge that the noise follows a Gaussian distribution, the original objective function can be reformulated as:

min_x ||A diag(x)||_*   s.t.   ||y − Ax||_2 ≤ ε        (11)

If the occlusion or corruption follows a Laplacian distribution, we consider the following problem instead:

min_x ||A diag(x)||_*   s.t.   ||y − Ax||_1 ≤ ε        (12)

Problem (12) is more robust to occlusion, corruption and variations than problem (11). Problem (12) is equivalent to the following problem:

min_{x,e} ||A diag(x)||_* + λ||e||_1   s.t.   y = Ax + e        (13)

where λ is the regularization parameter. We show how to solve (13) in the next subsection.

Input: data matrix A, query image y, parameter λ.
Initialize: x = 0, e = 0, J = 0, Y_1 = 0, Y_2 = 0, μ > 0, μ_max > μ, ρ > 1, and tolerance ε > 0.
while not converged do

  1. fix the others and update J by solving problem (16) via the SVT operator;

  2. fix the others and update x by the closed-form solution (18);

  3. fix the others and update e by applying the shrinkage operator to problem (19);

  4. update the multipliers: Y_1 = Y_1 + μ(y − Ax − e), Y_2 = Y_2 + μ(A diag(x) − J);

  5. update the parameter μ by μ = min(ρμ, μ_max);

  6. check the convergence conditions: ||y − Ax − e||_∞ < ε and ||A diag(x) − J||_∞ < ε.

end while

Algorithm 1 Solving Problem (13) by ADM

III-B Optimization

Inspired by the optimization method used in [40][41], we adopt Alternating Direction Method (ADM) [42] to solve problem (13). We first convert it to the following equivalent problem:

min_{J,x,e} ||J||_* + λ||e||_1   s.t.   y = Ax + e,   J = A diag(x)        (14)

Problem (14) can be solved by solving the following augmented Lagrange multiplier problem:

L(J, x, e, Y_1, Y_2) = ||J||_* + λ||e||_1 + ⟨Y_1, y − Ax − e⟩ + ⟨Y_2, A diag(x) − J⟩ + (μ/2)(||y − Ax − e||_2^2 + ||A diag(x) − J||_F^2)        (15)

where Y_1 and Y_2 are Lagrange multipliers and μ > 0 is a penalty parameter. This problem is unconstrained and can be minimized with respect to J, x and e, respectively, by fixing the other variables, and then updating Y_1 and Y_2.

Updating J when x and e are fixed in Step 1 is equivalent to solving the following problem:

min_J (1/μ)||J||_* + (1/2)||J − (A diag(x) + Y_2/μ)||_F^2        (16)

The above problem has a closed form solution by the Singular Value Thresholding (SVT) operator [43].

Updating x when fixing J and e in Step 2 is equivalent to solving the following problem:

min_x (1/2)||y − Ax − e + Y_1/μ||_2^2 + (1/2)||A diag(x) − J + Y_2/μ||_F^2        (17)

The above problem has the closed-form solution:

x = (A^T A + D)^{-1} (A^T (y − e + Y_1/μ) + diag(A^T (J − Y_2/μ)))        (18)

where D is the diagonal matrix whose i-th diagonal entry is the squared ℓ_2-norm of the i-th column of A.

Updating e when fixing J and x in Step 3 is equivalent to solving the following problem:

min_e (λ/μ)||e||_1 + (1/2)||e − (y − Ax + Y_1/μ)||_2^2        (19)

The solution to the above problem can be obtained by the soft-thresholding (shrinkage) operator [44]. The whole algorithm for solving problem (13) is outlined in Algorithm 1.
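Both operators used in Steps 1 and 3 have simple implementations. The following sketch is illustrative (function names are ours, not the paper's):

```python
import numpy as np

def shrink(M, tau):
    """Soft-thresholding (shrinkage): move every entry toward 0 by tau."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular Value Thresholding: apply shrink() to the singular values,
    which solves min_J tau*||J||_* + 0.5*||J - M||_F^2."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

# On a diagonal matrix, SVT just shrinks the diagonal.
M = np.diag([3.0, 1.0])
print(svt(M, 2.0))        # approximately diag(1, 0)
```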

III-C Adaptive Sparse Representation based Classification

Input: data matrix A, the query image y.
Output: the identity of the query image y.

  1. Normalize each column of A to have unit ℓ_2-norm.

  2. Code y over A by solving problem (13) to obtain the coefficient vector x̂ and the noise term ê.

  3. Compute the residuals r_i(y) = ||y − A_i x̂_i − ê||_2, where x̂_i is the coding coefficient vector associated with class i.

  4. Predict the identity of y by identity(y) = arg min_i r_i(y).

Algorithm 2 Adaptive Sparse Representation based Classification

Based on the robust model for coding query images defined in problem (13), we present our Adaptive Sparse Representation based Classification (ASRC) method for face recognition. The scheme of the algorithm is described in Algorithm 2. First, we normalize each column of the dictionary matrix A in Step 1. Given the query image y, we code it over the whole dictionary with the correlation regularizer in Step 2; the optimal coefficient can be obtained by the ADM procedure described in Algorithm 1. As the correlation regularizer is adaptive to the structure of the dictionary, the sparsity of the coefficient balances the coefficients obtained by ℓ_1-minimization and ℓ_2-minimization, so its nonzero entries concentrate on the most discriminative training samples. In Step 3, we calculate the residuals for the different subjects. The subject to which y belongs will give a better representation, leading to a smaller representation error. The query image is then assigned to the class with the least residual in Step 4. The dominant computational cost of the algorithm is the coding step solved by Algorithm 1.
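Steps 3 and 4 of Algorithm 2 reduce to a per-class residual comparison. A minimal sketch with a toy two-class dictionary (the coding vector x is taken as given here; the paper obtains it from Algorithm 1):

```python
import numpy as np

def classify_by_residual(A, labels, y, x):
    """Keep only the coefficients of each class in turn and assign y to
    the class whose samples reconstruct it with the smallest error."""
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)     # class-c part of x
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# Toy dictionary: columns 0-1 belong to class 0, columns 2-3 to class 1.
A = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
A /= np.linalg.norm(A, axis=0)
labels = np.array([0, 0, 1, 1])
x = np.array([0.8, 0.2, 0.0, 0.0])             # mass on class 0
y = A @ x
print(classify_by_residual(A, labels, y, x))   # 0
```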

Based on the above analysis, we can observe that our algorithm has some attractive properties:

  • First, our algorithm can obtain an accurate representation of the query image according to the structure of the dictionary, which is critical for representation based methods in face recognition.

  • Second, ASRC can benefit from both the discriminative nature of the ℓ_1-norm and the collaborative representation of the ℓ_2-norm, which guarantees good recognition performance in most cases.

  • Third, due to the adaptive correlation regularizer, the information loss caused by misalignment, pixel corruption or insufficient training samples can be compensated by the correlation of the training samples. Even if the dictionary has limited training samples per class, we can still obtain an accurate representation of the query image compared with state-of-the-art methods, such as SRC and CRC.

IV Experiments

In this section, we first investigate the recognition performance and robustness of our proposed ASRC method for face recognition. We then design experiments to test the generalization ability of ASRC on general pattern recognition problems. Two sets of databases are used in the experiments, one from real-world face image databases, including Yale [45], ORL [46] and AR [47], and the other sampled from the UCI repository [48]. Table I summarizes these data sets in terms of the numbers of classes, dimensions and samples. Some exemplar face images are shown in Figure 2. For each face image database, we choose t images of each subject for training and the rest for test. On the UCI data sets, we adopt 10-fold cross validation.

In the following subsections, we test the recognition performance and robustness of our algorithm by comparing it with methods such as SVM (with a linear kernel) [13], SRC, NN, NFS, CRC, and LSRC (Locality-constrained Sparse Representation based Classifier) [23]. We use the t-test to assess the statistical significance of the results, with the significance level set to 0.05. We adopt PCA for feature extraction on the face image data sets.
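The PCA (Eigenfaces-style) feature extraction used on the face databases can be sketched as follows; function and variable names are ours, and the data are random placeholders:

```python
import numpy as np

def pca_features(X_train, X_test, dim):
    """Project samples (one per row) onto the top `dim` principal
    components of the training set."""
    mean = X_train.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    W = Vt[:dim].T
    return (X_train - mean) @ W, (X_test - mean) @ W

rng = np.random.default_rng(3)
Xtr = rng.standard_normal((50, 100))   # 50 training images, 100 pixels
Xte = rng.standard_normal((10, 100))   # 10 test images
Ftr, Fte = pca_features(Xtr, Xte, 20)
print(Ftr.shape, Fte.shape)            # (50, 20) (10, 20)
```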

IV-A Face Recognition Without Occlusion

Face Image Data Sets
Data Set class dim. instance
Yale 15 1024 165
ORL 40 1024 400
AR 100 3168 1400
UCI Data Sets
Data Set class dim. instance
Diabetes 2 9 768
Breast 2 10 277
Breast_gy 2 10 277
Heart 2 13 270
Hearts 2 13 270
Cleve 2 14 296
Vote 2 17 435
German 2 25 1000
Ionosphere 2 35 351
Spectf 2 44 267
Wdbc 2 31 267
Air 3 65 359
X8D5K 5 9 1000
Glass 6 10 214
Glass_gy 6 10 214
TABLE I: Description of 18 data sets
Fig. 2: Face image samples. (a) Two subjects in the Yale database. (b) One subject in the ORL database. (c) One subject in the AR database.

a) Results on Yale Database

The Yale database contains 165 images of 15 individuals with various facial expressions and illuminations, as shown in Figure 2a. The images are taken under different emotions of the subjects, such as sad, happy and surprised. They are cropped to 32×32 pixels and converted to gray scale. In the experiment, we choose a random subset with t (= 4, 5, 6, 7) images per subject to form the training set and take the rest for test. We select different feature space dimensions for the various t values. For each given t, we repeat the experiment over 10 random splits of the data set and record the average accuracy in Figure 3. We also provide the maximal accuracy and the standard deviation of each algorithm for the different t values in Table II.

(a) t = 4
(b) t = 5
(c) t = 6
(d) t = 7
Fig. 3: Comparison of recognition rates with t images of each subject for training on the Yale database.
Algorithms t = 4 t = 5 t = 6 t = 7
SVM 64.00±2.57 (59) 69.89±5.48 (70) 72.67±3.28 (89) 78.50±6.73 (104)
NN 55.71±4.65 (59) 52.11±4.30 (20) 57.87±4.92 (20) 59.50±3.69 (104)
NFS 56.76±5.30 (59) 57.00±4.74 (70) 61.33±5.96 (89) 64.50±5.21 (104)
SRC 70.86±4.56 (59) 72.00±4.02 (70) 79.47±3.68 (89) 79.17±3.17 (104)
CRC 70.95±4.67 (50) 73.11±4.79 (60) 80.93±3.93 (70) 81.17±3.93 (50)
LSRC 71.24±2.49 (50) 76.22±3.93 (70) 78.40±3.86 (89) 85.00±5.56 (104)
ASRC 76.67±4.71 (59) 77.00±4.26 (60) 82.53±2.67 (89) 83.17±4.89 (104)
TABLE II: The maximal average accuracy and the standard deviation of different algorithms on the Yale database vs. the dimension of the feature space when the maximal accuracy is obtained.

It can be seen from Figure 3 that ASRC outperforms the other methods at all levels. The improvement of ASRC is more significant when the number of training samples is limited (t = 4, 5). The reason is that ASRC considers both correlation and sparsity: even with insufficient training samples, the variations in the query image can be captured by selecting sufficiently many correlated training samples. Meanwhile, compared with CRC, ASRC chooses only the most discriminative samples for representation, which leads to higher recognition rates. From Figure 3c and Figure 3d, when there are relatively enough training samples, we can see that ASRC, SRC and CRC all perform well, and SRC, LSRC and CRC have similarly good results. LSRC outperforms SRC in most cases and in some settings obtains better results than both SRC and CRC, because LSRC considers the local information of the dictionary. Though LSRC obtains better performance, LSRC, SRC and CRC are all inferior to our method, because ASRC considers the exact structure information of the dictionary and makes the model adaptive to that structure. Thus, with this complementary information, our algorithm obtains the best results. Compared with NN and NFS, the recognition rates of our method are at least 10% higher. SVM obtains much better results than NN and NFS, but is still inferior to our algorithm. To sum up, the experimental results show that the adaptive balance between sparsity and correlation contributes significantly to face recognition.

b) Results on ORL Database

The ORL database contains face images of 40 distinct subjects captured at different times with variations in illumination, facial expression and facial details (glasses), as shown in Figure 2b. There are no restrictions imposed on the expression, but the side movement or tilt is controlled within 20 degrees. For each subject, we select t (= 2, 3, 4, 5) images for training and use the rest for test. The average accuracy rates versus the selected features are recorded over 10 random splits and summarized in Figure 4. We also report the maximal accuracy and the standard deviation of each algorithm on the ORL database in Table III.

(a) t = 2
(b) t = 3
(c) t = 4
(d) t = 5
Fig. 4: Comparison of recognition rates with t images of each subject for training on the ORL database.
Algorithms t = 2 t = 3 t = 4 t = 5
SVM 72.38±4.00 (79) 83.25±1.94 (119) 89.67±1.90 (120) 93.20±1.60 (199)
NN 71.59±3.23 (79) 76.36±2.37 (119) 81.46±1.89 (30) 85.70±2.37 (30)
NFS 71.19±3.48 (79) 80.64±1.70 (119) 88.54±2.03 (120) 92.20±1.72 (120)
SRC 80.28±2.52 (60) 86.29±1.58 (119) 92.37±0.88 (120) 94.70±1.44 (199)
CRC 80.44±2.41 (60) 86.39±2.07 (90) 91.21±1.72 (90) 93.75±2.12 (199)
LSRC 79.81±2.46 (60) 87.14±1.87 (119) 92.00±1.22 (90) 94.00±2.12 (150)
ASRC 81.69±2.86 (79) 88.68±2.03 (119) 93.00±0.87 (159) 95.85±2.12 (150)
TABLE III: The maximal average accuracy and the standard deviation of different algorithms on the ORL database vs. the dimension of the feature space when the maximal accuracy is obtained.

From Figure 4 and Table III, we can see that ASRC obtains the best recognition rates at all levels. The performance of all the methods improves as the training samples increase, and our method always remains the best. When there are insufficient training samples (t = 2), SRC, CRC and LSRC have similar performances. The reason is that even though CRC considers the correlation, it does not perform sample selection for representation, which disturbs the final recognition results; LSRC considers only the local information, which is limited when training samples are insufficient. Our method takes advantage of the correlation between the query image and the training samples, and thus can obtain relatively more information. Compared with these methods, our method balances sparsity and correlation, so ASRC obtains stable and relatively better recognition rates. From Table III, we can also see that the algorithms do not necessarily obtain the maximal average accuracy at the largest dimension. For instance, in the case of t = 4, NN obtains the maximal average accuracy when the dimension of the feature space is 30, SVM, NFS and SRC when it is 120, and CRC and LSRC when it is 90. Our algorithm generally obtains its best result at the highest dimensions, and its recognition rates are also higher than the others.

c) Results on AR Database

The AR database consists of over 4000 frontal images of 126 subjects. In this experiment, a subset (with only illumination and expression changes) containing 50 male subjects and 50 female subjects is chosen from the AR database, as shown in Figure 2c. The images are cropped to 165×120 pixels. For each subject, we choose t (= 2, 7) images for training and take the rest for test. The experimental results are shown in Figure 5 and Table IV.

(a) t = 2
(b) t = 7
Fig. 5: Comparison of recognition rates with t images of each subject for training on the AR database.

It can be seen from Figure 5 and Table IV that our method still obtains the best results in all situations. The performance of NFS is very unstable. More specifically, in the case where t = 2, NFS outperforms SRC, LSRC, SVM and NN; in the case where t = 7, NFS is inferior to all the other algorithms when the dimension is less than 250. This is because NFS depends only on the representation of subspaces, which is easily affected by disturbances in the training samples. CRC and LSRC obtain relatively much more stable results in both cases compared with SRC, as more information is considered. Most of the algorithms obtain the maximal average accuracy when the dimension of the feature space reaches the largest. Generally speaking, with sufficient features, the algorithms can obtain much better results. More specifically, the maximal average accuracy of ASRC is 75.50% when t = 2, and it reaches 94.71% when t = 7.

Based on the results shown in Figures 3, 4 and 5 and the Tables II, III and IV, we can draw the following conclusions:

  • For all three databases, the best performance of ASRC consistently exceeds those of the competing methods. More specifically, when t = 4, the best recognition rate of ASRC on the Yale database is 76.67%, compared to 55.71% for NN, 56.76% for NFS, 70.86% for SRC, 70.95% for CRC and 71.24% for LSRC; when t = 5, the best rate of ASRC on the ORL database is 95.85%, compared to 85.70% for NN, 92.20% for NFS, 94.70% for SRC, 93.75% for CRC and 94.00% for LSRC; when t = 2, the best rate of ASRC on the AR database is 75.50%, compared to 41.42% for NN, 75.17% for NFS, 66.83% for SRC, 75.25% for CRC and 74.83% for LSRC.

  • Generally, SRC, CRC and LSRC show stable, good performance in most cases. Among them, when the training samples are insufficient (Figure 3a and Figure 4a), CRC tends to perform better due to the relatively richer information it uses, as more samples are selected for representation. LSRC outperforms the other two algorithms in Figure 3b and Figure 4b due to its consideration of both sparsity and locality; however, the local information alone is not sufficient. Our method ASRC, which considers the exact structure and correlation information of the dictionary, yields the best recognition results in most cases. It also outperforms the classical SVM algorithm and the other NFCs. The experimental results thus demonstrate that our algorithm outperforms state-of-the-art face recognition methods.

IV-B Face Recognition Despite Random Pixel Corruption

In this subsection, we test the robustness of ASRC on the three face data sets. The images in Yale and ORL are resized to 16×16 pixels, and the images in AR are resized to 66×48 pixels. For the Yale database, we choose 6 images per subject for training and use the rest for testing. For the ORL and AR databases, we randomly select half of the images for training and use the other half for testing.
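The per-subject split described above can be sketched as follows; `split_per_subject` is a hypothetical helper mirroring the protocol (e.g. `n_train=6` for Yale), not code from the paper.

```python
import numpy as np

def split_per_subject(X, y, n_train, rng=np.random.default_rng(0)):
    """Randomly pick n_train samples per subject for training, rest for test.

    X: (n, d) images as rows; y: (n,) subject ids. Illustrative helper only.
    """
    train_idx, test_idx = [], []
    for s in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == s))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```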

Algorithms   2 train samples/subject   7 train samples/subject
SVM          65.50 (180)               88.57 (540)
NN           41.42 (180)               80.14 (130)
NFS          75.17 (180)               75.14 (540)
SRC          66.83 (180)               92.86 (540)
CRC          75.25 (180)               94.57 (540)
LSRC         74.83 (180)               93.86 (540)
ASRC         75.50 (180)               94.71 (540)
TABLE IV: The maximal average accuracy (%) of different algorithms on the AR database with 2 and 7 training samples per subject; the dimension of the feature space at which the maximum is obtained is given in parentheses.

We corrupt a certain percentage of randomly chosen pixels in each test image, replacing their values with independent and identically distributed samples from a uniform distribution. The corrupted pixels are chosen anew for each test image, and the percentage of corrupted pixels varies from 10% to 80%. Figure 6 shows several example test images of three subjects.
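The corruption procedure described above can be sketched as follows; the noise range is assumed to be the image's own value range, since the paper does not specify the bounds of the uniform distribution.

```python
import numpy as np

def corrupt_pixels(img, fraction, rng=np.random.default_rng(0)):
    """Replace `fraction` of randomly chosen pixels with i.i.d. uniform noise.

    The uniform range [img.min(), img.max()] is an assumption; the paper only
    states that corrupted values are drawn from a uniform distribution.
    """
    out = img.astype(float).copy()
    k = int(round(fraction * out.size))
    idx = rng.choice(out.size, size=k, replace=False)  # pixels chosen anew per image
    out.flat[idx] = rng.uniform(out.min(), out.max(), size=k)
    return out
```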

We compare our method with six popular techniques to test its robustness. Figure 6 plots the recognition performance of ASRC and its six competitors over various levels of corruption. Figure 6a, which depicts the comparison results on the Yale database, shows that the proposed algorithm dramatically outperforms the others. From 0% up to 20% corruption, our algorithm recognizes the subjects with a recognition rate of over 80%. From 30% to 40% corruption, the recognition rates of our algorithm are over 10% higher than those of the other competitors. At roughly 50% and 60% corruption, ASRC still obtains the best recognition rates, at least 3% higher than the other competitors. On the ORL database (Figure 6b), our algorithm achieves a recognition rate of over 85% as the corruption increases from 0% to 30%. At 40% corruption, none of the compared algorithms achieves higher than a 70% recognition rate, while the proposed algorithm achieves 78%. Even at 50% corruption, the recognition rate is still over 60%. Figure 6c plots the recognition performance of ASRC and its competitors on the AR database. From 0% up to 40% corruption, ASRC and SRC correctly classify the subjects with a recognition rate of around 80%, much better than the other methods. In some cases on the Yale and AR databases, especially when the corruption reaches 60%, some algorithms (LSRC, SRC) reach slightly higher recognition rates than ASRC; however, the difference is not statistically significant.

(a) Yale
(b) ORL
(c) AR
Fig. 6: The recognition rate across the entire range of corruption for ASRC and its competing algorithms on (a) the Yale database, (b) the ORL database, and (c) the AR database.

The results indicate that the correlation information can compensate for the corrupted parts of the query images. The SRC approach is supposed to be robust to occlusion, as only a fraction of the coefficients obtained by ℓ1-minimization will be corrupted by the occlusion. However, ℓ1-minimization requires a large number of training samples; thus, SRC obtains relatively better recognition performance on the AR database, which provides 700 images for training. CRC uses ℓ2-minimization to express occluded images, so most of its coefficients will be corrupted; thus, CRC is not as robust as SRC and ASRC on two of the three databases (Figure 6b and Figure 6c). Our method outperforms the other algorithms on the Yale and ORL databases in most cases. On the AR database, our method performs as well as SRC and better than the other methods, because the correlation structure compensates for the corruption to some extent. Thus, properly harnessing sparsity and correlation improves robustness.
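The contrast between ℓ1- and ℓ2-regularized coding drawn above can be illustrated with a toy example: on a dictionary with correlated columns, an ℓ1 coder (here a plain ISTA loop, standing in for SRC's solver) concentrates the weight on a few atoms, while the ℓ2 (CRC-style) closed form spreads it across many. This is an illustration of the general phenomenon, not the paper's solvers; all names and parameters are chosen for the example.

```python
import numpy as np

def ridge_code(A, y, lam):
    """Closed-form l2 (CRC-style) coding: argmin ||y-Ax||^2 + lam||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def lasso_code(A, y, lam, iters=500):
    """Plain ISTA for argmin 0.5||y-Ax||^2 + lam||x||_1 (illustrative only)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

On the same query, counting coefficients above a small tolerance shows the ℓ1 code is sparse and the ℓ2 code dense, which is exactly why corruption contaminates most CRC coefficients but only a few SRC ones.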

IV-C Experimental Results on UCI Data Sets

Data Sets   SVM          NN           NFS          SRC          CRC          LSRC         ASRC
Diabetes    76.96±4.97   72.24±3.90   66.05±3.27   65.13±5.56   66.97±2.74   61.08±3.03   68.95±4.65
Breast      70.38±8.02   68.52±8.24   70.37±0.00   69.63±6.94   70.37±0.00   29.31±5.88   73.70±5.64
Breast_gy   70.69±5.88   68.89±9.11   72.59±6.34   70.00±7.50   75.93±5.59   28.57±5.78   75.56±5.84
Heart       85.19±5.52   79.26±6.10   81.11±4.77   78.15±7.70   82.59±5.25   77.04±7.96   85.93±6.49
Hearts      85.19±5.52   74.07±4.94   80.74±5.18   71.11±6.00   84.81±4.43   77.04±7.96   85.56±5.37
Cleve       81.67±8.58   71.72±7.05   77.24±5.91   70.34±9.91   81.03±10.57  80.24±6.14   82.07±9.02
Vote        93.66±3.63   91.90±3.01   76.67±4.60   75.24±6.66   92.62±3.63   61.41±6.01   93.10±4.41
German      77.00±1.94   67.90±3.51   70.00±0.00   75.80±3.61   74.60±2.91   28.50±4.14   75.60±4.95
Ionosphere  64.00±22.46  64.71±0.00   48.53±7.10   90.29±6.51   93.53±3.62   36.00±22.46  94.41±3.24
Spectf      80.07±33.80  78.15±6.8    78.52±7.37   68.15±11.61  79.26±11.61  79.77±34.07  84.07±7.21
Wdbc        96.66±2.85   95.36±2.10   88.39±1.26   92.68±2.14   92.32±2.53   62.51±18.28  96.43±2.06
Air         95.32±2.83   96.76±3.24   91.76±4.11   94.71±4.56   93.24±4.81   97.26±2.20   93.82±3.52
X8D5K       100.00±0.00  100.00±0.00  98.10±1.20   99.90±0.32   100.00±0.00  100.00±0.00  100.00±0.00
Glass       37.33±8.68   22.22±8.28   33.89±8.05   30.00±12.88  38.33±5.52   33.28±7.14   37.22±5.89
Glass_gy    38.29±6.76   27.78±7.86   37.78±9.00   32.22±9.00   40.56±9.09   26.06±10.01  40.00±8.20
TABLE V: Experimental results (accuracy % ± standard deviation) on UCI data sets

In order to test the effectiveness of our algorithm on general pattern recognition problems, we conduct experiments on 15 data sets selected from the UCI repository. The details of the data sets are described in Table I. On the UCI data sets, we adopt 10-fold cross validation and record the mean and standard deviation of the accuracy. The experimental results are shown in Table V, with the best results highlighted in bold.
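The evaluation protocol above (10-fold cross validation, reporting mean ± standard deviation of accuracy) can be sketched as follows; the `fit_predict` callback interface is a hypothetical convenience, not the paper's code.

```python
import numpy as np

def kfold_accuracy(X, y, fit_predict, k=10, rng=np.random.default_rng(0)):
    """Mean and std of accuracy over k folds, matching the protocol above.

    fit_predict(X_tr, y_tr, X_te) -> predicted labels; illustrative interface.
    """
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)          # k disjoint test folds
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = fit_predict(X[tr], y[tr], X[te])
        accs.append(np.mean(pred == y[te]))
    return float(np.mean(accs)), float(np.std(accs))
```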

From the results in Table V, we can see that ASRC outperforms NFS on all the data sets. Compared with SRC, our algorithm wins 14 out of 15 times, the exception being the data set German (by 0.2%). Compared to NN, ASRC wins 12 times and ties once, on the data set X8D5K. The accuracy rates of CRC are higher than those of ASRC on three data sets, but the differences are not statistically significant. The performance of LSRC is unstable, which limits its applications in general pattern recognition problems. As a classical classifier, SVM obtains the best results among the competitors, but it is still inferior to our algorithm on 8 data sets. To sum up, the experimental results demonstrate that our algorithm is an effective general pattern recognition method.

Experimental results on face image databases validate the effectiveness of our method on face recognition. The correlation information among training images can compensate the pixel occlusion, image misalignment and variations. Meanwhile, the adaptive integration of the dictionary correlation and sparsity also helps handle general pattern recognition problems, which is validated on the UCI data sets.

V Conclusion

In this paper, we propose an Adaptive Sparse Representation based Classification (ASRC) method for face recognition. Different from SRC and CRC, ASRC considers both sparsity and correlation in the representation. Exploiting sparsity, ASRC selects the most discriminative samples for representation; exploiting correlation, ASRC can rectify occlusion, corruption or variations using training images of other subjects. ASRC obtains results comparable to SRC when the dictionary has low correlation, and performs as well as CRC when the data are highly correlated. In other cases, ASRC obtains an accurate linear representation with the most related and discriminative samples, which guarantees good recognition performance. Compared with LSRC, our algorithm considers complementary correlation information and is adaptive to the structure of the dictionary. Experimental results on real-world face image data sets show that our method outperforms state-of-the-art face recognition methods, such as SVM, NN, NFS, SRC, CRC and LSRC, in terms of recognition precision and robustness. Moreover, ASRC can be treated as a robust general classifier and applied to other problems such as motion segmentation, activity recognition and subspace learning. The experiments conducted on benchmark data sets from the UCI repository also validate the effectiveness of our method on general pattern recognition problems. Thus, our algorithm can be applied not only to face recognition but also to other important tasks, such as feature selection and event detection in multimedia.

Acknowledgment

The authors would like to thank the anonymous reviewers for their constructive comments. This work is supported by the National 973 Program of China under grant 2014CB347600, the NSFC under grant nos. 61273292, 61272393, 61322201, the Specialized Research Fund for the Doctoral Program of Higher Education under grant 20130111110011, the Program for New Century Excellent Talents in University under grant NCET-12-0836, the Open Project Program of the National Laboratory of Pattern Recognition (NLPR), and Singapore Ministry of Education under research Grant MOE2010-T2-1-087. J. Wang would like to thank the China Scholarship Council for their support.

References

  • [1] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” Acm Computing Surveys (CSUR), vol. 35, no. 4, pp. 399–458, 2003.
  • [2] E. G. Ortiz, A. Wright, and M. Shah, “Face recognition in movie trailers via mean sequence sparse representation-based classification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2013.
  • [3] O. Ocegueda, T. Fang, S. Shah, and I. Kakadiaris, “3d-face discriminant analysis using gauss-markov posterior marginals,” vol. 3, no. 35, pp. 728–739, 2013.
  • [4] R. Brunelli and T. Poggio, “Face recognition: Features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042–1052, 1993.
  • [5] B. Heisele, P. Ho, J. Wu, and T. Poggio, “Face recognition: component-based versus global approaches,” Computer Vision and Image Understanding, vol. 91, no. 1, pp. 6–21, 2003.
  • [6] E. Kokiopoulou and Y. Saad, “Orthogonal neighborhood preserving projections: A projection-based dimensionality reduction technique,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2143–2156, 2007.
  • [7] J. Lu, Y. Tan, and G. Wang, “Discriminative multi-manifold analysis for face recognition from a single training sample per person,” in IEEE International Conference on Computer Vision, 2011, pp. 1943–1950.
  • [8] M. Turk and A. Pentland, “Face recognition using eigenfaces,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991, pp. 586–591.
  • [9] J. G. Daugman et al., “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” Optical Society of America, Journal, A: Optics and Image Science, vol. 2, no. 7, pp. 1160–1169, 1985.
  • [10] T. Zhang, Y. Tang, B. Fang, Z. Shang, and X. Liu, “Face recognition under varying illumination using gradientfaces,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2599–2606, 2009.
  • [11] G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “Subspace learning from image gradient orientations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2454–2466, 2012.
  • [12] M. Chan, C. T., J. Kittler, and M. Pietikainen, “Multiscale local phase quantisation for robust component-based face recognition using kernel fusion of multiple descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1164–1177, 2013.
  • [13] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
  • [14] Y. Freund, R. Schapire, and N. Abe, “A short introduction to boosting,” Journal-Japanese Society For Artificial Intelligence, vol. 14, no. 771-780, p. 1612, 1999.
  • [15] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
  • [16] S. Shan, W. Gao, and D. Zhao, “Face identification from a single example image based on face-specific subspace (fss),” in IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, 2002, pp. 2125–2128.
  • [17] E. Elhamifar and R. Vidal, “Robust classification using structured sparse representation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1873–1879.
  • [18] S. Gao, I. Tsang, and L. Chia, “Kernel sparse representation for image classification and face recognition,” in European Conference on Computer Vision, 2010, pp. 1–14.
  • [19] B. Cheng, J. Yang, S. Yan, and T. Fu, “Learning with l1-graph for image analysis,” IEEE Transactions on Image Processing, vol. 19, no. 4, pp. 858–866, 2010.
  • [20] J. Yang, K. Yu, Y. Gong, and T. Huang, “Linear spatial pyramid matching using sparse coding for image classification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1794–1801.
  • [21] W. Deng, J. Hu, and J. Guo, “Extended SRC: Undersampled face recognition via intraclass variant dictionary,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1864–1870, 2012.
  • [22] G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “Sparse representations of image gradient orientations for visual recognition and tracking,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2011, pp. 26–33.
  • [23] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 3360–3367.
  • [24] J. Lu, Y. Tan, G. Wang, and Y. Gao, “Image-to-set face recognition using locality repulsion projections and sparse reconstruction-based similarity measure,” 2013.
  • [25] M. Wang, Y. Gao, K. Lu, and Y. Rui, “View-based discriminative probabilistic modeling for 3d object retrieval and recognition,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1395–1407, 2013.
  • [26] M. Wang, B. Ni, X.-S. Hua, and T.-S. Chua, “Assistive tagging: A survey of multimedia tagging with human-computer joint exploration,” ACM Computing Surveys, vol. 44, no. 4, p. 25, 2012.
  • [27] R. Rigamonti, M. Brown, and V. Lepetit, “Are sparse representations really relevant for image classification?” in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1545–1552.
  • [28] G. Edouard, O. Guillaume, and B. Francis, “Trace lasso: a trace norm regularization for correlated designs,” in Advances in Neural Information Processing Systems, 2011, pp. 2187–2195.
  • [29] Y. Yang, Z. Ma, A. Hauptmann, and N. Sebe, “Feature selection for multimedia analysis by sharing information among multiple tasks,” IEEE Transactions on Multimedia, vol. 15, no. 3, pp. 661–669, 2013.
  • [30] Z. Ma, Y. Yang, Y. Cai, N. Sebe, and A. G. Hauptmann, “Knowledge adaptation for ad hoc multimedia event detection with few exemplars,” in Proceedings of the 20th ACM international conference on Multimedia.   ACM, 2012, pp. 469–478.
  • [31] S. Li and J. Lu, “Face recognition using the nearest feature line method,” IEEE Transactions on Neural Networks, vol. 10, no. 2, pp. 439–443, 1999.
  • [32] J. Chien and C. Wu, “Discriminant waveletfaces and nearest feature classifiers for face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.
  • [33] D. Donoho, “For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.
  • [34] E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on pure and applied mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
  • [35] A. Wagner, J. Wright, A. Ganesh, Z. Zhou, and Y. Ma, “Towards a practical face recognition system: Robust registration and illumination by sparse representation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 597–604.
  • [36] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
  • [37] A. Li, S. Shan, X. Chen, and W. Gao, “Maximizing intra-individual correlations for face recognition across pose differences,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 605–611.
  • [38] L. Zhang, M. Yang, and X. Feng, “Sparse representation or collaborative representation: Which helps face recognition?” in IEEE International Conference on Computer Vision, 2011, pp. 471–478.
  • [39] C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan, “Robust and efficient subspace segmentation via least squares regression,” in ECCV, 2012.
  • [40] E. J. Candes, X. D. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, 2011.
  • [41] G. Liu, Z. Lin, and Y. Yu, “Robust subspace segmentation by low-rank representation,” in ICML, 2010, pp. 663–670.
  • [42] Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” UIUC Technical Report UILU-ENG-09-2215, Tech. Rep., 2009.
  • [43] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  • [44] E. T. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for ℓ1-minimization: Methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
  • [45] P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
  • [46] F. Samaria and A. Harter, “Parameterisation of a stochastic model for human face identification,” in IEEE Workshop on Applications of Computer Vision, 1994, pp. 138–142.
  • [47] A. Martinez, “The ar face database,” CVC Technical Report, vol. 24, 1998.
  • [48] K. Bache and M. Lichman, “UCI machine learning repository,” 2013. [Online]. Available: http://archive.ics.uci.edu/ml