Face recognition (FR) has been an active field of research in biometrics for over two decades. Current methods work well when the test images are captured under controlled conditions. However, the performance of most algorithms degrades significantly when they are applied to images taken under uncontrolled conditions, where there is no control over pose, illumination, expression or resolution of the face image. Image resolution is an important parameter in many practical scenarios such as surveillance, where high resolution cameras are not deployed due to cost and data storage constraints, and where, further, there is no control over the distance of the subject from the camera.
Many methods have been proposed in the vision literature to deal with this resolution problem in FR. Most of these methods apply a super-resolution (SR) technique to increase the resolution of the images so that the recovered high-resolution (HR) images can be used for recognition. One major drawback of applying SR techniques is that the recovered HR images may contain serious artifacts. This is often the case when the resolution of the input image is very low. As a result, these recovered images may not look like images of the same person, and recognition performance may degrade significantly.
In practical scenarios, resolution change is also coupled with other variations such as pose change, illumination variation and expression. Algorithms specifically designed to deal with LR images quite often fail to handle these variations. Hence, it is essential to account for these parameters while designing a robust method for low-resolution FR. To this end, in this paper, we present a generative approach to low-resolution FR, based on learning class-specific dictionaries, that is also robust to illumination variations. One major advantage of generative approaches is that they are known to be less sensitive to noise than discriminative approaches. Furthermore, we kernelize the learning algorithm to handle non-linearity in the data samples and introduce a bi-level sparse coding framework for robust recognition.
The training stage of our method consists of three main steps. First, given the HR training samples from each class, we use an image relighting method to generate multiple images of the same subject under different lighting, so that robustness to illumination changes can be realized. Second, the resolution of the enlarged gallery images from each class is matched to that of the probe image. Finally, class- and resolution-specific dictionaries are trained for each class. In the testing phase, a novel LR image is projected onto the span of the atoms in each learned dictionary, and the residual vectors are then used to classify the subject. A flowchart of the proposed algorithm is shown in Figure 1.
I-A Paper Organization
II Previous Work
In this section, we review some of the recent FR methods that can deal with poor resolution. These techniques can be broadly divided into the following categories.
II-A SR-based approaches
SR is the method of estimating an HR image given a degraded LR observation. The LR image model is often given as
$$\mathbf{y} = DB\,\mathbf{x} + \boldsymbol{\eta},$$
where $D$, $B$ and $\boldsymbol{\eta}$ are the downsampling matrix, the blurring matrix and the noise, respectively. Earlier works for solving the above problem were based on taking multiple LR inputs and combining them to produce the HR image. A classical work by Baker and Kanade showed that methods using multiple LR images with smoothness priors fail to produce good results as the resolution factor increases. They also proposed a face hallucination method for super-resolving face images. Subsequently, there have been works using a single image for SR, such as example-based SR, SR using neighborhood embedding, and sparse representation-based SR. While these methods can be used for super-resolving face images and subsequent recognition, methods have also been proposed for specifically handling the problem for faces.
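As a concrete illustration, the observation model above can be simulated by a Gaussian blur followed by decimation. The sketch below is ours, not the paper's implementation; the blur width, decimation factor and noise level are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr_img, factor=4, blur_sigma=1.0, noise_std=0.01, rng=None):
    """Simulate the LR observation model y = D B x + n:
    Gaussian blur (B), decimation by `factor` (D), additive noise (n)."""
    rng = np.random.default_rng(0) if rng is None else rng
    blurred = gaussian_filter(hr_img.astype(float), sigma=blur_sigma)  # B x
    lr = blurred[::factor, ::factor]                                   # D (B x)
    return lr + noise_std * rng.standard_normal(lr.shape)              # + n

# Example: a 64x64 "HR" image degraded to 16x16.
hr = np.random.default_rng(1).random((64, 64))
lr = degrade(hr, factor=4)
```

The same routine can be used to generate resolution-matched gallery images during training.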
In particular, an eigenface-domain SR method for FR was proposed by Gunturk et al. This method performs FR at low resolution by super-resolving multiple LR images using their PCA-domain representations. Given an LR face image, Jia and Gong propose to directly compute a maximum likelihood identity parameter vector in the HR tensor space, which can be used for both SR and recognition. Hennings-Yeomans et al. presented a Tikhonov regularization method that combines the SR and recognition steps into one. Zou and Yuen proposed a relational learning approach for super-resolution and recognition of low resolution faces.
II-B Metric learning-based approaches
Though LR images are not directly suitable for face recognition, it is also not necessary to super-resolve the image before recognition, as the problem of recognition is not the same as SR. Based on this motivation, several different approaches to this problem have been suggested. Coupled metric learning attempts to solve this problem by mapping the LR image to a new subspace where higher recognition rates can be achieved. A similar approach for improving the matching performance on LR images using multidimensional scaling was proposed by Biswas et al. in [12, 13, 14]. Further, Ren et al. used coupled kernel methods for low resolution recognition. A coupled marginal Fisher analysis method was proposed by Siena et al. Lei et al. also proposed a coupled discriminant analysis framework for heterogeneous face recognition.
II-C Other methods
There have been works that solve the problem of unconstrained FR using videos. In particular, Arandjelovic and Cipolla use a video database of LR face images with variability in pose and illumination. Their method combines a photometric model of image formation with a statistical model of generic face appearance variation to deal with illumination. To handle pose variation, it learns local appearance manifold structure and a robust same-identity likelihood.
A change in image resolution changes the scale of the image, and scale change has a multiplicative effect on distances in the image. Hence, if the image is represented in the log-polar domain, a scale change leads to a translation in that domain. Based on this, an FR approach was suggested by Hotta et al. to make the algorithm scale invariant, by extracting shift-invariant features in the log-polar domain.
Additional methods for LR FR include a correlation filter-based approach and a support vector data description method. 3D face modelling has also been used to address the LR face recognition problem. Choi et al. present an interesting study on the use of color for degraded face recognition.
III Proposed Approach
In this section, we present the details of our proposed low-resolution FR algorithm based on learning class specific dictionaries.
III-A Image Relighting
As discussed earlier, the resolution change is usually coupled with other parameters such as illumination variation. In this section, we introduce an image relighting method that can deal with this illumination problem in LR face recognition. The idea is to capture various illumination conditions using the HR training samples, and subsequently use the expanded gallery for recognition at low resolutions.
Assuming the Lambertian reflectance model for the facial surface, the HR intensity image is given by Lambert's cosine law as follows:
$$I(x, y) = \rho(x, y)\,\max\big(\mathbf{n}(x, y)^{T}\mathbf{s},\, 0\big), \qquad (1)$$
where $I(x, y)$ is the pixel intensity at location $(x, y)$, $\mathbf{s}$ is the light source direction, $\rho(x, y)$ is the surface albedo at location $(x, y)$, and $\mathbf{n}(x, y)$ is the surface normal of the corresponding surface point. Given the face image $I$, image relighting involves estimating $\rho$, $\mathbf{n}$ and $\mathbf{s}$, which is an extremely ill-posed problem. To overcome this, we use 3D facial normal data to first estimate an average surface normal, $\bar{\mathbf{n}}$. Further, the model is non-linear due to the $\max(\cdot, 0)$ term in (1). However, the shadow points do not reveal any information about albedo. Hence, we neglect this term in further discussion. The albedo and source direction can now be estimated as follows:
The source direction, $\mathbf{s}$, can be estimated using a linear least-squares approach:
An initial estimate of the albedo, $\hat{\rho}$, can be obtained as:
$$\hat{\rho}(x, y) = \frac{I(x, y)}{\bar{\mathbf{n}}(x, y)^{T}\hat{\mathbf{s}}}.$$
The final albedo estimate is obtained using a minimum mean-squared error approach based on the Wiener filtering framework:
where $\hat{\rho}_{\mathrm{MMSE}}$ denotes the minimum mean-squared error (MMSE) estimate of the albedo.
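The first two estimation steps above can be sketched in a few lines. This is our illustration under the simplified (shadow-free) Lambertian model, with our own variable names; `albedo0` stands in for the initial/average albedo prior, not a quantity named in the paper:

```python
import numpy as np

def estimate_light_and_albedo(img, normals, albedo0):
    """Sketch of two standard steps under I = albedo * (n . s):
    1) light direction s by linear least squares on  img ~ (albedo0 * n) . s
    2) initial albedo as  img / (n . s),  keeping the prior at badly-lit pixels."""
    B = albedo0[..., None] * normals                         # (H, W, 3) regressors
    s, *_ = np.linalg.lstsq(B.reshape(-1, 3), img.ravel(), rcond=None)
    s = s / np.linalg.norm(s)                                # unit light direction
    shading = normals @ s                                    # n . s per pixel
    albedo = np.where(shading > 1e-3,
                      img / np.clip(shading, 1e-3, None), albedo0)
    return s, albedo
```

The thresholding of the shading term is a guard against near-shadow pixels, which, as noted above, carry no albedo information.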
Using the estimated albedo map and average surface normal, we can generate new images under any illumination condition using the image formation model (1). It was shown by Lee et al. that an image of an arbitrarily illuminated object can be approximated by a linear combination of images of the same object in the same pose, illuminated by nine different light sources placed at preselected positions.
Hence, the image formation equation can be rewritten as
$$I = \sum_{i=1}^{9} a_i\, \rho\, \max\big(\mathbf{n}^{T}\mathbf{s}_i,\, 0\big),$$
where $a_i$ are the linear coefficients and $\mathbf{s}_i$ are pre-specified illumination directions. Since the objective is to generate HR gallery images sufficient to account for any illumination in the probe image, we generate images under the pre-specified illumination conditions and use them in the gallery. Figure 2 shows some relighted HR images along with the corresponding input and LR images. Furthermore, as this approximation holds irrespective of the resolution of the LR image, the same set of gallery images can be used for all resolutions.
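The rendering step itself follows directly from (1): given the recovered albedo map and (average) surface normals, relit gallery images are produced per source direction. A minimal sketch, with illustrative inputs of our own choosing:

```python
import numpy as np

def relight(albedo, normals, light_dirs):
    """Render relit images from an albedo map and per-pixel unit normals
    using the Lambertian model I = albedo * max(n . s, 0); clipping at zero
    produces attached shadows.  light_dirs: (K, 3) -> returns (K, H, W)."""
    shading = np.einsum('hwc,kc->khw', normals, light_dirs)   # n . s_k per pixel
    return albedo[None] * np.clip(shading, 0.0, None)

# Frontal example: a flat surface lit head-on reproduces the albedo map,
# while a grazing side light leaves it fully in shadow.
albedo = np.random.default_rng(0).random((4, 4))
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0
imgs = relight(albedo, normals, np.array([[0., 0., 1.], [1., 0., 0.]]))
```

In our pipeline the nine pre-specified directions would be passed as `light_dirs`, yielding the expanded gallery in one call.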
III-B Low Resolution Dictionary Learning
In LR face recognition, given labeled HR training images, the objective is to identify the class of a novel LR probe face image. Suppose that we are given $C$ distinct face classes and a set of HR training images per class, $\{\mathbf{x}_{c,j}\}_{j=1}^{N_c}$. Here, $N_c$ corresponds to the total number of images in class $c$, including the relighted images. We identify a grayscale image as an $n$-dimensional vector, $\mathbf{x}$, which can be obtained by stacking its columns. Let
be an $n \times N_c$ matrix of training images corresponding to the $c$-th class. For resolution- and illumination-robust recognition, the matrix $\mathbf{X}_c$ is pre-multiplied by the downsampling and blurring matrices, $D$ and $B$. Here, $B$ has a fixed dimension of $n \times n$ and $D$ will be of size $m \times n$, where $m < n$, the LR probe being a vectorized grayscale image of length $m$. The resolution-specific training matrix, $\mathbf{Y}_c$, is thus created as
$$\mathbf{Y}_c = DB\,\mathbf{X}_c. \qquad (3)$$
Given this matrix, we seek the dictionary, $\mathbf{D}_c$, that provides the best representation for each element of this matrix. One can obtain this by finding a dictionary $\mathbf{D}_c$ and a sparse matrix $\boldsymbol{\Gamma}$ that minimize the following representation error:
where $\boldsymbol{\gamma}_i$ represent the columns of $\boldsymbol{\Gamma}$ and the sparsity measure $\|\cdot\|_0$ counts the number of nonzero elements in the representation. Here, $\|\cdot\|_F$ denotes the Frobenius norm, defined as $\|\mathbf{A}\|_F = \sqrt{\sum_{i,j} A_{ij}^2}$. Many approaches have been proposed in the literature for solving such optimization problems. In this paper, we adapt the K-SVD algorithm for solving (III-B) due to its simplicity and fast convergence. The K-SVD algorithm alternates between sparse-coding and dictionary-update steps. In the sparse-coding step, $\mathbf{D}_c$ is fixed and the representation vectors $\boldsymbol{\gamma}_i$ are found for each example. Then, with the sparse codes fixed, the dictionary is updated atom-by-atom in an efficient way. See the K-SVD paper for more details on the dictionary learning algorithm.
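The K-SVD alternation can be sketched compactly in NumPy. This is a simplified illustration of the technique (greedy OMP for the sparse-coding step, rank-1 SVD updates per atom), not the authors' implementation, and the sizes below are arbitrary:

```python
import numpy as np

def omp(D, y, T):
    """Greedy orthogonal matching pursuit: at most T nonzeros with y ~ D x."""
    r, idx = y.astype(float).copy(), []
    for _ in range(T):
        idx.append(int(np.argmax(np.abs(D.T @ r))))          # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None) # refit on support
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, T, n_iter=10, seed=0):
    """Alternate OMP sparse coding with atom-by-atom rank-1 SVD updates."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.stack([omp(D, Y[:, i], T) for i in range(Y.shape[1])], axis=1)
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]            # samples that use atom k
            if users.size == 0:
                continue
            # error matrix with atom k's own contribution removed
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k], X[k, users] = U[:, 0], s[0] * Vt[0]
    return D, X
```

In our setting, `ksvd` would be run once per class on the resolution-specific training matrix to produce the class dictionary.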
Classification: Given an LR probe image, it is column-stacked to give the vector $\mathbf{y}$. It is projected onto the span of the atoms in each class dictionary, $\mathbf{D}_c$, using the orthogonal projector
$$P_c = \mathbf{D}_c\big(\mathbf{D}_c^{T}\mathbf{D}_c\big)^{-1}\mathbf{D}_c^{T}.$$
The approximation and residual vectors can then be calculated as
$$\hat{\mathbf{y}}_c = P_c\,\mathbf{y} = \mathbf{D}_c\,\boldsymbol{\alpha}_c, \qquad \mathbf{r}_c = \mathbf{y} - \hat{\mathbf{y}}_c = (I - P_c)\,\mathbf{y},$$
where $I$ is the identity matrix and
$$\boldsymbol{\alpha}_c = \big(\mathbf{D}_c^{T}\mathbf{D}_c\big)^{-1}\mathbf{D}_c^{T}\,\mathbf{y}$$
are the coefficients. Since the K-SVD algorithm finds the dictionary, $\mathbf{D}_c$, that leads to the best representation for the examples in $\mathbf{Y}_c$, $\|\mathbf{r}_c\|_2$ will be small if $\mathbf{y}$ belongs to the $c$-th class and large for the other classes. Based on this, we can classify $\mathbf{y}$ by assigning it to the class, $c^{*}$, that gives the lowest reconstruction error:
$$c^{*} = \arg\min_{c} \|\mathbf{r}_c\|_2.$$
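The projection-and-residual rule above amounts to a few lines of linear algebra; a minimal sketch (using the pseudoinverse, which equals the projector formula when the dictionary has full column rank):

```python
import numpy as np

def classify_by_residual(y, dictionaries):
    """Assign probe y to the class whose dictionary span approximates it best,
    i.e. smallest residual ||y - P_c y|| with P_c = D_c D_c^+ (orthogonal
    projector onto span(D_c))."""
    residuals = [np.linalg.norm(y - D @ (np.linalg.pinv(D) @ y))
                 for D in dictionaries]
    return int(np.argmin(residuals)), residuals
```

For example, with two toy "dictionaries" spanning disjoint coordinate subspaces, a probe lying in the first subspace is assigned to class 0 with zero residual.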
Generic Dictionary Learning: The class-specific dictionaries learnt above can be extended to use features other than intensity images. Specifically, the dictionary can be learnt using features such as eigenfaces extracted from the training matrix. However, as equation (3) does not hold for feature matrices, the resolution-specific feature matrix is extracted directly from $\mathbf{Y}_c$. Our Synthesis-based LR FR (SLRFR) algorithm is summarized in Figure 3.
III-C Non-linear Dictionary Learning
The class identities in the face dataset may not be linearly separable. Hence, we also extend the SLRFR framework to the kernel space. This essentially requires the dictionary learning model to be non-linear.
Let $\Phi : \mathbb{R}^{m} \rightarrow \mathcal{F}$ be a non-linear mapping from the $m$-dimensional space into a dot product space $\mathcal{F}$. A non-linear dictionary can be trained in the feature space by solving the following optimization problem:
In (9) we have used the following model for the dictionary in the feature space:
$$\mathbf{D} = \Phi(\mathbf{Y})\mathbf{A},$$
since it can be shown that the dictionary lies in the linear span of the samples $\Phi(\mathbf{Y})$, where $\mathbf{A}$ is a matrix whose columns define the dictionary atoms. This model provides adaptivity via modification of the matrix $\mathbf{A}$. Through some algebraic manipulations, the cost function in (9) can be rewritten as
where $\mathbf{K}$ is a kernel matrix whose elements are computed from $\mathbf{K}(i, j) = \langle \Phi(\mathbf{y}_i), \Phi(\mathbf{y}_j) \rangle$.
It is apparent that the objective function is feasible since it only involves the finite-dimensional matrix $\mathbf{A}$, instead of dealing with a possibly infinite-dimensional dictionary.
An important property of this formulation is that the computation of $\mathbf{K}$ only requires dot products. Therefore, we are able to employ Mercer kernel functions to compute these dot products without explicitly carrying out the mapping $\Phi$. Some commonly used kernels include polynomial kernels,
$$\kappa(\mathbf{y}_i, \mathbf{y}_j) = \big(\mathbf{y}_i^{T}\mathbf{y}_j + c\big)^{d},$$
and Gaussian kernels,
$$\kappa(\mathbf{y}_i, \mathbf{y}_j) = \exp\!\big(-\gamma \|\mathbf{y}_i - \mathbf{y}_j\|_2^{2}\big),$$
where $c$, $d$ and $\gamma$ are the kernel parameters.
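Both kernels are one-liners over data matrices; a short sketch (rows are samples, and the parameter values here are illustrative):

```python
import numpy as np

def poly_kernel(X, Y, c=1.0, d=2):
    """Polynomial kernel k(x, y) = (x . y + c)^d for all row pairs of X, Y."""
    return (X @ Y.T + c) ** d

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clamp tiny negative round-off
```

Calling either with the training matrix against itself yields the Gram matrix $\mathbf{K}$ used throughout the kernel formulation.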
Similar to the optimization of (III-B) using the linear K-SVD algorithm, the optimization of (9) involves sparse coding and dictionary update steps in the feature space, which results in the kernel dictionary learning algorithm. Details of the optimization can be found in Appendix A.
Classification: Let $\{\mathbf{A}_c\}_{c=1}^{C}$ denote the learned kernel dictionaries for the $C$ classes. Let $\mathbf{y}$ be a vectorized LR probe image. We first find coefficient vectors $\mathbf{x}_c$, with at most $T$ non-zero coefficients, such that $\Phi(\mathbf{Y}_c)\mathbf{A}_c\mathbf{x}_c$ approximates $\Phi(\mathbf{y})$, by minimizing the following problem
for all $c = 1, \ldots, C$. The above problem can be solved by the kernel orthogonal matching pursuit (KOMP) algorithm. The reconstruction error is then computed as
Similar to the linear case, once the residuals are found, we classify $\mathbf{y}$ by assigning it to the class, $c^{*}$, that gives the lowest reconstruction error:
$$c^{*} = \arg\min_{c}\, r_c(\mathbf{y}).$$
Our kernel Synthesis-based LR FR (kerSLRFR) algorithm is summarized in Figure 4.
III-D Joint Non-linear Dictionary Learning
In the previous sections, we described methods to learn resolution-specific dictionaries for the linear and non-linear cases. However, even though such dictionaries can capture class-specific variations, the recognition performance degrades at very low resolutions. Hence, the information available in the HR training images must be exploited to make the method robust. To this end, we propose a framework for learning joint dictionaries for HR images and the corresponding LR images. We achieve this by sharing sparse codes between the HR and LR dictionaries. This regularizes the learned LR dictionary to produce sparse codes similar to those of the HR dictionary, thus making it more robust. The proposed formulation is described as follows.
Let $\Phi : \mathbb{R}^{m} \rightarrow \mathcal{F}$ be a non-linear mapping from the $m$-dimensional space into a dot product space $\mathcal{F}$. We seek to learn the HR and LR dictionaries, $\mathbf{A}_H$ and $\mathbf{A}_L$, by solving the optimization problem:
where $\lambda$ is a hyperparameter. This can be re-formulated as:
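The shared-code idea can be illustrated in the linear case by stacking HR and weighted LR samples, so that a single sparse code must reconstruct both views. This is our own simplified analogue of the joint formulation, not the kernel algorithm itself, and all names below are ours:

```python
import numpy as np

def _omp(D, y, T):
    """Greedy OMP: at most T nonzeros with y ~ D x."""
    r, idx = y.astype(float).copy(), []
    for _ in range(T):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        c, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ c
    x = np.zeros(D.shape[1])
    x[idx] = c
    return x

def joint_dictionary(Y_hr, Y_lr, n_atoms, T, lam=1.0, n_iter=5, seed=0):
    """Linear sketch of the shared-code idea: stack HR and (weighted) LR
    samples so one code matrix X reconstructs both, then alternate OMP
    coding with a least-squares dictionary update."""
    Y = np.vstack([Y_hr, np.sqrt(lam) * Y_lr])
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.stack([_omp(D, Y[:, i], T) for i in range(Y.shape[1])], axis=1)
        D = Y @ np.linalg.pinv(X)                    # least-squares update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    X = np.stack([_omp(D, Y[:, i], T) for i in range(Y.shape[1])], axis=1)
    d = Y_hr.shape[0]
    return D[:d], D[d:] / np.sqrt(lam), X            # D_hr, D_lr, shared codes
```

Splitting the stacked dictionary recovers coupled HR and LR dictionaries that, by construction, share one code per training sample.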
Classification: Let $\{\mathbf{A}_{L,c}\}_{c=1}^{C}$ denote the learned LR dictionaries for the $C$ classes. Then a low resolution probe can be classified using the KOMP algorithm, as described in (11), (12) and (13), by substituting the learned LR dictionary for the dictionary term. The proposed algorithm, joint kernel SLRFR (jointKerSLRFR), is summarized in Figure 5.
IV Experimental Results
To demonstrate the effectiveness of our method, in this section we present experimental results on various face recognition datasets. We demonstrate the effectiveness of the proposed recognition framework and compare it with metric learning [12, 11] and super-resolution [10, 9] based methods. For all the experiments, we learnt the dictionary elements using PCA features.
IV-A FRGC Dataset
We also evaluated on Experiment 1 of the FRGC dataset. It consists of gallery and probe images captured under a controlled setting, with one gallery image per subject. A separate training set is also available, which was used to learn the PCA basis.
Implementation: The resolution of the HR images was fixed, and probe images at lower resolutions were created by smoothing and downsampling the HR probe images. From each gallery image, 5 differently illuminated images were produced, which were flipped to give 10 images per subject. The experiments were done at multiple resolutions, thus validating the method across resolutions. We also tested the CLPM algorithm and PCA performance on the expanded gallery for a fair comparison. We also report the recognition rate for PCA using the original gallery images, to demonstrate the utility of gallery extension at low resolutions. Results from other algorithms are also tabulated. We chose the RBF kernel for testing kerSLRFR and jointKerSLRFR, and set the hyperparameter $\lambda$ for jointKerSLRFR. The kernel parameter, $\gamma$, was obtained through cross-validation for both the HR and LR data.
Observations: Figure 6 and Table I show that the proposed methods clearly outperform previous algorithms. The proposed algorithm, SLRFR, improves over the CLPM algorithm at all resolutions, while kerSLRFR further boosts the performance. jointKerSLRFR shows the best performance among all the methods; the joint sparse coding framework clearly helps at low resolutions. Further, PCA with the extended gallery set also improves over PCA with a single gallery image. This shows that our method of gallery extension can be coupled with existing face recognition algorithms to improve performance at low resolutions.
[Table I: recognition rates on FRGC at each probe resolution for MDS, S2R2, VLR, SLRFR, kerSLRFR and jointKerSLRFR; numeric entries not recovered.]
Sensitivity to noise: Low resolution images are often corrupted with noise, so sensitivity to noise is important in assessing the performance of different algorithms. Figure 7 shows the recognition rate for different algorithms with increasing noise level. It can be seen that CLPM shows a sharp decline with increasing noise, but the proposed approaches, SLRFR, kerSLRFR and jointKerSLRFR, remain stable. This is because the CLPM algorithm learns a model tailored to noise-free low resolution images, whereas the generative approach in the proposed methods leads to stable performance with increasing noise.
IV-B CMU-PIE dataset
The PIE dataset consists of subjects imaged in frontal pose under a number of different illumination conditions.
Implementation: We chose an initial subset of subjects, with randomly chosen illuminations, as the training set to learn the PCA basis. For the remaining subjects and illumination conditions, the experiment was done by choosing one gallery image per subject and taking the remaining images as probes. The procedure was repeated for all the images, and the final recognition rate was obtained by averaging over all the images. The size of the HR images was fixed. The LR images were obtained by smoothing followed by downsampling of the HR images. For each gallery image, images under different illuminations, produced using the gallery extension method, and the corresponding flipped images were added to the gallery set. The RBF kernel was chosen for kerSLRFR and jointKerSLRFR, and the kernel parameter was set through cross-validation.
[Table II: recognition rates on CMU-PIE at each probe resolution for MDS, VLR*, SLRFR, kerSLRFR and jointKerSLRFR; numeric entries not recovered.]
Observations: Figures 8 and 9 and Table II show that the proposed method clearly outperforms previous algorithms. The proposed algorithms show a substantial improvement over PCA with the original gallery set in rank-one recognition rate, and perform better than the CLPM method at the lowest probe resolution. PCA with the extended gallery set also improves over using a single gallery image. This shows that our method of gallery extension can be coupled with existing face recognition algorithms to improve performance at low resolutions.
IV-C AR Face dataset
We also tested the proposed algorithms on the AR face dataset. The AR face dataset consists of faces with varying illumination and expression conditions, captured in two sessions. We evaluated our algorithms on a set of 100 users. Images from the first session, seven for each subject, were used as training and gallery, and the images from the second session, again seven per subject, were used for testing.
Implementation: To test our method and compare with the existing metric learning based methods, we chose a subset of subjects from the first session as the training set. For the remaining subjects, the experiment was done by choosing one gallery image per subject from the first session and taking the corresponding images from session 2 as probes. The procedure was repeated for all the images in session 1, and the final recognition rate was obtained by averaging over all the runs. The size of the HR images was fixed. The LR images were obtained by smoothing followed by downsampling of the HR images. We also tested the CLPM algorithm and PCA performance on the expanded gallery for a fair comparison. Results from other algorithms are also tabulated.
Observations: Figure 10 shows the CMC curves for the top ranks. Clearly, the proposed approaches outperform the other methods. SLRFR gives better rank-one performance than the CLPM algorithm, while kerSLRFR and jointKerSLRFR further increase the recognition rate across all ranks.
IV-D Outdoor Face Dataset
We also tested our method on a challenging outdoor face dataset. The database consists of face images of individuals at different distances from the camera. We chose a subset of low resolution images, which were also corrupted with blur, illumination and pose variations. High resolution, frontal and well-illuminated images were taken as the gallery set for each subject. The images were aligned using manually selected facial points. The gallery and probe resolutions were fixed. Figure 11 shows some of the gallery images and the low quality probe images. The recognition rates for the dataset are shown in Table III. We compare our method with Regularized Discriminant Analysis (RDA) and CLPM. For the RDA comparison, we first used PCA as a dimensionality reduction method to project the raw data onto an intermediate space, and then used RDA to project the PCA coefficients onto the final feature space.
Observations: It can be seen from the table that SLRFR outperforms the other algorithms on this difficult outdoor face dataset. The kerSLRFR algorithm further improves the performance; however, jointKerSLRFR does not improve it further. This may be because the dataset contains challenging variations other than low resolution, such as pose and blur. The CLPM algorithm performs rather poorly on this dataset, as it is unable to learn these challenging variations.
V Computational Efficiency
All the experiments were conducted using a 2.13 GHz Intel Xeon processor in MATLAB. The gallery extension step using relighting took an average of per gallery image. K-SVD dictionary learning took on average to train each class, while classification of a probe image was done in an average of at the tested probe resolution. Thus, the proposed algorithm is computationally efficient. Further, as the extended gallery can be used for all resolutions, it can be computed once and stored for a given database.
VI Discussion and Conclusion
We have proposed an algorithm that provides good accuracy for low resolution images, even when a single HR gallery image is provided per person. While the method avoids the complexity of previously proposed algorithms, it is also shown to provide state-of-the-art results when the LR probe differs in illumination from the given gallery image. The idea of exploiting the information in the HR gallery image is novel and can be used to extend the limits of remote face recognition. Future work will extend the proposed method to account for other variations such as pose and expression. The present classification using reconstruction error can also be studied further, to explore a mix of discriminative and reconstructive techniques to further improve recognition.
This work was supported by a Multidisciplinary University Research Initiative grant from the Office of Naval Research under the grant N00014-08-1-0638.
-  W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399–458, Dec 2003.
-  S. Shekhar, V. M. Patel, and R. Chellappa, “Synthesis-based recognition of low resolution faces,” in International Joint Conference on Biometrics, Oct 2011, pp. 1–6.
-  S. Baker and T. Kanade, “Limits on super-resolution and how to break them,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1167–1183, Sep 2002.
-  W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Computer Graphics and Applications, vol. 22, pp. 56–65, 2002.
-  H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in IEEE Conference on Computer Vision and Pattern Recognition, 2004.
-  J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition, Jun 2008, pp. 1–8.
-  B. Gunturk, A. Batur, Y. Altunbasak, I. Hayes, M.H., and R. Mersereau, “Eigenface-domain super-resolution for face recognition,” IEEE Transactions on Image Processing, vol. 12, no. 5, pp. 597–606, May 2003.
-  K. Jia and S. Gong, “Multi-modal tensor face for simultaneous super-resolution and recognition,” in IEEE International Conference on Computer Vision, vol. 2, Oct 2005, pp. 1683–1690.
-  P. Hennings-Yeomans, S. Baker, and B. Kumar, “Simultaneous super-resolution and feature extraction for recognition of low-resolution faces,” in IEEE Conference on Computer Vision and Pattern Recognition, Jun 2008, pp. 1–8.
-  W. Zou and P. Yuen, “Very low resolution face recognition problem,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 327–340, Jan 2012.
-  B. Li, H. Chang, S. Shan, and X. Chen, “Low resolution face recognition via coupled locality preserving mappings,” IEEE Signal Processing Letters, vol. 17, no. 1, pp. 20–23, Jan 2010.
-  S. Biswas, K. Bowyer, and P. Flynn, “Multidimensional scaling for matching low-resolution facial images,” in IEEE International Conference on Biometrics: Theory Applications and Systems, Sep 2010, pp. 1–6.
-  S. Biswas, G. Aggarwal, P. J. Flynn, and K. W. Bowyer, “Pose-robust recognition of low-resolution face images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 3037–3049, 2013.
-  S. Biswas, K. W. Bowyer, and P. J. Flynn, “Multidimensional scaling for matching low-resolution face images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, Oct 2012.
-  C.-X. Ren, D.-Q. Dai, and H. Yan, “Coupled kernel embedding for low-resolution face image recognition,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3770–3783, Aug 2012.
-  S. Siena, V. N. Boddeti, and B. V. Kumar, “Coupled marginal Fisher analysis for low-resolution face recognition,” in ECCV 2012: Workshops and Demonstrations. Springer, 2012, pp. 240–249.
-  Z. Lei, S. Liao, A. Jain, and S. Li, “Coupled discriminant analysis for heterogeneous face recognition,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 6, pp. 1707–1716, Dec 2012.
-  O. Arandjelovic and R. Cipolla, “Face recognition from video using the generic shape-illumination manifold,” in European Conference on Computer Vision, 2006, pp. IV: 27–40.
-  K. Hotta, T. Kurita, and T. Mishima, “Scale invariant face detection method using higher-order local autocorrelation features extracted from log-polar image,” in IEEE International Conference on Automatic Face and Gesture Recognition, Apr 1998, pp. 70–75.
-  S.-W. Lee, J. Park, and S.-W. Lee, “Low resolution face recognition based on support vector data description,” Pattern Recognition, vol. 39, no. 9, pp. 1809–1812, 2006.
-  G. Medioni, J. Choi, C.-H. Kuo, A. Choudhury, L. Zhang, and D. Fidaleo, “Non-cooperative persons identification at a distance with 3D face modeling,” in IEEE International Conference on Biometrics: Theory, Applications, and Systems, Sep 2007, pp. 1–6.
-  H. Rara, S. Elhabian, A. Ali, M. Miller, T. Starr, and A. Farag, “Distant face recognition based on sparse-stereo reconstruction,” in IEEE International Conference on Image Processing, Nov 2009, pp. 4141–4144.
-  J.-Y. Choi, Y.-M. Ro, and K. Plataniotis, “Color face recognition for degraded face images,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 5, pp. 1217–1230, Oct 2009.
-  J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, Feb 2009.
-  V. M. Patel, Y.-C. Chen, R. Chellappa, and P. J. Phillips, “Dictionaries for image and video-based face recognition,” J. Opt. Soc. Am. A, vol. 31, no. 5, pp. 1090–1103, May 2014.
-  Y. Jia, M. Salzmann, and T. Darrell, “Factorized latent spaces with structured sparsity.” in Neural Information Processing Systems, 2010, pp. 982–990.
-  V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1063–1074, 2003.
-  M. J. Brooks and B. K. P. Horn, Shape from Shading. Cambridge, MA: MIT Press, 1989.
-  S. Biswas, G. Aggarwal, and R. Chellappa, “Robust estimation of albedo for illumination-invariant matching and shape recovery,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 2, pp. 884–899, Mar 2009.
-  K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 684–698, 2005.
-  M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, Nov 2006.
-  H. Van Nguyen, V. Patel, N. Nasrabadi, and R. Chellappa, “Design of non-linear kernel dictionaries for object recognition,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5123–5135, Dec 2013.
-  P. Phillips, P. Flynn, W. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, “Overview of the face recognition grand challenge,” in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 947–954.
-  T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression database,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1615–1618, Dec 2003.
-  A. M. Martinez and R. Benavente, “The AR face database,” Tech. Rep., 1998.
-  J. Friedman, “Regularized discriminant analysis,” Journal of the American Statistical Association, vol. 84, pp. 165–175, 1989.
Appendix A
Here, we describe the kernel dictionary learning algorithm and the framework for the proposed joint kernel dictionary learning algorithm (jointKerKSVD).
A-A Kernel Dictionary Learning
Hence, the sparse coding step can be broken up into separate sub-problems, one per training sample:
We can solve this using kernel orthogonal matching pursuit (KOMP). Let $\mathcal{I}_t$ denote the set of selected atoms at iteration $t$, $\mathbf{z}_t$ the reconstruction of the signal using the selected atoms, $\mathbf{r}_t$ the corresponding residue, and $\mathbf{x}_t$ the sparse code at iteration $t$.
1. Start with $t = 0$, an empty set $\mathcal{I}_0$, and the initial residue.
2. Calculate the residue.
3. Project the residue onto the atoms not yet selected, and add the atom with the maximum projection value to the set.
4. Update the set.
5. Update the sparse code, $\mathbf{x}_t$, and the reconstruction, $\mathbf{z}_t$.
6. Repeat steps 2–5, $T$ times.
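The steps above can be written entirely in terms of kernel evaluations. The sketch below is our illustration (variable names are ours): it codes a signal $z$ against a kernel dictionary $\Phi(Y)\mathbf{A}$, given only the vector $k(Y, z)$ and the Gram matrix $\mathbf{K} = k(Y, Y)$:

```python
import numpy as np

def komp(k_Yz, K, A, T):
    """Kernel OMP sketch: sparse-code a signal z against the kernel dictionary
    D = Phi(Y) A using only kernel values.  k_Yz = k(Y, z), K = k(Y, Y)."""
    idx, x = [], np.zeros(A.shape[1])
    for _ in range(T):
        # correlations <d_j, residue> = A_j^T (k(Y,z) - K A x)
        corr = A.T @ (k_Yz - K @ (A @ x))
        corr[idx] = 0.0                              # ignore already-selected atoms
        idx.append(int(np.argmax(np.abs(corr))))
        # least squares on the selected atoms (Gram of atoms in feature space)
        G = A[:, idx].T @ K @ A[:, idx]
        b = A[:, idx].T @ k_Yz
        x[:] = 0.0
        x[idx] = np.linalg.solve(G + 1e-10 * np.eye(len(idx)), b)
    return x
```

As a sanity check, with a linear kernel and identity coefficient matrix this reduces to plain OMP on the original samples.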
Dictionary update: Once the sparse codes are calculated, the dictionary can be updated as:
The dictionary atoms are now normalized to unit norm in feature space:
A-B Joint Kernel Dictionary Learning
The optimization problem (14) can be solved in a similar way to the kernel dictionary learning problem, in two alternating steps:
Sparse coding: Here, we keep the dictionaries $\mathbf{A}_H$ and $\mathbf{A}_L$ fixed and learn the joint sparse code $\mathbf{X}$. The optimization problem (15) can be written as:
Thus, the optimization can be broken up into sub-problems:
This is similar to the original kernel dictionary learning formulation, with the signal replaced by its joint HR–LR counterpart. Thus, the above problem can be solved using a similar procedure to KOMP. Let $\mathcal{I}_t$ denote the set of selected atoms at iteration $t$, $\mathbf{z}_t$ the reconstruction of the signal using the selected atoms, $\mathbf{r}_t$ the corresponding residue, and $\mathbf{x}_t$ the sparse code at iteration $t$.
1. Start with $t = 0$, an empty set $\mathcal{I}_0$, and the initial residue.
2. Calculate the residue.
3. Project the residue onto the atoms not yet selected, and add the atom with the maximum projection value to the set.
4. Update the set.
5. Update the sparse code, $\mathbf{x}_t$, and the reconstruction, $\mathbf{z}_t$.
6. Repeat steps 2–5, $T$ times.
Dictionary update: The dictionaries $\mathbf{A}_H$ and $\mathbf{A}_L$ can now be obtained as:
Further, the dictionary atoms are normalized to unit norm in the feature space: