Deep Joint Face Hallucination and Recognition

11/24/2016 · by Junyu Wu, et al. · Tencent QQ, Sun Yat-sen University

Deep models have achieved impressive performance for face hallucination tasks. However, we observe that directly feeding hallucinated facial images into recognition models can even degrade recognition performance despite the much better visualization quality. In this paper, we address this problem by jointly learning a deep model for two tasks, i.e. face hallucination and recognition. In particular, we design an end-to-end deep convolutional network with a hallucination sub-network cascaded by a recognition sub-network. The recognition sub-network is responsible for producing discriminative feature representations, taking as inputs the hallucinated images generated by the hallucination sub-network. During training, we feed LR facial images into the network and optimize the parameters by minimizing two loss items: 1) a face hallucination loss, measured by the pixel-wise difference between the ground truth HR images and the network-generated images; and 2) a verification loss, measured by the classification error and the intra-class distance. We extensively evaluate our method on the LFW and YTF datasets. The experimental results show that our method achieves recognition accuracy of 97.95% on the down-sampled LFW testing set, outperforming the 96.35% accuracy of the baseline model. On the more challenging YTF dataset, we achieve recognition accuracy of 90.65%, outperforming the baseline face recognition model on the 4x down-sampled version.




1 Introduction

Face hallucination and recognition are critical components for many applications, e.g. law enforcement and video surveillance. Face hallucination aims at producing HR (high-resolution) facial images from LR (low-resolution) images [12]. Face recognition aims at verifying whether two facial images are from the same identity by designing discriminative features and similarities [21]. Empirical studies [14] in face recognition showed that a minimum face resolution is required for stand-alone recognition algorithms. [20] reported a significant performance drop when the image resolution is decreased below this range. It is natural to expect that hallucinated face images can improve the recognition performance for LR facial images. Unfortunately, we find that this expectation does not hold in a lot of cases. As an example, Figure 1 shows typical LR versions of LFW [8] images and their hallucinated counterparts generated by SRCNN [4]. We can clearly see that the hallucinated versions have much better details and sharpness. However, feeding the hallucinated versions to a state-of-the-art recognition model can even degrade the recognition performance compared with the LR versions.

Figure 1: Images generated by SRCNN.


A similar conclusion has been reported in the literature: SR algorithms may perform poorly on recognition tasks since they focus more on visual enhancement than on classification accuracy. Considering that the SR model and the recognition model are trained separately, this phenomenon is not hard to explain, as each model receives no signals or feedback from the other during training. Thus we propose a novel method to jointly optimize these two models under a unified convolutional neural network. Our Joint Model is based on an end-to-end CNN which can be seen as composed of two sub-networks, i.e. a hallucination sub-network followed by a recognition sub-network. During testing, one LR image is fed to the end-to-end network so that the hallucination sub-network produces a hallucinated facial image (as intermediate feature maps). This hallucinated image is then fed to the recognition sub-network to generate a representation vector for recognition. In order to jointly solve these two tasks, LR face images are provided with their HR versions as well as their identities in the training stage. With these enriched training samples, we introduce two loss items to learn the parameters, i.e. a hallucination loss and a recognition loss. The hallucination loss is defined as the squared difference between the generated image and the ground truth HR image. The recognition loss follows the recently published literature [24] and is defined as the weighted sum of the classification error and the intra-class distance (the distance between each sample and its center in the feature space). Intuitively, the classification error separates different classes as far as possible while the intra-class distance shrinks the samples of one class.
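The two items of the recognition loss can be sketched per sample in plain Python; `lam` is an assumed trade-off weight, not a value taken from the paper:

```python
import math

def softmax_loss(logits, label):
    # classification error term: pushes different classes apart
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[label]

def center_loss(feat, center):
    # intra-class term: half squared distance of a feature to its class center
    return 0.5 * sum((f - c) ** 2 for f, c in zip(feat, center))

def recognition_loss(logits, label, feat, center, lam=0.008):
    # weighted sum of the two items for one sample
    return softmax_loss(logits, label) + lam * center_loss(feat, center)
```

With `lam = 0`, the recognition loss reduces to plain softmax cross-entropy; larger `lam` pulls features of one identity toward a shared center.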

To the best of our knowledge, there are few works studying the joint learning of hallucination and recognition for face images. The most similar work to ours is proposed by Z. Wang et al. [23]. In that work, the authors first train an SR network. Then two fully-connected layers are stacked on this pretrained SR network to learn a classification model. During the learning of this classification model, the super-resolution loss is no longer applied, i.e. the SR module only acts as pretraining rather than joint supervision. In contrast to this work, we focus on the face domain and extensively study the joint effect of SR and recognition using state-of-the-art network architectures to rigorously evaluate the improvements brought by the Joint Model.

We extensively evaluate our method on public datasets, i.e. LFW and YTF. We obtain a set of models for thorough comparison to demonstrate the effect of the Joint Model. Our experimental results show that the Joint Model outperforms the independently trained models by a clear margin on LFW.

In summary, our contributions are mainly twofold:

  • A joint end-to-end model which simultaneously solves the hallucination and recognition tasks.

  • Extensive reports of hallucination and recognition performance on facial datasets.

2 Related Work

The work related to our method can be roughly divided into 3 groups as follows.

2.1 Face Recognition

Shallow models, e.g. Eigenface [21], Fisherface [2], Gabor-based LDA [13], and LBP-based LDA [11], usually rely on handcrafted features and are evaluated on early datasets collected in controlled environments. Recently, a set of deep face models have been proposed and greatly advanced the progress [18, 16, 28, 15]. DeepID [16] uses a set of small networks, with each network observing a patch of the face region for recognition. FaceNet [15] is another deep face model proposed recently, which is trained by relative distance constraints with one large network. Using a huge dataset, FaceNet achieves 99.6% recognition rate on LFW. [25] proposed a loss function (called center loss) to minimize the intra-class distances of the deep features, and achieved 99.2% recognition rate on LFW using web-collected training data.

2.2 Super Resolution and Face Hallucination

A category of state-of-the-art SR approaches [6, 3, 27] learn a mapping between LR / HR patches. There have also been studies applying deep learning techniques to SR [4, 10]. SRCNN [4] is a representative state-of-the-art deep-learning-based SR approach, which directly models HR images with 3 layers: patch extraction / representation, non-linear mapping, and reconstruction. [10] proposed a Very Deep Super-Resolution convolutional network, modeling high-frequency information with a 20-layer network.
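As a toy analogue of this three-stage pipeline, the sketch below chains three "valid" 1-D convolutions with ReLU activations between them; the kernels `k1`, `k2`, `k3` are hypothetical stand-ins for SRCNN's learned 2-D filters:

```python
def conv1d(x, kernel, bias=0.0):
    # "valid" 1-D convolution of a signal with a single kernel
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(x) - k + 1)]

def srcnn_1d(x, k1, k2, k3):
    # the three SRCNN stages: patch extraction / representation,
    # non-linear mapping, and (linear) reconstruction
    h1 = [max(0.0, v) for v in conv1d(x, k1)]   # extraction + ReLU
    h2 = [max(0.0, v) for v in conv1d(h1, k2)]  # mapping + ReLU
    return conv1d(h2, k3)                        # reconstruction
```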

Conventional hallucination methods [1, 22] are often designed for controlled settings and cannot handle varying conditions. Deep models have also been applied to face hallucination tasks [31, 32]. [31] proposed a Bi-channel Convolutional Neural Network, which extracts robust face representations from the raw input using a deep convolutional network, then adaptively integrates 2 channels of information to predict the HR image.

2.3 Low Resolution Face Recognition

Low-resolution face recognition (LR FR) aims to recognize faces from small-size or poor-quality images with varying pose, illumination, expression, etc. [33] reported a degradation of the recognition performance when face regions became small. [23] proposed Partially Coupled Super-Resolution Networks (PCSRN) as the pre-training part of a recognition model.

3 Joint Model

We use one end-to-end network to jointly solve face hallucination and recognition. Figure 2 illustrates the overall principle. This network consists of two parts, i.e. face hallucination layers and recognition layers, abbreviated as SRNET and FRNET respectively for convenience. In the testing stage, the hallucination layers produce a high-resolution facial image from a low-resolution input. The recognition layers then generate the face representation used for the recognition task, taking this hallucinated image as input. As these two parts are cascaded, both steps are executed by one forward propagation, i.e. in an end-to-end fashion.
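The cascade described above can be sketched as a single forward call; `srnet` and `frnet` are hypothetical callables standing in for the two sub-networks:

```python
def joint_forward(lr_image, srnet, frnet):
    # one end-to-end forward pass: the hallucination layers (SRNET) produce
    # an HR image, which feeds the recognition layers (FRNET) directly
    hallucinated = srnet(lr_image)
    representation = frnet(hallucinated)
    return hallucinated, representation
```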

Figure 2: Illustration of the Joint Model.

An intuitive approach to implementing this end-to-end network is to cascade one well-trained SRNET and one FRNET. However, as aforementioned, such direct cascading can even degrade the overall recognition performance, despite the output of the SRNET having better visualization quality and PSNR, since the well-trained FRNET has never seen samples generated by SRNET.

In order to address this problem, we propose to jointly optimize these two networks so that each network can benefit from the other. Figure 2 illustrates the overall principle. Given a set of low-resolution facial images $\{I_i^L\}$ with their high-resolution versions $\{I_i^H\}$ and identity labels $\{y_i\}$, the end-to-end model produces predicted high-resolution facial images $\hat{I}_i^H$ by SRNET and feature vectors $x_i$ by FRNET. This end-to-end network is jointly optimized so that $\hat{I}_i^H$ is as close as possible to $I_i^H$ and $x_i$ is able to separate different identities in the feature space. These two constraints can be formulated as two loss items $L_{SR}$ and $L_{FR}$ in the overall objective function as follows, where $\theta_S$ denotes the parameter set of SRNET, $\theta_F$ denotes the parameter set of FRNET, and $\lambda$ controls the weight of the two items:

$$L(\theta_S, \theta_F) = L_{SR}(\theta_S) + \lambda L_{FR}(\theta_S, \theta_F)$$
Note $L_{FR}$ depends on both parameter sets $\theta_S$ and $\theta_F$, as FRNET uses the outputs of SRNET as inputs. For the loss item $L_{SR}$, we use the pixel-wise difference between $\hat{I}_i^H$ and $I_i^H$:

$$L_{SR}(\theta_S) = \frac{1}{2} \sum_{i=1}^{N} \left\| \hat{I}_i^H - I_i^H \right\|_2^2$$
For the recognition loss, we want to obtain representations that can discriminate different identities in the feature space under some similarity measure. We follow the recently published method, named center loss, to model this constraint. In particular, this loss includes two items, i.e. the classification error and the center loss, which is defined as the mean intra-class distance between the samples and their centers. We use $W_j$ to denote the $j$-th column of the softmax weight matrix and $b_j$ for the bias terms; then the classification loss $L_S$ can be defined as below, where $N$ is the number of training samples:

$$L_S = -\sum_{i=1}^{N} \log \frac{e^{W_{y_i}^T x_i + b_{y_i}}}{\sum_{j} e^{W_j^T x_i + b_j}}$$
By using $c_{y_i}$ to represent the center of class $y_i$ in the feature space, the center loss $L_C$ is then defined as:

$$L_C = \frac{1}{2} \sum_{i=1}^{N} \left\| x_i - c_{y_i} \right\|_2^2$$
In order to balance the softmax loss $L_S$ and the center loss $L_C$, we introduce weight parameters $\alpha$ and $\beta$ and define the overall loss function as:

$$L = L_{SR} + \alpha L_S + \beta L_C$$
In the next section, we give a method to solve this model in an end-to-end fashion.
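As a minimal sketch of the overall objective just described, the function below combines the three items for one sample, with images and features as flattened lists; the `alpha` and `beta` defaults are assumptions for illustration, not values from the paper:

```python
import math

def total_loss(pred_hr, gt_hr, logits, label, feat, center,
               alpha=1.0, beta=0.008):
    # hallucination loss: half the pixel-wise squared difference
    l_sr = 0.5 * sum((p - g) ** 2 for p, g in zip(pred_hr, gt_hr))
    # softmax (classification) loss for one sample
    m = max(logits)
    l_s = m + math.log(sum(math.exp(v - m) for v in logits)) - logits[label]
    # center loss: half squared distance to the class center
    l_c = 0.5 * sum((f - c) ** 2 for f, c in zip(feat, center))
    return l_sr + alpha * l_s + beta * l_c
```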

3.1 Optimization

In this section, we show how to jointly solve our end-to-end model. As the softmax and center loss are introduced, we use $W$ and $\{c_j\}$ to denote the softmax parameter set and the center vectors of the classes, and give a parameterized version of the loss function to show the dependency of the different items on the parameter sets:

$$L(\theta_S, \theta_F, W, \{c_j\}) = L_{SR}(\theta_S) + \alpha L_S(\theta_S, \theta_F, W) + \beta L_C(\theta_S, \theta_F, \{c_j\})$$
Due to the non-convexity of the loss function, we apply the gradient descent algorithm to find a local minimum, i.e. we iteratively calculate the gradient and update the parameters by this gradient with a learning rate. Note the update of the centers $c_j$ is replaced by an approximate mechanism as adopted in literature [24] rather than the gradient method.

Gradient with respect to $\theta_F$ (FRNET): This gradient is relatively simple and can be obtained by running the standard back propagation algorithm after we calculate $\partial L / \partial x_i$, as the following chain rule holds:

$$\frac{\partial L}{\partial \theta_F} = \sum_{i} \frac{\partial L}{\partial x_i} \frac{\partial x_i}{\partial \theta_F}$$
Actually, $\partial L / \partial x_i$ involves two terms according to the definition, as below:

$$\frac{\partial L}{\partial x_i} = \alpha \frac{\partial L_S}{\partial x_i} + \beta \frac{\partial L_C}{\partial x_i}$$
The first term is rather simple according to the definition of $L_S$. However, the second term is a little more complicated, as $L_C$ depends on the class center $c_{y_i}$, which further depends on $x_i$. In order to simplify the optimization algorithm, we follow the approach in literature [24], i.e. fixing the center during the calculation of the gradient. This simplification gives us:

$$\frac{\partial L_C}{\partial x_i} = x_i - c_{y_i}$$
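This simplified gradient is easy to verify numerically; a minimal sketch with the center held fixed:

```python
def center_loss(feat, center):
    # 0.5 * ||x - c||^2 for one sample
    return 0.5 * sum((f - c) ** 2 for f, c in zip(feat, center))

def center_loss_grad(feat, center):
    # with the center held fixed, the gradient w.r.t. the feature is x - c
    return [f - c for f, c in zip(feat, center)]
```
A finite-difference check of `center_loss` around a point recovers the same value as `center_loss_grad`.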
Gradient with respect to $\theta_S$ (SRNET): For the parameter set $\theta_S$, we give the chain rule below, considering that the hallucination loss is applied to the intermediate feature map $\hat{I}_i^H$:

$$\frac{\partial L}{\partial \theta_S} = \sum_{i} \frac{\partial L}{\partial \hat{I}_i^H} \frac{\partial \hat{I}_i^H}{\partial \theta_S}$$
This shows we can run back propagation to get the gradient with respect to the parameter set $\theta_S$ once we correctly set the partial derivative of the loss function with respect to $\hat{I}_i^H$. Expanding $\partial L / \partial \hat{I}_i^H$, we get:

$$\frac{\partial L}{\partial \hat{I}_i^H} = \frac{\partial L_{SR}}{\partial \hat{I}_i^H} + \alpha \frac{\partial L_S}{\partial \hat{I}_i^H} + \beta \frac{\partial L_C}{\partial \hat{I}_i^H}$$
According to the definition, $\partial L_{SR} / \partial \hat{I}_i^H$ is quite simple:

$$\frac{\partial L_{SR}}{\partial \hat{I}_i^H} = \hat{I}_i^H - I_i^H$$
The remaining part, i.e. $\alpha\, \partial L_S / \partial \hat{I}_i^H + \beta\, \partial L_C / \partial \hat{I}_i^H$, cannot be expressed analytically, as $L_S$ and $L_C$ are not directly defined on $\hat{I}_i^H$; however, it is simply the result of back propagation through the recognition layers.
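Accordingly, the full derivative at the intermediate image is the analytic hallucination term plus whatever the recognition layers back-propagate; a minimal sketch on flattened images:

```python
def grad_wrt_intermediate(pred_hr, gt_hr, recog_grad):
    # gradient flowing into SRNET's output: the analytic term (pred - gt)
    # plus the gradient back-propagated through the recognition layers
    return [(p - g) + r for p, g, r in zip(pred_hr, gt_hr, recog_grad)]
```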

Gradient with respect to $W$: This can be directly calculated according to the definition of the softmax loss $L_S$.

Center update: In deriving the gradient with respect to the output feature $x_i$, we assumed the center is fixed. However, during training, $x_i$ will inevitably change, which requires updating the centers $c_j$ accordingly. We strictly follow the mechanism adopted in literature [24], updating the centers with a learning rate $\gamma$, as it has been proven very effective:

$$c_j^{t+1} = c_j^t - \gamma \cdot \Delta c_j$$

where $\Delta c_j$ is defined as:

$$\Delta c_j = \frac{\sum_{i=1}^{N} \delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{N} \delta(y_i = j)}$$
With these gradients, we can run the gradient descent algorithm iteratively to find a local minimum. We summarize the optimization procedure in Algorithm 1:
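The center update mechanism above can be sketched in batch form as follows; classes with no samples in the batch keep their center unchanged, because the numerator is zero while the denominator is one:

```python
def update_centers(centers, feats, labels, gamma=0.5):
    # delta_c_j = sum_i 1[y_i = j] * (c_j - x_i) / (1 + sum_i 1[y_i = j]),
    # then c_j <- c_j - gamma * delta_c_j  (mechanism of [24])
    new_centers = []
    for j, c in enumerate(centers):
        members = [x for x, y in zip(feats, labels) if y == j]
        n = len(members)
        delta = [(n * cd - sum(x[d] for x in members)) / (1 + n)
                 for d, cd in enumerate(c)]
        new_centers.append([cd - gamma * dd for cd, dd in zip(c, delta)])
    return new_centers
```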

Training samples $\{I_i^L, I_i^H, y_i\}$;
Model parameter set $\theta_S$, $\theta_F$, $W$, $\{c_j\}$
1:  while not converged do
2:     t = t + 1;
3:     calculate the partial derivative $\partial L_S / \partial W$;
4:     update the softmax parameters $W$ by gradient descent;
5:     calculate the partial derivative $\partial L / \partial x_i$;
6:     execute back propagation from the top layer to the bottom layer of FRNET to obtain $\partial L / \partial \theta_F$ and $\partial L_{FR} / \partial \hat{I}_i^H$;
7:     calculate the partial derivative $\partial L_{SR} / \partial \hat{I}_i^H$;
8:     add $\partial L_{SR} / \partial \hat{I}_i^H$ to the derivative obtained in step 6;
9:     execute back propagation from the top layer to the bottom layer of SRNET to obtain $\partial L / \partial \theta_S$;
10:     update the parameters $\theta_S$ and $\theta_F$ by gradient descent;
11:     calculate $\Delta c_j$;
12:     update the centers by $c_j^{t+1} = c_j^t - \gamma \Delta c_j$;
13:  end while
Algorithm 1 Joint Optimization Algorithm
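As a sanity check of the joint scheme, the toy sketch below shrinks both networks to scalar maps (SRNET as $s(x) = w_s x$, FRNET as $f(i) = w_f i$), holds the center fixed, and runs plain gradient descent on the combined loss; all constants are illustrative, not values from the paper:

```python
def train_joint(x_lr, x_hr, center, beta=0.5, lr=0.1, steps=200):
    # toy scalar stand-ins for SRNET (w_s) and FRNET (w_f)
    w_s, w_f = 0.5, 0.5
    for _ in range(steps):
        hr = w_s * x_lr                  # hallucination forward
        feat = w_f * hr                  # recognition forward
        # gradient at the intermediate image: hallucination term plus the
        # term back-propagated through the "recognition" map (steps 7-8)
        d_hr = (hr - x_hr) + beta * (feat - center) * w_f
        grad_ws = d_hr * x_lr            # back-prop through "SRNET"
        grad_wf = beta * (feat - center) * hr
        w_s -= lr * grad_ws
        w_f -= lr * grad_wf
    hr = w_s * x_lr
    feat = w_f * hr
    # final combined loss
    return 0.5 * (hr - x_hr) ** 2 + beta * 0.5 * (feat - center) ** 2
```

Run on a single scalar "sample", the combined loss drops close to zero, illustrating that both stand-in networks adapt to each other under the shared objective.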

4 Experiments

In this section, we present the experimental results of our model. We first describe the experimental setting, including the data preparation, network architecture and evaluation protocol. Then we report the performance of our models under different settings. We also compare the performance of our SRNET with other state-of-the-art methods.

Layer type
Kernel size
Table 1: SRNET architecture details.
Layer type
Kernel size
Max Pooling
Max Pooling
Max Pooling
Max Pooling
Inner product - -
Table 2: FRNET architecture details.
Method | Training images
DeepFace [19] | 4M
DeepID-2+ [17] | -
FaceNet [15] | 200M
Center loss [24] | 0.7M
Table 3: Verification performance of different methods on LFW and YTF datasets.
Setting | Training data | Testing data
3 | HR | Hallucinated
5 | Hallucinated | Hallucinated
Table 4: Accuracies and TPs in different settings.
Method | Training data | PSNR (dB)
Bicubic | - | 30.08
SRCNN | CASIA-WebFaces | 31.70
Stand-alone SRNET | CASIA-WebFaces | 31.70
Joint Model SRNET | CASIA-WebFaces | 31.71
Table 5: PSNR of different methods super-resolving LR-LFW.

4.1 Experimental Setting

Data Preparation We use 3 datasets in our experiments: CASIA-WebFace [28], LFW [8], and YTF [26]. LR-CASIA, LR-LFW and LR-YTF are down-sampled versions of CASIA-WebFace, LFW and YTF by a factor of 4. All the face images are aligned with 5 landmarks (two eyes, nose and mouth corners) detected with the algorithm of [30] for similarity transformation. The faces are cropped to RGB images. Each pixel in the RGB images is normalized by subtracting 127.5 and then dividing by 128. The only data augmentation we use is horizontal flipping.
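The per-pixel normalization and flip augmentation described above can be sketched as:

```python
def normalize_pixel(v):
    # each RGB value: subtract 127.5 then divide by 128 (Section 4.1)
    return (v - 127.5) / 128.0

def hflip(img):
    # horizontal flip: reverse each row (the only augmentation used)
    return [row[::-1] for row in img]
```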

Network Architecture This network consists of two parts: SRNET to hallucinate LR inputs and FRNET to extract deep discriminative features from the hallucinated images. Details of SRNET and FRNET are given in Table 1 and Table 2. The notation follows the convention of [7].

Evaluation Protocol We report our results on 3 metrics: 1) verification accuracy on LR-LFW and LR-YTF, 2) true positive rate at low false positive rate (TP for short), and 3) average PSNR gains on LR-LFW.
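For reference, PSNR between a pair of images can be computed as below, with images given as flattened pixel lists; `peak` assumes 8-bit pixels:

```python
import math

def psnr(img_a, img_b, peak=255.0):
    # peak signal-to-noise ratio in dB between two equally-sized images
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```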

Implementation Details We implement SRNET and FRNET using the Caffe [9] library with our modifications. We extract the deep features by concatenating the outputs of the first fully-connected layer of FRNET for each image and its horizontal flip. The verification task is done on the score computed by the cosine distance of two features after PCA. For fair comparison, we train the networks with the same batch size. We choose one learning rate for SRNET and another for FRNET, and divide the learning rates by a fixed factor as training proceeds. The training procedure finishes after a number of epochs, in no more than a few hours on a single TITAN X GPU.
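The scoring step can be sketched as follows; the PCA projection is omitted here for brevity:

```python
import math

def concat_features(feat, feat_flip):
    # deep feature: FRNET's first FC-layer outputs for the image and
    # its horizontal flip, concatenated
    return feat + feat_flip

def cosine_score(a, b):
    # verification score: cosine similarity of two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```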

4.2 Recognition Performance and Comparison

One important goal of our model is to achieve better recognition performance for low resolution facial images. Thus we conduct the experiment using low resolution images for testing and compare with the methods that also use LR images as input.

Setting 1: HR-training and HR-testing In order to show the drop caused by low-resolution images, we first give the recognition performance when trained and tested on normal images, i.e. trained on CASIA-WebFace and tested on LFW and YTF; the verification accuracies and TPs are given in Table 4. We call the network trained on the HR dataset FRNET-HR. We also give a comparison of FRNET with other state-of-the-art models in Table 3.

Setting 2: HR-training and LR-testing The simplest way to run face recognition on a low-resolution image is to directly feed the up-scaled image into a network trained on the normal dataset (in our experiment, trained on HR-CASIA). The resulting accuracies and TPs on LR-LFW and LR-YTF are given in Table 4; we observe a large drop on LFW compared with using HR images as inputs.

Setting 3: HR-training and Hallucinated-testing As SRNET produces hallucinated HR versions, we can also use the hallucinated images generated by SRNET for testing. Thus we first train SRNET using CASIA-WebFace. Using the hallucinated versions of LR-LFW, we find that the hallucinated inputs even degrade the recognition performance compared with directly feeding the up-scaled LR images to the network (see Table 4).

Setting 4: LR-training and LR-testing Another direct means to support LR testing is to train the network with LR-CASIA. We call this trained network FRNET-LR. Its accuracies and TPs on LR-LFW and LR-YTF are given in Table 4; FRNET-LR performs slightly better than FRNET-HR on the LR versions of the testing sets.

Setting 5: Hallucinated-training and Hallucinated-testing In order to directly benefit from the output of SRNET, we can train FRNET on the outputs of SRNET to improve the recognition performance. More precisely, we first train SRNET and generate the hallucinated version of LR-CASIA with it, which is then used to train FRNET. In the testing stage, we obtain the hallucinated versions of LR-LFW and LR-YTF and use them for testing. This setting shows an improvement over the previous settings on LR-LFW, but has a negative impact on performance on LR-YTF. We believe the performance degradation on LR-YTF is caused by video compression artifacts, which prevent SRNET from working properly, whereas on LR-LFW more discriminative features can be learned from the hallucinated face images to help the recognition task.

Setting 6: Joint End-to-end Training and Testing In this setting, we give the recognition performance of our Joint Model. We train the network by taking LR-CASIA images as inputs and CASIA-WebFace images as ground truths, with the weights of the loss items set accordingly. The resulting accuracies and TPs on LR-LFW and LR-YTF (Table 4) show an improvement over setting 5. The results of settings 5 and 6 support our hypothesis that not only can FRNET learn better from hallucinated images containing more discriminative features, but SRNET can also learn to produce images more helpful to the face recognition task.

We give accuracies and TPs under all 6 settings in Table 4.

4.3 SR Performance and Comparison

Our Joint Model not only serves the face recognition purpose, but also generates visually pleasing hallucinated images. We trained an SRCNN from scratch as in [4] and compare it with our models. As shown in Table 5, the Joint Model SRNET slightly outperforms the stand-alone SRNET and SRCNN (trained on CASIA-WebFaces) by 0.01 dB.

5 Conclusion

In this paper, we have proposed a joint multi-task model for LR face recognition and face SR. By joining the SR network to our face recognition network, the power of extracting deep features from LR images is greatly enhanced. Experiments on LR versions of several face benchmarks have convincingly demonstrated the effectiveness of the proposed approach.


  • [1] Simon Baker and Takeo Kanade. Hallucinating faces. In 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2000), 26-30 March 2000, Grenoble, France, pages 83–89, 2000.
  • [2] Peter N Belhumeur, João P Hespanha, and David J Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 19(7):711–720, 1997.
  • [3] Hong Chang, Dit-Yan Yeung, and Yimin Xiong. Super-resolution through neighbor embedding. In CVPR (1), pages 275–282, 2004.
  • [4] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 38(2):295–307, 2016.
  • [5] Lucie Flekova and Iryna Gurevych. Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016.
  • [6] William T. Freeman, Egon C. Pasztor, and Owen T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000.
  • [7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
  • [8] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
  • [9] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [10] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. CoRR, abs/1511.04587, 2015.
  • [11] Stan Z Li, Senior RuFeng Chu, ShengCai Liao, and Lun Zhang. Illumination invariant face recognition using near-infrared images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(4):627–639, 2007.
  • [12] Ce Liu, Heung-Yeung Shum, and William T. Freeman. Face hallucination: Theory and practice. International Journal of Computer Vision, 75(1):115–134, 2007.
  • [13] Chengjun Liu and Harry Wechsler. Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition. Image processing, IEEE Transactions on, 11(4):467–476, 2002.
  • [14] Yui Man Lui, D. Bolme, B.A. Draper, and J. R. Beveridge. A meta-analysis of face recognition covariates. In IEEE International Conference on Biometrics: Theory, Applications, and Systems, pages 1–8, 2009.
  • [15] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 815–823, 2015.
  • [16] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In Advances in Neural Information Processing Systems, pages 1988–1996, 2014.
  • [17] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deeply learned face representations are sparse, selective, and robust. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 2892–2900, 2015.
  • [18] Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lars Wolf. Deepface: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1701–1708. IEEE, 2014.
  • [19] Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 1701–1708, 2014.
  • [20] Antonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell., 30(11):1958–1970, 2008.
  • [21] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of cognitive neuroscience, 3(1):71–86, 1991.
  • [22] Xiaogang Wang and Xiaoou Tang. Hallucinating face by eigentransformation. IEEE Trans. Systems, Man, and Cybernetics, Part C, 35(3):425–434, 2005.
  • [23] Zhangyang Wang, Shiyu Chang, Yingzhen Yang, Ding Liu, and Thomas S. Huang. Studying very low resolution recognition using deep networks. CoRR, abs/1601.04153, 2016.
  • [24] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VII, pages 499–515, 2016.
  • [25] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VII, pages 499–515, 2016.
  • [26] Lior Wolf, Tal Hassner, and Itay Maoz. Face recognition in unconstrained videos with matched background similarity. In The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011, pages 529–534, 2011.
  • [27] Jianchao Yang, John Wright, Thomas S. Huang, and Yi Ma. Image super-resolution as sparse representation of raw image patches. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), 24-26 June 2008, Anchorage, Alaska, USA, 2008.
  • [28] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
  • [29] Jiaqi Zhang, Zhenhua Guo, Xiu Li, and Youbin Chen. Large margin coupled mapping for low resolution face recognition. In PRICAI 2016: Trends in Artificial Intelligence - 14th Pacific Rim International Conference on Artificial Intelligence, Phuket, Thailand, August 22-26, 2016, Proceedings, pages 661–672, 2016.
  • [30] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multi-task cascaded convolutional networks. CoRR, abs/1604.02878, 2016.
  • [31] Erjin Zhou, Haoqiang Fan, Zhimin Cao, Yuning Jiang, and Qi Yin. Learning face hallucination in the wild. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA., pages 3871–3877, 2015.
  • [32] Shizhan Zhu, Sifei Liu, Chen Change Loy, and Xiaoou Tang. Deep cascaded bi-network for face hallucination. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V, pages 614–630, 2016.
  • [33] Wilman W. W. Zou and Pong C. Yuen. Very low resolution face recognition problem. IEEE Trans. Image Processing, 21(1):327–340, 2012.