Deep Face Quality Assessment

by Vishal Agarwal et al.
IIT Guwahati

Face image quality is an important factor in facial recognition systems, since verification and recognition accuracy depend heavily on the quality of the presented image. Rejecting low-quality images can significantly increase the accuracy of any facial recognition system. In this project, a simple approach is presented to train a deep convolutional neural network for end-to-end face image quality assessment. The work is done in two stages: first, generation of quality score labels; second, training a deep convolutional neural network in a supervised manner to predict a quality score between 0 and 1. Quality labels are generated by comparing each face image with a template of best-quality images and evaluating a normalized score based on the similarity.



1 Introduction

The human face is a very dynamic biometric trait compared to largely static ones such as fingerprints. The performance of a facial recognition system depends heavily on the quality of the image it acquires, and the utility of a face image to such a system can be defined by its image quality [rowden]. Low-quality images tend to make any facial recognition system perform worse. Factors that result in false recognition include variations in pose, illumination, occlusion, expression, age, lifestyle, etc. [lit]. Under a controlled acquisition environment (uniform lighting, frontal facial pose, neutral expression, high-resolution imaging), a facial recognition system can achieve a very low False Acceptance Rate (FAR) compared to images taken in the wild [int2, int1]. Before processing a face image for verification or recognition, a quality assessment can be run as a pre-processing step: depending on the score, the system may reject low-quality images and process only qualified images. A critical application for such assessment is Negative Identification Systems, such as security checks at banks or airports, where suspects may deliberately present low-quality images to evade recognition [rowden]. In such cases, the system should flag these users and grant access only after a properly aligned facial image is provided.

The primary goal of this project is to develop an end-to-end system for automatic facial quality assessment. Instead of a hand-engineered feature approach, a data-driven, transfer-learning approach is implemented in this work. The overview of the approach is as follows. First, a database of quality scores is generated as similarity scores by comparing facial images with a gallery of images using Google's FaceNet embeddings [facenet]. To predict the desired quality score, a pre-trained FaceNet model with the Inception v3 architecture and custom added layers is used, which outputs a value between 0 and 1. This model is trained on the generated quality-labelled facial data in a supervised setting.
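As a sketch of the scoring step, the comparison score between a template and a probe is just the Euclidean distance between their embeddings. The 4-d toy vectors below are purely illustrative stand-ins for FaceNet's 128-d outputs:

```python
import numpy as np

def embedding_distance(emb_a, emb_b):
    """Euclidean distance between two face embeddings (smaller = more similar)."""
    return float(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)))

# Toy unit vectors standing in for real FaceNet embeddings.
template = np.array([1.0, 0.0, 0.0, 0.0])
similar_probe = np.array([0.8, 0.6, 0.0, 0.0])      # close to the template
dissimilar_probe = np.array([0.0, 0.0, 1.0, 0.0])   # far from the template

d_similar = embedding_distance(template, similar_probe)
d_dissimilar = embedding_distance(template, dissimilar_probe)
assert d_similar < d_dissimilar  # more similar faces lie closer in embedding space
```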

2 Previous work

Two broad ways exist in the literature to arrive at a quality metric: full-reference and no-reference. Based on this classification, there are mainly two approaches in common use. First, using facial textures and properties such as resolution, pose, and illumination, hand-engineered features and functions are designed to predict an absolute quality index [work1]. Second, the image under consideration is compared to a standard reference image, and some comparison technique yields the desired quality metric [work2, work3]. However, the effectiveness of these methods is limited by the applicability of artificially defined facial properties and empirically selected reference images, and they may not generalize well to different databases or to face images with multiple quality factors present.

Figure 1: Inception v3 architecture

3 Methodology

The project can be divided into two stages: first, prepare the quality score data by computing similarity scores from a face matching technique; second, use the generated data for end-to-end training of a model that predicts a facial image quality score. This trained model is then used for automatic no-reference face image quality assessment. To get better performance on the task, a transfer learning setup is used by initializing with pre-trained FaceNet weights. Transfer learning is a popular method in deep learning where models trained for one task are reused in closely related tasks [tf]. Using pre-trained models boosts performance, as it provides a good starting point and achieves better convergence.

3.1 Quality score label generation

The face image data, labelled by subject, is divided into two sets consisting of template and probe images. The template set contains exactly one image per subject, which serves as the reference for face similarity matching and should be of the best quality. The probe set contains the rest of the images, except those chosen as templates. These serve as the gallery that is compared against the template images to generate the target quality scores.

The target quality score, computed from the similarity score of a face matcher, is highly correlated with automatic recognition performance and serves as an 'oracle' for the quality measure [rowden]. The FaceNet embedding proposed by Schroff et al. [facenet] has been used to generate face embeddings in a Euclidean space. The simple Euclidean distance between the embeddings gives the similarity measure between two faces. For each subject, the template image is compared with the probe images by evaluating the comparison score, which is the distance between the two embeddings. The comparison score is assigned to the probe image under the assumption that the template is of good quality and the probe set consists of lower-quality images. As defined in [meth1, meth2, meth3], the target quality value is a measure of separation between a sample's genuine score and its impostor distribution when it is compared with a gallery of template images. Therefore, the normalized comparison score for the jth probe image of the ith subject is given by
$$q_{ij} \;=\; \frac{s^{G}_{ij} - \mu^{I}_{ij}}{\sigma^{I}_{ij}},$$

where $s^{G}_{ij}$ is the genuine comparison score of the $j$th probe image of subject $i$, and $\mu^{I}_{ij}$, $\sigma^{I}_{ij}$ are the mean and standard deviation of its impostor scores against the template gallery; the resulting values are rescaled to $[0, 1]$.
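A minimal sketch of this label generation, under the assumption that the separation is taken as a z-score of the genuine score against the impostor distribution and then min-max rescaled across the probe set (the exact normalization is not spelled out in the text); the similarity scores below are invented for illustration:

```python
import numpy as np

def separation(genuine_score, impostor_scores):
    """Separation of a probe's genuine score from its impostor distribution
    (higher = better quality), before rescaling across the probe set."""
    imp = np.asarray(impostor_scores, dtype=float)
    return (genuine_score - imp.mean()) / imp.std()

def normalize(values):
    """Min-max rescale raw separations to quality labels in [0, 1]."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical similarity scores for three probe images of one subject,
# all compared against the same impostor templates.
impostors = [0.1, 0.2, 0.3]
raw = [separation(0.9, impostors),
       separation(0.5, impostors),
       separation(0.3, impostors)]
labels = normalize(raw)
assert labels[0] == 1.0 and labels[2] == 0.0  # best and worst probes
```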
3.2 End-to-End Model

FaceNet is a simple model that learns a mapping from facial images into a Euclidean space. The idea behind the model is that faces of the same person should lie close together and faces of distinct persons should be far apart. Based on this principle, its authors proposed a triplet loss function that helps learn robust embeddings. The FaceNet model simplifies tasks related to facial recognition systems: face verification can be done by simply thresholding the distance between embeddings, face recognition by k-nearest neighbours, and face clustering by k-means.
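The verification-by-thresholding and nearest-neighbour recognition described above can be sketched as follows; the threshold of 1.1 and the 2-d embeddings are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def verify(emb_a, emb_b, threshold=1.1):
    """Face verification: same identity iff embedding distance is below a threshold."""
    return np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)) < threshold

def identify(probe, gallery_embs, gallery_ids):
    """Face recognition via the nearest neighbour in embedding space."""
    dists = [np.linalg.norm(np.asarray(probe) - np.asarray(g)) for g in gallery_embs]
    return gallery_ids[int(np.argmin(dists))]

# Hypothetical gallery of two enrolled identities.
gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ids = ["alice", "bob"]
probe = np.array([0.9, 0.1])

assert identify(probe, gallery, ids) == "alice"
assert verify(probe, gallery[0])       # close pair accepted
assert not verify(probe, gallery[1])   # distant pair rejected
```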


The pre-trained FaceNet model based on the Inception v3 architecture is used for the task. Since the model outputs a 128-dimensional embedding vector, a 1-dimensional custom layer is added at the end, with a sigmoid activation function so that the output value lies between 0 and 1. The normalized comparison score data serves as the target quality value for training this model, which is then used for end-to-end face quality assessment. The quality score obtained can be used in facial recognition systems to set a threshold according to the sensitivity of the application and the minimal quality requirement for verification.
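The added head is essentially an affine map followed by a sigmoid. A minimal numpy sketch, with random untrained weights standing in for the learned ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_head(embedding, w, b):
    """Map a 128-d embedding to a scalar quality score in (0, 1)."""
    return float(sigmoid(np.dot(embedding, w) + b))

rng = np.random.default_rng(0)
emb = rng.standard_normal(128)        # stand-in for a FaceNet output
w, b = rng.standard_normal(128), 0.0  # untrained head parameters
score = quality_head(emb, w, b)
assert 0.0 < score < 1.0  # sigmoid guarantees a valid quality score
```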

Figure 2: t-SNE of FaceNet embeddings

4 Experiment

4.1 Dataset

The Labeled Faces in the Wild (LFW) database [lfw] has been used for this project. The dataset contains 13233 images of 5749 subjects. For each of the 1680 subjects with two or more images, one image is placed in the template gallery and the remaining 7484 images are used as probe images. The remaining 4069 subjects have just a single image each, so these images are used to extend the template gallery. Quality scores are then generated for the 7484 probe images, which are used to train the model in a supervised setting. The model is evaluated on the Georgia Tech Face Database [gtdb], which contains 50 subjects with 15 images per subject.
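The split arithmetic above can be checked directly:

```python
total_images, total_subjects = 13233, 5749
multi_image_subjects = 1680                                   # subjects with >= 2 images
single_image_subjects = total_subjects - multi_image_subjects
template_size = multi_image_subjects + single_image_subjects  # one image per subject
probe_size = total_images - template_size                     # everything else
assert single_image_subjects == 4069
assert probe_size == 7484
```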

Figure 3: Face quality assessment score on a sample subject

4.2 Setup

The model takes 160x160 RGB colour images as input. The images are aligned, cropped and resized using AlignDlib to meet the required size. The probe data with labelled quality scores is split 70-30 into training and test sets. The loss criterion used for the task is log mean squared error. For optimization, Stochastic Gradient Descent (SGD) is used with a learning rate of 0.001, momentum of 0.99, and weight decay. A mini-batch size of 64 is used and training is iterated over 30 epochs. The experiments were done on an NVIDIA GTX 1080 GPU with 8 GB memory.
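The SGD update with the stated hyperparameters can be sketched as below. The weight-decay value is left as a parameter, since it is not given in the text, and the quadratic objective is only a stand-in for the actual training loss:

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.001, momentum=0.99, weight_decay=0.0):
    """One SGD update with momentum and (optional) L2 weight decay."""
    grad = grad + weight_decay * w
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimise f(w) = ||w||^2 / 2 (gradient: w) as a stand-in for the real loss.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(2000):
    w, v = sgd_step(w, w, v)
assert np.linalg.norm(w) < 0.1  # iterates shrink towards the minimiser at 0
```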


4.3 Results

The model is evaluated on the Georgia Tech Face Database containing 50 subjects with 15 images each. Fig. 2 shows a 2-dimensional t-Distributed Stochastic Neighbor Embedding (t-SNE) visualization of the 128-dimensional FaceNet embeddings for seven subjects. Each colour corresponds to a subject; facial images of the same subject are well clustered together, and the different clusters are well separated and distinguishable. The metrics used to evaluate the model are the False Acceptance Rate (FAR) and the False Rejection Rate (FRR).

Let $s_{ij}$ denote the score for a comparison between images $i$ and $j$, and let $t$ be the acceptance threshold. The set of false accepts (FA), or false positives, can be defined as

$$\mathrm{FA}(t) = \{(i,j) : s_{ij} \geq t,\ \mathrm{id}(i) \neq \mathrm{id}(j)\}.$$

The set of true rejects (TR), or true negatives, can be defined as

$$\mathrm{TR}(t) = \{(i,j) : s_{ij} < t,\ \mathrm{id}(i) \neq \mathrm{id}(j)\}.$$

Therefore, with $\mathrm{FR}(t)$ and $\mathrm{TA}(t)$ defined analogously over genuine pairs, the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are defined as

$$\mathrm{FAR}(t) = \frac{|\mathrm{FA}(t)|}{|\mathrm{FA}(t)| + |\mathrm{TR}(t)|}, \qquad \mathrm{FRR}(t) = \frac{|\mathrm{FR}(t)|}{|\mathrm{FR}(t)| + |\mathrm{TA}(t)|}.$$
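A small sketch of computing FAR and FRR at a threshold, assuming a comparison is accepted when its score meets or exceeds the threshold; the score and label data are invented for illustration:

```python
import numpy as np

def far_frr(scores, genuine, threshold):
    """FAR and FRR at a given acceptance threshold on comparison scores."""
    s = np.asarray(scores, dtype=float)
    g = np.asarray(genuine, dtype=bool)
    accepted = s >= threshold
    far = np.mean(accepted[~g])   # fraction of impostor pairs wrongly accepted
    frr = np.mean(~accepted[g])   # fraction of genuine pairs wrongly rejected
    return float(far), float(frr)

scores  = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
genuine = [True, True, True, False, False, False]
far, frr = far_frr(scores, genuine, threshold=0.5)
assert abs(far - 1 / 3) < 1e-12  # one impostor (0.7) accepted
assert abs(frr - 1 / 3) < 1e-12  # one genuine pair (0.4) rejected
```

Sweeping the threshold and finding where FAR and FRR cross gives the Equal Error Rate.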
The predicted scores for different poses of a subject are shown in Fig. 3. The False Acceptance Rate and False Rejection Rate were calculated and are shown in Fig. 4(a). As seen from the plot, an Equal Error Rate of 23% is achieved on the Georgia Tech Face Database.

(a) Plot of FAR and FRR
(b) Inter- and intra-class distribution
Figure 4: Some statistics
Figure 5: Scores for a few subjects

5 Conclusion

Face image quality assessment can be used as a pre-processing step in any facial recognition system. Such an assessment is crucial in Negative Identification Systems, e.g. at banks and airports, where criminals try to escape recognition by presenting low-quality images; by simple thresholding of the quality score, face verification can be restricted. In this work, a simple end-to-end approach has been proposed to evaluate the quality of a face image presented to a facial recognition system. The model is based on the Inception v3 architecture and is trained in a supervised way using the generated quality score labels, with pre-trained FaceNet weights used as initialization to achieve better performance. Since face quality score labels are not readily available, these labels were generated as normalized similarity scores by comparing face images using FaceNet embeddings.

The model is evaluated using the False Acceptance Rate (FAR) and False Rejection Rate (FRR), the plot for which is shown in Fig. 4(a). In spite of being trained with such a low volume of data, the model performs reasonably well on the task: an Equal Error Rate of 23% is achieved on the Georgia Tech Face Database, which contains images of people from multiple ethnic groups and backgrounds. Deep learning models are data hungry and need to be trained on large datasets to generalize better; training the model with more data should yield better performance and a lower EER.


  • [1] Georgia Tech Face Database, 1999.
  • [2] Lacey Best-Rowden and Anil K. Jain. Automatic face image quality prediction. CoRR, abs/1706.09887, 2017.
  • [3] Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin, and Chinchilla Doudou. Learning deep face representation. CoRR, abs/1403.2802, 2014.
  • [4] P. Grother and E. Tabassi. Performance of biometric quality measures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):531–543, April 2007.
  • [5] Patrick J Grother, George W Quinn, and P Jonathon Phillips. Report on the evaluation of 2d still-image face recognition algorithms. NIST interagency report, 7709:106, 2010.
  • [6] Gary B. Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.
  • [7] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [8] H. Sellahewa and S. A. Jassim. Image-quality-based adaptive face recognition. IEEE Transactions on Instrumentation and Measurement, 59(4):805–813, April 2010.
  • [9] E. Tabassi and C. L. Wilson. A novel approach to fingerprint image quality. 2:II-37, October 2005.
  • [10] P. Viola and M. Jones. Robust real-time face detection. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, volume 2, pages 747–747, July 2001.
  • [11] Yongkang Wong, Shaokang Chen, Sandra Mau, Conrad Sanderson, and Brian C Lovell. Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, pages 74–81. IEEE, 2011.
  • [12] Zhiguang Yang, Haizhou Ai, Bo Wu, Shihong Lao, and Lianhong Cai. Face pose estimation and its application in video shot selection. 1:322-325 Vol. 1, September 2004.
  • [13] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? CoRR, abs/1411.1792, 2014.
  • [14] W. Zhao, Rama Chellappa, P. J. Phillips, and A. Rosenfeld. Face recognition: A literature survey. ACM Comput. Surv., 35:399-458, 2003.