The famed portrait ’Afghan Girl’ gained worldwide recognition when it was featured on the cover of National Geographic Magazine in 1985, while the person in the photograph remained anonymous for years until she was identified in 2002. In thousands of similar cases of searching for long-lost persons or fugitives, there are usually no clues other than old photos. While human beings, especially forensic artists, can attempt to conceive the aging process on individuals’ faces, the output clearly depends on their expertise and state of mind. The computer-aided age progression/regression technique (we use aging simulation, aging synthesis, and age progression interchangeably in this paper) aesthetically renders a given face image with natural aging/rejuvenating effects. By generating an accurate likeness years prior to or after the reference photo, it facilitates finding lost individuals and identifying suspects in law enforcement, and helps guard vulnerable populations against serial offenders. Furthermore, being described as ’half art and half science’, it also benefits anthropometry, biometrics, entertainment, and cosmetology. Studying face age progression is thus of great significance, and this paper focuses on this problem.
Face aging is a process that happens throughout our lives. The intrinsic particularity and complexity of physical aging, the interference caused by other factors (e.g., PIE variations), and the shortage of labeled aging data collectively make learning face age progression a rather difficult problem. Ever since Pittenger and Shaw presented a theory of event perception to simulate craniofacial growth in 1975, substantial efforts have been made to tackle the challenges of aging simulation, where aging accuracy and identity permanence are commonly acknowledged as the two underlying premises of its success. Technological advancements have undergone a gradual transition from computer graphics to computer vision, with deep generative networks now dominating this community.
The pioneering studies on this issue mechanically simulated the profile growth and muscle changes w.r.t. the elapsed time, where crania development theory and skin wrinkle analysis were investigated . These methods provided the first insight into face aging synthesis; however, they heavily relied on empirical knowledge and generally worked in a complex manner, making them difficult to generalize. Data-driven approaches followed, where age progression was primarily carried out by applying prototypes of aging details to test faces , or by modeling the dependency between longitudinal facial changes and the corresponding ages . Although obvious signs of aging were well synthesized, their aging functions could not formulate the complex aging mechanism accurately enough, limiting the diversity of aging patterns.
Until quite recently, deep generative networks have exhibited a remarkable capability in image generation  and have also been utilized for age progression . While these approaches render faces with more appealing aging effects and fewer ghosting artifacts than the previous conventional approaches, the problem has not been solved. Specifically, these approaches focus more on modeling the face transformation between two age groups, where the age factor plays a dominant role and the identity information plays a subordinate one, with the result that aging accuracy and identity permanence can hardly be achieved simultaneously, in particular for long-term age progression . Furthermore, they mostly require multiple face images of the same individual at different ages during training, raising another intractable issue, i.e., intra-individual aging face sequence collection . Both facts underscore the need to improve the capability of face age progression.
In this study, we propose a novel approach to face age progression, which integrates the advantage of Generative Adversarial Networks (GANs) in synthesizing visually plausible images with prior domain knowledge of human aging. Compared with existing methods in the literature, it is more capable of handling the two critical requirements in age progression, i.e., identity permanence and aging accuracy, delivering continuous sequences with more realistic effects. To be specific, the proposed approach uses a Convolutional Neural Network (CNN) based generator to capture target age distributions, and it separately models different face attributes depending upon their changes over time. The training critic thus incorporates the squared Euclidean loss in the image space, the GAN loss that encourages generated faces to be indistinguishable from the age-progressed faces in the training set in terms of age, and the identity loss that minimizes the input-output distance in a high-level feature representation embedding personalized characteristics. This ensures that the resulting faces present the desired aging effects while the identity properties remain stable.
In contrast to previous techniques that primarily operate on cropped facial areas (usually excluding the forehead) , we highlight that synthesizing the entire face is important, since the forehead and hair also significantly impact the perceived age. To achieve this and to further enhance the aging details, our method leverages the intrinsic hierarchy of deep networks. A discriminator with a pyramid architecture is designed to estimate high-level age-related clues in a fine-grained way. Our approach overcomes the limitations of a single age-specific representation and handles age transformation both locally and globally. As a result, more photorealistic images are generated (see Fig. 1 for an illustration of aging results).
Additionally, through an extended GAN structure consisting of a single generator and multiple parallel discriminators, we can render the input face image to an arbitrary age label and produce a continuous face aging sequence, which supports more generalized application scenarios. As the data density of each individual age cluster is jointly considered, the proposed approach does not demand face pairs across two age domains or entire aging sequences of the same person in the training phase, as the majority of counterparts do, thus alleviating the problem of large-scale data collection.
More concisely, this study makes the following contributions:
A novel GAN based method for age progression, which incorporates face verification and age estimation techniques, thereby addressing the issues of aging effect generation and identity cue preservation in a coupled manner;
A pyramid-structured discriminator for GAN-based face synthesis, which well simulates both global muscle sagging and local subtle wrinkles;
An adversarial learning scheme to simultaneously train a single generator and multiple parallel discriminators, which is able to generate smooth continuous aging sequences even if only faces from discrete age clusters are provided;
New validation experiments in addition to existing protocols, including COTS face recognition system based evaluation and robustness assessment to the changes in expression, pose, and makeup.
A preliminary version of this paper was published in . This paper significantly improves upon it in the following aspects. (i) We extend the model to incorporate conditional age information; the new model iteratively learns age transformations for diverse target age groups, which simplifies the complex training procedure in  that requires individual training sessions for different target age groups. (ii) We extend the model to progressive aging simulation covering any arbitrary age, whereas  merely approximates the age distributions of given face sets. (iii) Both face age progression and regression results are refined, along with more extensive evaluations and more comprehensive discussions.
The rest of this paper is organized as follows. Section 2 reviews related work on face age progression. Section 3 details the proposed GAN based aging simulation method. Section 4 displays and analyzes the experimental results on three databases, followed by Section 5 concluding this paper with perspectives.
| Method | Training data | Test data | Performance | Limitation |
|---|---|---|---|---|
| Illumination-aware Prototyping | 40K images | FG-NET | Subjective preference votes are higher than those of prior work for aging young children on 120 aged face pairs. | Age-related high-frequency details are smoothed out when computing the templated aging mask. |
| Hidden Face Analysis joint Sparse Representation | IRIP (2,100 images) | FG-NET, MORPH, IRIP | Perceived ages of the synthetic faces increase along with target ages; rank-1 recognition rates on 20 randomly selected subjects^a exceed 70% for the target age cluster [31-40] on the listed databases. | Aging patterns are linearly modeled. |
| Recurrent Face Aging | 4,371 male images, 6,264 female images | FG-NET | EER is lower in cross-age face verification on 916 synthetic pairs^b than on original pairs^c; subjective preference votes are higher (58.67%) than those of prior work (30.92%) on 246 aged face pairs. | Aging sequences are required for training the network; testing is inflexible. |
| Conditional Adversarial Autoencoder (CAAE) | 10,670 images | FG-NET | 48.38% of age-progressed faces can be verified in human-based evaluation on 856 synthetic pairs^b; subjective preference votes (52.77%) are higher than those of prior work (28.99%) on 235 aged face pairs. | Aging details are inadequate due to the insufficient representation ability of the adversarial discriminator. |
| Temporal Non-Volume Preserving (TNVP) | AGFW (18,685 images), 6,437 aging sequences | FG-NET, MORPH | On FG-NET, TAR at 0.01% FAR is 47.72% in age-invariant face verification including 1M distractors. | Without identity consistency. |

^a The raw test faces form the gallery set; their corresponding age-progressed faces form the probe set.
^b A synthetic pair consists of an age-progressed face image and a ground-truth image.
^c An original pair consists of a raw test face image and a ground-truth image.
2 Related Work
The published studies on face age progression can be primarily summarized into: (i) empirical knowledge based models, (ii) conventional statistical learning based models, and (iii) deep generative models. In the following, we briefly review these approaches in terms of algorithm, database, and evaluation metrics.
2.1 On Algorithm
I. Empirical knowledge based models. Such methods were exploited in the initial explorations of face age progression to simulate the aging mechanisms of the cranium and facial muscles. Todd et al.  introduced a revised cardioidal-strain transformation where head growth was modeled in a computable geometric procedure. Based on the skin’s anatomical structure, Wu et al.  proposed a 3-layered dynamic skin model to simulate wrinkles. Mechanical aging methods were also incorporated by Ramanathan and Chellappa  and Suo et al. . Although promising results were reached, it is not straightforward to generalize these models, as they highly depend on specialized rules and operate in a sophisticated way.
II. Conventional statistical learning based models. The aging patterns were basically learned from training faces covering a wide range of ages. Kemelmacher-Shlizerman et al.  presented a prototype-based method, and further took the illumination factor into account. Wang et al.  built the mapping between corresponding down-sampled and high-resolution faces in a tensor space, and aging details were added on the latter. Yang et al.  first settled the multi-attribute decomposition problem, and progression was achieved by transforming only the age component to a target age group. These methods indeed improve the results; however, the aging prototypes or functions cannot accurately fit the aging process, leading to a lack of aging diversity. Meanwhile, ghosting artifacts frequently appear in the synthesized faces.
III. Deep generative models.
These methods encode facial variations in terms of age by hierarchically learned deep features for simulation. Wang et al.  transformed faces across different ages smoothly by modeling the intermediate transition states in an RNN model. However, multiple face images of various ages of each subject were required at the training stage, and the exact age label of the probe face was needed during testing, greatly limiting its flexibility. Under the framework of the conditional adversarial autoencoder (CAAE) , facial muscle sagging caused by aging was simulated, whereas only rough wrinkles were rendered, mainly due to the insufficient representation ability of the discriminator. With the Temporal Non-Volume Preserving (TNVP) aging approach , short-term age progression was accomplished by mapping the data densities of two consecutive age groups with ResNet blocks , and long-term aging synthesis was finally reached by chaining short-term stages. Its major weakness, however, was that it merely considered the probability distribution of a set of faces without any individuality information. As a result, the synthesized faces in a complete aging sequence varied considerably in color, expression, and even identity.
2.2 On Data
Two databases have been widely used in face age progression, namely FG-NET  and MORPH , and they have greatly facilitated progress in the community. Since FG-NET only contains 1,002 face images from 82 subjects, it is commonly used in the test phase only . The MORPH mugshot database is relatively large, consisting of more than 50K images from over 13K subjects; however, the limited acquisition time span and the small number of images per subject (the average time lapse per subject is only 1.62 years and the average number of images per subject is 4.03) make MORPH unsuitable for approaches requiring a long-term aging sequence of an individual. As a result, some studies, e.g., , make use of their private databases or combine existing ones for training. Besides these databases, the Cross-Age Celebrity Dataset (CACD)  also incorporates longitudinal aging variations, containing 163,446 images ’in the wild’ from 2,000 celebrities, which can be further exploited for modeling face aging.
2.3 On Evaluation Metrics
Early research validated the proposed methods by comparative visualization of a few example face images and their corresponding age-progressed results. While visual inspection is useful, it is subjective and not specific. The two criteria for aging model evaluation, i.e., accuracy of aging and preservation of identity, were then proposed by Lanitis  and refined by Suo et al. . They are not only intuitively reasonable, but also quantitatively feasible. Therefore, in subsequent studies , face recognition/verification on the generated faces was often conducted for performance assessment. To the best of our knowledge, however, such evaluations were mostly performed on a small number of faces, and the data (images or subjects) used were often arbitrary and varied from one study to another. Furthermore, evaluations of the perceived/estimated age (perceived age: the individual age gauged by human subjects from the visual appearance; estimated age: the individual age recognized by a machine from the visual appearance) of the simulated faces have received limited attention.
Table 1 presents a summary of the recent representative studies. Despite this progress, these approaches cannot fully address simulation accuracy, identity permanence, or both, and the evaluation metrics have limitations. Apart from these issues, generating rich texture still remains a challenge in many scenarios of face synthesis, especially for the specific task of age progression, where visually convincing wrinkles and age-related details are essential to an accurate perceived age and to authenticity.
Our study makes use of the image generation ability of GANs and presents a different but effective method, where an age-related GAN loss is adopted for aging modeling, an individual-dependent critic is used to keep the identity cue stable, and a multi-pathway discriminator architecture is further applied to refine aging detail generation. This solution is more powerful in dealing with the core issues of age progression, i.e., aging accuracy and identity preservation. Meanwhile, it is able to produce continuous face aging sequences without any strong assumptions on the training data. Additionally, it shows robustness to expression, pose, and makeup variations.
A classic GAN contains a generator G and a discriminator D, which are iteratively trained via an adversarial process. The generative function G tries to capture the underlying data density and confuse the discriminative function D, while the optimization procedure of D aims to achieve distinguishability and separate the natural face images from the fake ones generated by G. Both G and D can be approximated by neural networks, e.g., Multi-Layer Perceptrons (MLPs). The risk function of optimizing this minimax two-player game can be written as:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where z is a noise sample from a prior probability distribution p_z(z), and x denotes a real face image following a certain distribution p_{data}(x). On convergence, the distribution of the synthesized images is equivalent to p_{data}(x).
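As a concrete reference, the two-player objective above can be sketched with stand-in scalar discriminator outputs. This is a minimal NumPy illustration; the function names are ours, not from the paper.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # D ascends V(D, G): it wants D(x) -> 1 for real samples and
    # D(G(z)) -> 0 for fakes, i.e. it minimizes the negated value.
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    # G descends log(1 - D(G(z))): lower means D is better fooled.
    return np.mean(np.log(1.0 - d_fake + eps))
```

A well-performing discriminator (scores near the correct labels) attains a lower loss than an undecided one, and a generator whose fakes are scored near 1 attains a lower generator loss.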
Recently, conditional GANs (cGANs) have been actively studied, where the generative model approximates the dependency between the pre-images (or controlled attributes) and their corresponding targets. cGANs have shown promising results in video prediction , text-to-image synthesis , etc. In our case, a CNN based generator takes the younger face image and the target age label (or a target age group) as inputs, and synthesizes an elder face image conditioned on them.
In order to accurately achieve the aging effects while simultaneously maintaining the person-specific information, a compound training critic is exploited in the offline phase, which incorporates the traditional squared Euclidean loss in the image space, the GAN loss that encourages generated faces to be indistinguishable from the training elderly faces in terms of age, and the identity loss minimizing the input-output distance in a high-level feature representation embedding the personalized characteristics. Note that in adversarial training, a pyramid-structured discriminator is specially designed to refine facial detail generation. The classic GAN model is extended, where one single generator is utilized along with a specific number of parallel discriminators, in order to flexibly steer the age transformation toward diverse target age labels. The adversarial training scheme in our method not only contributes to the real-fake level classification, but also guides the model to converge toward the target age distributions. See Fig. 2 for an overview; we detail the method in the subsequent sections.
The proposed model enables one-step aging simulation. Synthesizing age-progressed faces only requires a forward pass through the generator G. The generative network is a cascade of an encoder and a decoder. With the input young face x and the target age label (or age range) c, it first exploits multiple stacked convolutional layers to encode them to a latent space, capturing the facial properties that tend to be stable w.r.t. the elapsed time, followed by four residual blocks  modeling the common structure shared by the input and output faces, similar to the settings in . Age transformation to a target image space is finally achieved by three fractionally-strided convolutional layers, yielding the age progression result G(x, c) conditioned on the inputs. Rather than using max-pooling and upsampling layers to calculate the feature maps, 3 × 3 convolution kernels with a stride of 2 are employed here, ensuring that every pixel contributes and that adjacent pixels transform in a synergistic manner. All convolutional layers are followed by Instance Normalization and ReLU activation. Paddings are added to the layers so that the input and output have exactly the same size. A total variation regularization layer is stacked to the end of G to reduce spike artifacts. The detailed generator architecture is shown in Section 4.2.
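The shape arithmetic implied by this encoder-decoder design can be checked with the standard convolution and transposed-convolution output-size formulas, assuming 3 × 3 kernels with stride 2, padding 1, and output padding 1 for the fractionally-strided direction (a sketch; the residual blocks preserve resolution and are omitted):

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

def deconv_out(size, kernel=3, stride=2, padding=1, output_padding=1):
    # fractionally-strided (transposed) convolution output-size formula
    return (size - 1) * stride - 2 * padding + kernel + output_padding

s = 224          # input face resolution
s = conv_out(s)  # encoder halves resolution: 112
s = conv_out(s)  # encoder halves again: 56 (residual blocks stay here)
s = deconv_out(s)  # decoder doubles: 112
s = deconv_out(s)  # decoder restores: 224
```

This confirms that two stride-2 convolutions followed by two fractionally-strided convolutions map a 224-pixel input back to a 224-pixel output, matching the generator table in Section 4.2.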
3.3 Adversarial Learning
To ensure the generated face images present proper aging effects, we adopt the adversarial learning mechanism. We first introduce how it is exploited to achieve age transformation to a domain corresponding to a specific target age cluster and then illustrate how this method is generalized to simulate the progressive aging procedure.
3.3.1 Aging Modeling
The system critic incorporates the prior knowledge of the data density of the faces from the target age cluster, and a discriminative network D is thus introduced, which outputs a scalar D(x) representing the probability that x comes from the real data. We denote the distribution of young faces as P_y and the distribution of the generated faces as P_g; P_g is supposed to be equivalent to the distribution P_t of the target age cluster when the age transformation is learned. Assuming that we follow the classic GANs , where a binary cross-entropy classifier is used, the learning process amounts to minimizing the following loss defined over G and D:

$$\mathcal{L} = -\mathbb{E}_{x_t \sim P_t}[\log D(x_t)] - \mathbb{E}_{x \sim P_y}[\log(1 - D(G(x)))]$$
It is always desirable that G and D converge coherently; however, in practice D frequently achieves distinguishability faster and feeds back vanishing gradients for G to learn from, since the JS divergence is locally saturated. As analyzed in some recent studies, i.e., the Wasserstein GAN , the Least Squares GAN , and the Loss-Sensitive GAN , the most fundamental issue lies in how exactly the distance between sequences of probability distributions is defined. Here, we use the least squares loss in place of the negative log-likelihood objective, which penalizes samples depending on how close they are to the decision boundary in a metric space, minimizing the Pearson divergence. To achieve more evident and vivid age-specific facial details, both the actual young faces and the generated age-progressed faces are fed into D as negative samples, while the true elderly faces of the target age range serve as positive ones. Accordingly, the training process alternately minimizes the following:

$$\mathcal{L}_D = \mathbb{E}_{x_t \sim P_t}\,\ell\big(D(\varphi(x_t)), 1\big) + \mathbb{E}_{x \sim P_y}\Big[\ell\big(D(\varphi(G(x))), 0\big) + \ell\big(D(\varphi(x)), 0\big)\Big] \quad (3)$$

$$\mathcal{L}_G = \mathbb{E}_{x \sim P_y}\,\ell\big(D(\varphi(G(x))), 1\big) \quad (4)$$

where ℓ(a, b) = ||a − b||² indicates the least squares distance. Note that in (3) and (4), a function φ bridges G and D, which is especially introduced to extract age-related features conveyed by faces, as shown in Fig. 2. Considering that human faces of diverse age groups share a common configuration and similar texture properties, the feature extractor φ is exploited independently of D; it outputs high-level feature representations that make the generated faces more distinguishable from the true elderly faces in terms of age. In particular, φ is pre-trained for a multi-label classification task of age estimation with the VGG-16 structure , and after convergence, we remove the fully connected layers and integrate it into the framework. Further, since natural images exhibit multi-scale characteristics and, along its hierarchical architecture, φ captures properties gradually from exact pixel values to high-level age-specific semantic information, this study leverages the intrinsic pyramid hierarchy. The pyramid facial feature representations are jointly estimated by D at multiple scales, handling aging effect generation in a fine-grained way.
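The alternating least-squares objectives (3) and (4) can be sketched on stand-in discriminator scores; in this minimal NumPy illustration, φ and D are abstracted away as precomputed score arrays, and the function names are ours.

```python
import numpy as np

def ls_d_loss(d_old, d_gen, d_young):
    # D pushes true elderly faces toward the all-ones label, and both
    # generated and actual young faces toward the all-zeros label.
    return (np.mean((d_old - 1.0) ** 2)
            + np.mean(d_gen ** 2)
            + np.mean(d_young ** 2))

def ls_g_loss(d_gen):
    # G pushes the scores of its outputs toward the all-ones label.
    return np.mean((d_gen - 1.0) ** 2)
```

Unlike the saturating log loss, the quadratic penalty grows with the distance of a score from its target label, so even confidently classified samples keep providing gradients.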
The outputs of the 2nd, 4th, 7th, and 10th convolutional layers of φ are used. They pass through the pathways of D and finally result in a concatenated 12 × 3 representation. The ’label’ applied here is thus a tensor of the same size rather than a single scalar, filled with ones (for the positive face samples) or zeros (for the negative face samples). The least squares loss is minimized over the full feature representation to jointly estimate the 4 pathways, as illustrated in Fig. 3. In D, all convolutional layers are followed by Batch Normalization and LeakyReLU activation, except the last one. The detailed discriminator architecture is shown in Section 4.2.
3.3.2 Progressive Aging Modeling
As face aging is a dynamic long-term procedure, we attempt to synthesize progressive changes w.r.t. the elapsed time and generate complete aging sequences. Under the GAN framework, a common practice is to leverage the age labels, add an auxiliary classifier (on top of or parallel to D), and impose the age classification loss when optimizing both G and D. The adversarial part of the original aging model shown in Fig. 4 (a) is thus extended to that in Fig. 4 (b). Such a variant has indeed been shown to be effective in handling data with high variability and improving the quality of generated samples . Forcing the proposed method to perform additional age classification, however, does not boost the performance of the core task in this study, i.e., aging effect synthesis, and this claim is supported both mathematically and experimentally. To be specific, an AC-GAN can be viewed as a GAN model with a hierarchical classifier C, and the objective is formulated as:

$$\min_G \max_D \mathcal{L}_{GAN}(G, D) + \lambda\,\big(\mathbb{E}_{x}\,H\big(l, C(x)\big) + \mathbb{E}_{x \sim P_y}\,H\big(l, C(G(x))\big)\big)$$

where C denotes the auxiliary classifier, H(·,·) indicates the cross-entropy, and the label l takes the corresponding age cluster if the sample is real, and the fake class otherwise. The hierarchical connection of adversarial training and classification inevitably brings the issue that the former is actually missing when optimizing the latter. As for our task of face age progression, the adversarial term then only works at the real-fake level and does not control the aging degree. As shown in Fig. 5, with the faces in the first row as inputs, we obtain the corresponding age-progressed faces in the second row. While age-related facial changes, e.g., wrinkles around the eyes, indeed emerge, they are not as natural as in real faces, which confirms the above analysis.
As the adversarial constraint is the key to guaranteeing convergence , the non-hierarchical structure shown in Fig. 4 (c) can be further considered, where the discriminator outputs logits indicating the age range that the input belongs to. Theoretically, such adversarial training covers all age clusters; however, a single discriminator is inadequate to accurately model the complex distribution of multiple age domains, and the networks are more likely to suffer mode collapse. See the example aging photos shown in Fig. 5 (c). The resulting images can still be recognized as faces, but this is mainly credited to the pixel-level constraint and the identity preservation loss that will be illustrated in the next subsection.
To sufficiently leverage the intrinsic domain transfer ability of GANs, and different from the original adversarial learning and its two major variants (as in Figs. 4 (a), (b), and (c), respectively), we replace the discriminator in (4) with class-wise discriminators, as shown in Fig. 4 (d). Assigning each specific target age cluster i a unique discriminator D_i, the objectives of the extended model can be finally formulated as:

$$\mathcal{L}_{D_i} = \mathbb{E}_{x_i \sim P_i}\,\ell\big(D_i(\varphi(x_i)), 1\big) + \mathbb{E}_{x \sim P_y}\Big[\ell\big(D_i(\varphi(G(x, i))), 0\big) + \ell\big(D_i(\varphi(x)), 0\big)\Big]$$

$$\mathcal{L}_G = \sum_i \mathbb{E}_{x \sim P_y}\,\ell\big(D_i(\varphi(G(x, i))), 1\big)$$

The discriminators synergistically guide the generator to learn the age transformations to the domains associated with label i. The simulation results are displayed in Fig. 5 (d), in which the aging effects are more evident and closer to those of natural images.
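The class-wise scheme can be sketched as follows, assuming the same least-squares critic per cluster; discriminator scores are stand-in arrays, and all names are illustrative, not from the paper.

```python
import numpy as np

N_CLUSTERS = 3  # e.g., target clusters for the 30s, 40s, and 50s

def d_loss_for_cluster(d_true_elderly, d_generated, d_young):
    # Evaluated only by the discriminator owning this age cluster:
    # real faces of the cluster -> 1, generated and young faces -> 0.
    return (np.mean((d_true_elderly - 1.0) ** 2)
            + np.mean(d_generated ** 2)
            + np.mean(d_young ** 2))

def g_loss_all_clusters(d_generated_scores):
    # The generator is pushed toward the "real" label jointly by all
    # class-wise discriminators, one score array per target cluster.
    assert len(d_generated_scores) == N_CLUSTERS
    return sum(np.mean((s - 1.0) ** 2) for s in d_generated_scores)
```

Because each D_i only ever sees its own age cluster as positives, no single network has to model the joint distribution of all age domains, which is the failure mode of the variant in Fig. 4 (c).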
Although the faces are manually divided into a number of discrete clusters along the timeline, the data are intrinsically connected, since face aging is an accumulation of changes over time. Based on the latent manifold assumption of images, we further infer that the aging procedure is a smooth transformation lying on a manifold, and that adding sufficient local constraints on the key points in the temporal dimension probably enables the method to achieve globally longitudinal aging. In this case, the method not only accurately models the age distributions independently presented to the discriminators during training, but also successfully simulates continuous aging sequences covering any age label at the test phase.
3.4 Identity Preservation
One core issue of face age progression is keeping the person-dependent properties stable. Therefore, we incorporate the associated constraint by measuring the input-output distance in a proper feature space, which is sensitive to identity change while relatively robust to other variations. Specifically, the network of the deep face descriptor  is utilized, denoted as φ_id, to encode the personalized information and further define the identity loss function. φ_id is trained with a large face dataset containing millions of face images from thousands of individuals (the face images are collected via the Google Image Search using the names of 5K celebrities, and purified by automatic and manual filtering). It is originally bootstrapped by recognizing unique individuals; the last classification layer is then removed, and φ_id is tuned to improve the capability of verification in the Euclidean space using a triplet-loss training scheme. In our case, φ_id is clipped to 10 convolutional layers, and the identity loss is formulated as:

$$\mathcal{L}_{identity} = \mathbb{E}_{x \sim P_y}\, d\big(\varphi_{id}(x), \varphi_{id}(G(x))\big)$$

where d(·,·) is the squared Euclidean distance between the feature representations. For more implementation details of the deep face descriptor, please refer to .
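A minimal sketch of this identity term, with the descriptor features abstracted as precomputed arrays (`identity_loss` is an illustrative name):

```python
import numpy as np

def identity_loss(feat_in, feat_out):
    # squared Euclidean distance between the deep-descriptor features
    # of the input face and of its age-progressed output
    return np.sum((feat_in - feat_out) ** 2)
```

The loss is zero only when the two faces map to the same point in the descriptor space, and it grows quadratically as the embeddings drift apart, penalizing identity change while leaving age-related appearance (to which the descriptor is comparatively insensitive) largely unconstrained.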
Besides the specially designed age-related GAN critic and the identity permanence penalty, a pixel-wise L2 loss in the image space is also adopted to further bridge the input-output gap, e.g., the color aberration, which is formulated as:

$$\mathcal{L}_{pixel} = \frac{1}{C \times H \times W}\,\big\|G(x) - x\big\|_2^2$$

where C, H, and W correspond to the image shape, i.e., the number of channels, height, and width. Meanwhile, we make use of the total variation regularizer encouraging spatial smoothness, by stacking a TV regularization layer to the end of G as in . Finally, the system training loss can be written as:

$$\mathcal{L} = \lambda_a \mathcal{L}_{GAN} + \lambda_p \mathcal{L}_{pixel} + \lambda_i \mathcal{L}_{identity} + \lambda_{tv} \mathcal{L}_{TV}$$
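The remaining terms can be sketched as follows; the λ weights are passed in explicitly since they are dataset-dependent hyper-parameters (see Section 4.2), and the function names are illustrative.

```python
import numpy as np

def pixel_loss(x_out, x_in):
    # pixel-wise L2 distance, normalized by the image shape C*H*W
    c, h, w = x_out.shape
    return np.sum((x_out - x_in) ** 2) / (c * h * w)

def tv_loss(img):
    # a simple (squared) total variation: differences between
    # vertically and horizontally adjacent pixels, per channel
    dh = img[:, 1:, :] - img[:, :-1, :]
    dw = img[:, :, 1:] - img[:, :, :-1]
    return np.sum(dh ** 2) + np.sum(dw ** 2)

def total_generator_loss(l_gan, l_pix, l_id, l_tv,
                         lam_a, lam_p, lam_i, lam_tv):
    # weighted sum of the adversarial, pixel, identity, and TV terms
    return lam_a * l_gan + lam_p * l_pix + lam_i * l_id + lam_tv * l_tv
```

A constant image yields zero TV loss, and an output identical to the input yields zero pixel loss, so both terms act purely as penalties on deviation.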
We train the generator and the discriminators alternately until optimality, where G finally learns the desired age transformation and D becomes a reliable estimator.
| Database | Number of images | Number of subjects | Images per subject | Time lapse per subject (years) | Age range | Avg. age |
|---|---|---|---|---|---|---|
| MORPH | 52,099 | 12,938 | 1 - 53 (avg. 4.03) | 0 - 33 (avg. 1.62) | 16 - 77 | 33.07 |
| CACD | 163,446 | 2,000 | 22 - 139 (avg. 81.72) | 7 - 9 (avg. 8.99) | 14 - 62 | 38.03 |
| FG-NET | 1,002 | 82 | 6 - 18 (avg. 12.22) | 11 - 54 (avg. 27.80) | 0 - 69 | 15.84 |
4 Experimental Results
To validate the proposed age progression approach, we carry out extensive experiments and make fair comparisons to the state-of-the-art counterparts. The face databases, implementation details, and synthesized results are presented in the following.
4.1 Databases
We use an extension of the MORPH aging database, which contains 52,099 color images with near-frontal pose, neutral expression, and uniform illumination (minor pose and expression variations sometimes occur). The subject age ranges from 16 to 77 years, with an average of approximately 33. The longitudinal age span of a subject varies from 46 days to 33 years. CACD is a public dataset  collected via the Google Image Search, containing 163,446 face images of 2,000 celebrities across 10 years, with ages ranging from 14 to 62. The dataset has the largest number of images with age changes, showing variations in pose, illumination, expression, etc., with less controlled acquisition than MORPH. We mainly use MORPH and CACD for training and validation. FG-NET is also a very popular database for the evaluation of face aging methods. As its images are inadequate to train the proposed deep model, we only adopt it for testing, to make comparisons with prior work. More properties of these databases are given in Table 2 and Fig. 6.
4.2 Implementation Details
Allowing for the fact that the number of faces older than 60 years is quite limited in both training databases, MORPH and CACD, and that neither contains images of children, we only perform adult aging. We follow the time span of 10 years for each age cluster as reported in many previous studies , and apply age progression on the faces below 30 years old, synthesizing a sequence of age-progressed renderings for when they are in their 30s, 40s, and 50s. Prior to feeding the images into the networks, the faces are aligned using the eye locations provided by the datasets themselves (FG-NET, CACD) or detected by the online face analysis API of Face++  (MORPH). Excluding the images undetected in MORPH and those of children in FG-NET, 489, 163,446, and 51,699 images from the three datasets are finally adopted, respectively. A face image is cropped to 224 × 224 pixels; concatenating the conditional age channels, the inputs form a tensor of size 224 × 224 × (3 + n) for the generator, where n denotes the number of target age clusters and is set to 3 in our experiments.
The architectures of the networks are shown in Tables 3 and 4. For both training datasets, the trade-off parameters are empirically set, the first three to 0.10, 1000.00, and 0.005, respectively. At the training stage, we use the Adam optimizer, with the learning rate decayed by a fixed factor every fixed number of iterations. We (i) update the three discriminators alternately at every iteration, (ii) apply the age-related and identity-related critics at every generator iteration, and (iii) employ the pixel-level critic once every 15 generator iterations. The networks are trained with a batch size of 4 for a fixed total number of iterations, which takes around 25 hours on a GTX 1080Ti GPU.
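The critic schedule above can be expressed as a small helper, shown here only as a sketch of the update logic (the function and loss names are hypothetical, not the paper's code):

```python
def critics_for_iteration(t, pixel_every=15):
    """Return the loss terms active at generator iteration t:
    age- and identity-related critics every iteration, and the
    pixel-level critic once every `pixel_every` iterations."""
    active = ["age_critic", "identity_critic"]
    if t % pixel_every == 0:
        active.append("pixel_critic")
    return active
```

A training loop would query this schedule each generator step and sum only the active loss terms.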
We comprehensively evaluate the proposed age progression method from the following aspects: (I) face aging simulation; (II-A) visual fidelity analysis; objective evaluations of (II-B) aging accuracy and (II-C) identity preservation; (II-D) an ablation study; and (II-E) comparison with the state of the art.
|Layer|Kernel size|Stride|Padding|Output size|
|Conv.|9 × 9|1|4|32 × w × h|
|Conv.|3 × 3|2|1|64 × w/2 × h/2|
|Conv.|3 × 3|2|1|128 × w/4 × h/4|
|Res.|3 × 3|1|2|128 × w/4 × h/4|
|Res.|3 × 3|1|2|128 × w/4 × h/4|
|Res.|3 × 3|1|2|128 × w/4 × h/4|
|Res.|3 × 3|1|2|128 × w/4 × h/4|
|De-conv.|3 × 3|2|1|64 × w/2 × h/2|
|De-conv.|3 × 3|2|1|32 × w × h|
|De-conv.|9 × 9|1|4|3 × w × h|
|conv - 1||conv - 1||conv - 1||conv - 1|
Layers are denoted as: conv - output channels; each uses kernel = 4, stride = 2, padding = 1.
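The spatial dimensions in the generator table can be verified with standard convolution arithmetic. The sketch below checks that each layer's kernel/stride/padding reproduces the listed output sizes for a 224 × 224 input; the transposed-convolution formula follows the common (e.g., PyTorch) convention, where an output padding of 1 is assumed for the stride-2 layers to restore even sizes.

```python
def conv_out(size, kernel, stride, padding):
    """Output spatial size of a standard convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def deconv_out(size, kernel, stride, padding, output_padding=0):
    """Output spatial size of a transposed convolution (PyTorch convention)."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

# Encoder: 224 -> 224 -> 112 -> 56, mirroring the table above.
assert conv_out(224, 9, 1, 4) == 224   # 9x9, stride 1, padding 4
assert conv_out(224, 3, 2, 1) == 112   # 3x3, stride 2, padding 1
assert conv_out(112, 3, 2, 1) == 56

# Decoder: 56 -> 112 -> 224 -> 224 (output_padding=1 assumed for stride 2).
assert deconv_out(56, 3, 2, 1, output_padding=1) == 112
assert deconv_out(112, 3, 2, 1, output_padding=1) == 224
assert deconv_out(224, 9, 1, 4) == 224
```

The residual blocks keep the 56 × 56 (w/4 × h/4) resolution unchanged.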
4.3.1 Experiment I: Aging Effect Simulation
Experiment I-A: Discrete Age Progression. Five-fold cross-validation is conducted to simulate aged faces. On CACD, each fold contains 400 individuals with approximately 10,079, 8,635, 7,964, and 6,011 face images from the four age clusters of [14-30], [31-40], [41-50], and [51-60] years, respectively; on MORPH, each fold consists of around 2,586 subjects with 4,467, 3,030, 2,205, and 639 faces from the four age groups. For each run, four folds are used for training and the remainder for evaluation. Examples of age progression results on the two databases are depicted in Figs. 7 and 8. Additionally, cross-dataset validation is conducted on faces older than 14 years in FG-NET, with CACD as the training set; the simulation results are shown in Fig. 9. As we can see, although the examples cover a wide range of the population in terms of race, gender, pose, makeup, and expression, visually plausible and convincing aging effects are achieved.
Apart from face age progression, the proposed method can be applied to age regression as well. All the test faces in this experiment come from people older than 30 years, and they are transformed to the age bracket below 30 years old. Under this setting, only one discriminator is exploited at the training stage. Example rejuvenation results are shown in Fig. 10. As expected, this operation tightens the facial skin, and the hair becomes thick and luxuriant.
Experiment I-B: Continuous Age Progression. Recall that the proposed aging method can not only generate faces in the specific age ranges presented to the network during training, but also fill in the intermediate transitional states, producing very smooth aging sequences. This indicates that the generator does not simply memorize aging templates; it internally learns a meaningful face representation in a latent space, and is thus able to capture the connections between age clusters.
To be specific, the well-trained aging models from Experiment I-A are directly used for testing in this experiment. The conditional age is encoded as a tensor whose i-th channel is set to ones, with the others set to zeros, when the target is the i-th age cluster. To bridge the gap between discrete age clusters and obtain continuous aged renderings, the channel values of this tensor gradually fall and rise within the interval [0, 1] at the testing phase, so that the conditional age label smoothly shifts from one existing label to another.
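The label interpolation described above can be sketched as follows; this is an illustrative reconstruction (the function name and the linear fall/rise are assumptions consistent with the description, not the paper's exact scheme):

```python
import numpy as np

def interpolate_condition(i, j, alpha, num_clusters=3, size=224):
    """Blend the one-hot condition tensor from cluster i toward cluster j:
    channel i falls from 1 to 0 while channel j rises from 0 to 1,
    keeping all values within [0, 1] for a smooth label shift."""
    cond = np.zeros((num_clusters, size, size))
    cond[i] = 1.0 - alpha  # falling channel
    cond[j] = alpha        # rising channel
    return cond
```

Sweeping `alpha` from 0 to 1 and feeding each blended condition to the trained generator yields the intermediate frames of a continuous aging sequence.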
Some representative examples of continuous aging sequences are shown in Figure 11. The images in each panel are sorted by increasing conditional age. The leftmost image is the input; the second, fifth, and rightmost images are the results conditioned on the existing age labels, while the others are interpolated results. As shown in the figure, all the generated images are natural and of high quality, clearly highlighting the ability of knowledge transfer. The aging changes between neighboring images are inconspicuous, yet convincing across the complete aging sequence. Even though only a fixed number of discriminators are used and the given age distributions are presented to the network independently during training, the method still successfully and flexibly steers the age transformation toward arbitrary age labels, making it more useful in the real world.
4.3.2 Experiment II: Aging Model Evaluation
We acknowledge that face age progression is supposed to aesthetically predict the future appearance of the individual; therefore, beyond aging accuracy and identity preservation, this experiment provides a more comprehensive evaluation of the age progression results with both visual and quantitative analyses.
Experiment II-A: Visual Fidelity. Fig. 12 (a) displays example face images with glasses, occlusions, and pose variations. The age-progressed faces are still photorealistic and true to the original inputs, whereas previous prototyping-based methods are inherently inadequate for such circumstances, and parametric aging models may lead to ghosting artifacts. In Fig. 12 (b), examples of hair aging are demonstrated. To our knowledge, almost all aging approaches proposed in the literature focus on cropped faces without considering hair aging, mainly because hair is not as structured as the face area; further, hair is diverse in texture, shape, and color, and is thus difficult to model. Nevertheless, the proposed method takes the whole face as input, and the hair grows wispy and thin in the aging simulation. Fig. 12 (c) confirms the capability of preserving necessary facial details during aging, e.g., skin pigmentation, and Fig. 12 (d) shows the smoothness and consistency of the aging changes, where the lips become thinner, the under-eye bags become more obvious, and the wrinkles grow deeper.
Experiment II-B: Aging Accuracy. Along with face aging, the estimated age is supposed to increase. Accordingly, objective age estimation is conducted to measure aging accuracy. We apply the online face analysis tool of Face++ (all evaluation results from Face++ were obtained in Nov. 2018; the system is updated at irregular intervals) to every synthesized face on CACD and MORPH. Excluding undetected faces, the age-progressed test samples in the CACD dataset are investigated (an average of 10,030 test faces in each run under 5-fold cross-validation). Table 5 shows the results, where the mean values are 40.52, 48.03, and 54.05 years old for the three designated age clusters, respectively; ideally, they would fall within the ranges [31-40], [41-50], and [51-60]. Admittedly, lifestyle factors may accelerate or slow down individual aging rates, and makeup also influences appearance, leading to deviations of the estimated age from the actual age, but the overall trend is relatively robust. Due to such intrinsic ambiguities, objective age estimation is further conducted on all the real face images in the dataset as a benchmark. As shown in Table 5 and Figs. 13 (a) and 13 (c), the estimated ages of the synthesized faces match well with those of the real images, and they increase steadily with the elapsed time, clearly validating our method.
On MORPH, the aging synthesis results of faces below 30 years old are used in this evaluation (an average of 4,464 test faces in each run), showing similar results and confirming that the proposed method has indeed captured the data density of the given subset of faces in terms of age. See Table 5 and Figs. 13 (b) and 13 (d) for detailed results.
Experiment II-C: Identity Preservation. Objective face verification with Face++ is carried out to quantitatively determine whether the original identity is well preserved during age progression. For each test face, we perform comparisons between the input image and the corresponding aging simulation results: [test face, aged face 1], [test face, aged face 2], and [test face, aged face 3]; statistical analyses among the synthesized faces are also conducted, i.e., [aged face 1, aged face 2], [aged face 1, aged face 3], and [aged face 2, aged face 3]. Similar to Experiment II-B, 50,148 young faces in CACD and their age-progressed renderings are used in this evaluation, leading to six verifications per test face. As shown in Table 6, the obtained verification rates for the three age-progressed clusters are 100 ± 0%, 100 ± 0%, and 99.98 ± 0.02%, respectively. For MORPH, the same six comparisons are performed per test face, and the verification rates under the five-fold cross-validation scheme are 100 ± 0%, 100 ± 0%, and 99.88 ± 0.07%, respectively. Compared to our previous work on face age progression, where the aging models are independently trained for each target age cluster (re-evaluated with the latest version of Face++: on CACD, the mean verification rates for the three age clusters are 99.99%, 99.95%, and 99.11%, respectively; on MORPH, 100%, 99.44%, and 95.94%), our work makes remarkable progress in preserving identity information. This highlights the reliability of the proposed method and validates the necessity of smoothing the transitional states. Additionally, in Table 6 and Fig. 14, the face verification confidence decreases as the time elapsed between two images increases, which conforms to the physical effect of face aging.
It may also explain the better performance achieved on CACD in this evaluation, where the maximum mean age gap between the input and synthesized age cluster is 23.09 years, far less than that of 28.61 years achieved on MORPH.
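The pairing protocol above (input vs. each aged rendering, plus all pairs among the renderings) amounts to enumerating every 2-combination of the four images. A minimal sketch, with hypothetical identifiers standing in for the actual images:

```python
from itertools import combinations

def verification_pairs(test_face, aged_faces):
    """All pairwise comparisons used per test face in Experiment II-C:
    the input against each aged rendering, plus every pair of renderings."""
    faces = [test_face] + list(aged_faces)
    return list(combinations(faces, 2))
```

With three age-progressed renderings this yields six verifications per test face, matching the protocol described above.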
Experiment II-D: Contribution of Pyramid Architecture. One model assumption is that the pyramid structure of the discriminator advances the generation of aging effects, making the age-progressed faces more natural. Accordingly, we conduct an ablation study against a one-pathway discriminator, where the generated faces are directly fed into the estimator rather than first being represented as a feature pyramid. The discriminator architecture in this contrast experiment is equivalent to chaining the base network with the first pathway of the proposed pyramid.
Fig. 15 provides a demonstration. Visually, the synthesized aging details of the counterpart are less evident, and the proposed method behaves better in relatively complex situations, e.g., rejuvenating the white beard. To make the comparison more specific and reliable, quantitative evaluations are further conducted with the same settings as in Experiments II-B and II-C, and the statistical results are shown in Table 7. In the table, the estimated ages achieved on CACD and MORPH are generally higher than the benchmark (except for the 1st age cluster on CACD), and the mean absolute errors over the three age clusters are 2.48 and 2.67 years for the two databases, respectively, exhibiting larger deviations than the 0.99 and 0.85 years obtained with the pyramid architecture. The reason is that the wrinkles synthesized in this contrast experiment are less clear and the faces look relatively messy. This also explains the decreased face verification confidence observed in Table 7 in the identity preservation evaluation. Based on both the visual and quantitative analyses, we can infer that, compared with the pyramid architecture, the one-pathway discriminator widely utilized in previous GAN-based frameworks lags behind in modeling the sophisticated aging changes.
|Age Cluster 0||Age Cluster 1||Age Cluster 2||Age Cluster 3||Age Cluster 0||Age Cluster 1||Age Cluster 2||Age Cluster 3|
|Synthesized faces*||Synthesized faces*|
|–||40.52 ± 9.08||48.03 ± 9.32||54.05 ± 9.94||–||39.62 ± 7.29||48.09 ± 7.01||56.54 ± 6.74|
|–||40.52 ± 0.16||48.03 ± 0.32||54.05 ± 0.23||–||39.62 ± 0.84||48.09 ± 1.07||56.54 ± 1.19|
|Natural faces||Natural faces|
|30.96 ± 8.50||38.92 ± 9.73||46.95 ± 10.70||53.75 ± 12.45||27.93 ± 6.16||38.87 ± 7.52||48.03 ± 8.32||58.29 ± 8.76|
The standard deviation in the first row is calculated over all the synthesized faces; the standard deviation in the second row is calculated over the mean values of the 5 folds.
|Aged face 1||Aged face 2||Aged face 3||Aged face 1||Aged face 2||Aged face 3|
|verification confidence a||verification confidence a|
|Test face||(a)||94.61 ± 0.07||93.13 ± 0.24||91.22 ± 0.25||(b)||94.65 ± 0.11||92.46 ± 0.23||88.12 ± 0.46|
|Aged face 1||–||96.71 ± 0.02||95.45 ± 0.06||–||96.59 ± 0.04||94.18 ± 0.11|
|Aged face 2||–||–||96.71 ± 0.03||–||–||96.20 ± 0.06|
|verification confidence b||verification confidence b|
|Test face||94.61 ± 1.00||93.13 ± 1.68||91.22 ± 2.55||94.65 ± 0.95||92.46 ± 1.87||88.12 ± 3.30|
|Aged face 1||–||96.71 ± 0.29||95.45 ± 0.79||–||96.59 ± 0.27||94.18 ± 1.14|
|Aged face 2||–||–||96.71 ± 0.26||–||–||96.20 ± 0.40|
|verification rate (threshold = 73.98, FAR = 1e-5)||verification rate (threshold = 73.98, FAR = 1e-5)|
|Test face||100 ± 0 %||100 ± 0 %||99.98 ± 0.02 %||100 ± 0 %||100 ± 0 %||99.88 ± 0.07 %|
a The standard deviation is calculated over the mean values of the 5 folds.
b The standard deviation is calculated over all the synthesized faces.
Experiment II-E: Comparison to Prior Work. To compare with prior work, we conduct testing on the FG-NET and MORPH databases. These prior studies signify the state of the art. In addition, one of the most popular mobile aging applications, i.e., AgingBooth, and the online aging tool Face of the future are also compared. Fig. 16 displays some example faces. As can be seen, Face of the future and AgingBooth follow the prototyping-based method, where an identical aging mask is indiscriminately applied to all the given faces, as most aging apps do. While the concept of such methods is straightforward, the age-progressed faces are not photorealistic. Regarding the published works in the literature, ghosting artifacts are unavoidable for the parametric method and the dictionary reconstruction based solutions. Technological advancements can be observed in the deep generative models, whereas they only focus on the cropped facial area, and the age-progressed faces lack necessary aging details. In a further experiment, we collect 138 paired images of 54 individuals from the published papers, and invite 10 human observers to evaluate which age-progressed face is better in pairwise comparison. Among the 1,380 votes, 71.74% prefer the proposed method, 19.28% favor the prior work, and 8.98% indicate that they are about the same. Besides, the proposed method does not require burdensome preprocessing as previous works do; it only needs 2 landmarks for pupil alignment. In summary, the proposed method outperforms these counterparts.
|Aged face 1||Aged face 2||Aged face 3||Aged face 1||Aged face 2||Aged face 3|
|Estimated age (yrs old)||(a)||36.74 ± 9.49||48.95 ± 8.71||57.01 ± 10.11||(b)||42.32 ± 8.21||51.45 ± 8.02||59.43 ± 7.75|
|Verification confidence||95.51 ± 0.70||91.52 ± 2.47||87.43 ± 4.27||94.00 ± 1.10||91.13 ± 2.05||86.94 ± 3.43|
This study presents an effective solution to aging accuracy and identity preservation, and proposes a novel GAN-based method. It exploits a compound training critic that integrates a simple pixel-level penalty, an age-related GAN loss achieving age transformation, and an individual-dependent critic keeping the identity information stable. To generate detailed signs of aging, a pyramidal discriminator architecture is designed to estimate high-level face representations in a finer way. An adversarial learning scheme is further presented to simultaneously train a single generator and multiple parallel discriminators, enabling the model to generate smooth, continuous face aging sequences. Extensive experiments are conducted on three datasets, and the proposed method is shown to be effective in generating diverse face samples. Quantitative evaluations with a COTS face recognition system show that the target age distributions are accurately recovered, and that 99.88% and 99.98% of the age-progressed faces can be correctly verified at a FAR of 1e-5 after age transformations with elapsed times of approximately 28.61 years on MORPH and 23.09 years on CACD.
The proposed approach achieves more accurate, more reliable, and more photorealistic aging effects than the state of the art, but it does have some limitations. On the one hand, we primarily consider the general aging procedure and the facial properties that are inextricably bound to identity. There are additional covariates of interest that could largely influence face aging but cannot be taken into account, e.g., health condition, lifestyle, and working environment, mainly due to the inaccessibility of such information. On the other hand, a solution to adult aging is provided, while child growth receives less attention. A shortage of publicly available longitudinal face datasets of children is partly responsible for this; another reason is that the identity features of younger individuals are less stable. As remarked in recent NIST findings on human face recognition, children are not easy to recognize, which might make their aged renderings questionable. Both of these unsolved issues could be major directions for future work.
-  “A life revealed.” http://www.nationalgeographic.com/magazine/2002/04/afghan-girl-revealed/.
-  H. Heafner, “Age-progression technology and its application to law enforcement,” Proc. SPIE 2645, 24th AIPR Workshop on Tools and Techniques for Modeling and Simulation, Feb. 1996.
-  J.B. Pittenger and R.E. Shaw, “Aging Faces as Viscal-Elastic Events: Implications for a Theory of Nonrigid Shape Perception,” J. Experimental Psychology: Human Perception and Performance, vol. 1, no. 4, pp. 374-382, 1975.
-  J. T. Todd, L. S. Mark, R. E. Shaw, and J. B. Pittenger, “The perception of human growth,” Scientific American, vol. 242, no. 2, pp. 132-144, 1980.
-  Y. Wu, N. M. Thalmann, and D. Thalmann, “A Plastic-Visco-Elastic Model for Wrinkles in Facial Animation and Skin Aging,” Proc. Second Pacific Conf. Fundamentals of Computer Graphics, pp. 201-213, 1994.
-  J. Wang, Y. Shang, G. Su, and X. Lin, “Age simulation for face recognition,” Proc. IEEE International Conference on Pattern Recognition, vol. 3, pp. 913-916, Aug. 2006.
-  N. Ramanathan and R. Chellappa, “Modeling shape and textural variations in aging faces,” Proc. IEEE International Conference on Automatic Face and Gesture Recognition, 2008, pp. 1-8.
-  A. Lanitis, “Evaluating the performance of face-aging algorithms,” in Proc. Int. Conf. Autom. Face Gesture Recog., 2008, pp. 1-6.
-  A. Lanitis, “Comparative Evaluation of Automatic Age-Progression Methodologies,” EURASIP J. Advances in Signal Processing, vol. 8, no. 2, pp. 1-10, Jan. 2008.
-  Y. Fu, G. Guo, and T. S. Huang, “Age synthesis and estimation via faces: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, pp. 1955-1976, Nov. 2010.
-  U. Park, Y. Tong, and A. K. Jain, “Age-invariant face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 947-954, May 2010.
-  J. Suo, S. Zhu, S. Shan, and X. Chen, “A compositional and dynamic model for face aging,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 385-401, March 2010.
-  J. Suo, X. Chen, S. Shan, W. Gao and Q. Dai, “A Concatenational Graph Evolution Aging Model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2083-2096, Nov. 2012.
-  Y. Wang, Z. Zhang, W. Li, and F. Jiang, “Combining Tensor Space Analysis and Active Appearance Models for Aging Effect Simulation on Face Images,” IEEE Transactions on Systems, Man, and Cybernetics. Part B, Cybernetics, vol. 42, no. 4, pp. 1107-1118, August 2012.
-  I. Kemelmacher-Shlizerman, S. Suwajanakorn, and S. M. Seitz, “Illumination-aware Age Progression,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 3334-3341, 2014.
-  H. Yang, D. Huang, Y. Wang, H. Wang, and Y. Tang, “Face aging effect simulation using hidden factor analysis joint sparse representation,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2493-2507, Jun. 2016.
-  X. Shu, J. Tang, H. Lai, L. Liu, and S. Yan, “Personalized age progression with aging dictionary,” Proc. IEEE International Conference on Computer Vision, pp. 3970-3978, Dec. 2015.
-  H. Yang, D. Huang, Y. Wang, and A. K. Jain, “Learning Face Age Progression: A Pyramid Architecture of GANs,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 31-39, 2018.
-  C. N. Duong, K. Luu, K. G. Quach, and T. D. Bui, “Longitudinal face modeling via temporal deep restricted Boltzmann machines,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 5772-5780, Jun. 2016.
-  W. Wang, Z. Cui, Y. Yan, J. Feng, S. Yan, X. Shu, and N. Sebe, “Recurrent face aging,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 2378-2386, Jun. 2016.
-  Z. Zhang, Y. Song, and H. Qi, “Age progression/regression by conditional adversarial autoencoder,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 5810-5818, Jul. 2017.
-  C. N. Duong, K. G. Quach, K. Luu, N. Le, and M. Savvides, “Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition,” Proc. IEEE International Conference on Computer Vision, pp. 3755-3763, Oct. 2017.
-  S. Liu, Y. Sun, W. Wang, R. Bao, D. Zhu, and S. Yan, “Face aging with contextual generative adversarial nets,” Proc. ACM Multimedia, pp. 82-90, Oct. 2017.
-  The FG-NET aging database. http://www.fgnet.rsunit.com/.
-  K. Ricanek, and T. Tesafaye, “Morph: A Longitudinal Image Database of Normal Adult Age-Progression,” Proc. IEEE International Conf. Automatic Face and Gesture Recognition, pp. 341-345, 2006.
-  B.-C. Chen, C.-S. Chen, and W. H. Hsu, “Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset,” IEEE Transactions on Multimedia, vol. 17, no. 6, pp. 804-815, 2015.
-  L. Best-Rowden and A. K. Jain, “Longitudinal study of automatic face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 1, pp. 148-162, Jan. 2018.
-  D. Deb, L. Best-Rowden, and A. K. Jain, “Face Recognition Performance Under Aging,” in CVPR, Workshop on Biometrics, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Proc. Advances in Neural Information Processing Systems, pp. 2672-2680, Dec. 2014.
-  M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” arXiv preprint arXiv:1701.07875, 2017.
-  G. Qi, “Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities,” arXiv preprint arXiv:1701.06264, 2017.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley, “Least squares generative adversarial networks,” Proc. IEEE International Conference on Computer Vision, pp. 2794-2802, Oct. 2017.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” Proc. International Conference on Machine Learning, pp. 1060-1069, Jun. 2016.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” Proc. Advances in Neural Information Processing Systems, pp. 2234-2242, 2016.
-  P. Isola, J. Zhu, T. Zhou, and A. Efros, “Image-to-Image translation with conditional adversarial networks,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 1125-1134, Jul. 2017.
-  Y. Taigman, A. Polyak, and L. Wolf, “Unsupervised cross-domain image generation,” arXiv preprint arXiv:1611.02200, 2016.
-  A. Dosovitskiy and T. Brox, “Generating images with perceptual similarity metrics based on deep networks,” Proc. Advances in Neural Information Processing Systems, pp. 658-666, Dec. 2016.
-  J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” Proc. European Conference on Computer Vision, pp. 694-711, Oct. 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pp. 770-778, Jun. 2016.
-  O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” Proc. British Machine Vision Conference, vol. 1, p. 6, Sept. 2015.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” Proc. IEEE International Conference on Computer Vision, pp. 2223-2232, Oct. 2017.
-  M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440, 2015.
-  A. Odena, C. Olah, and J. Shlens, “Conditional Image Synthesis with Auxiliary Classifier GANs,” Proc. International Conference on Machine Learning, pp. 2642-2651, 2017.
-  Z. Zhou, H. Cai, S. Rong, Y. Song, K. Ren, J. Wang, W. Zhang, and Y. Yu, “Activation Maximization Generative Adversarial Nets,” Proc. International Conference on Learning Representations, Apr. 2018.
-  J. B. Tenenbaum, V. D. Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, 290(5500): 2319-2323, 2000.
-  Face++ Research Toolkit. Megvii Inc. www.faceplusplus.com.
-  AgingBooth. PiVi & Co. https://itunes.apple.com/us/app/agingbooth/id357467791?mt=8.
-  Face of the future. Computer Science Dept. at Aberystwyth University. http://cherry.dcs.aber.ac.uk/Transformer/index.html.
-  L. Best-Rowden, Y. Hoole, and A. K. Jain, “Automatic Recognition of Newborns, Infants, and Toddlers: A Longitudinal Evaluation,” Proc. of the 15th International Conf. of the Biometrics Special Interest Group (BIOSIG), Darmstadt, September 2016.
-  P. Grother, and M. Ngan, “FRVT: Performance of face identification algorithms,” NIST Interagency Report 8009, May 2014.