Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition

09/02/2018 · by Jian Zhao, et al. · Panasonic Corporation of North America · National University of Singapore · Nanyang Technological University

Despite the remarkable progress in face recognition technologies, reliably recognizing faces across ages remains a major challenge. The appearance of a human face changes substantially over time, resulting in significant intra-class variations. As opposed to current techniques for age-invariant face recognition, which either directly extract age-invariant features for recognition or first synthesize a face that matches the target age before feature extraction, we argue that it is more desirable to perform both tasks jointly so that they can leverage each other. To this end, we propose a deep Age-Invariant Model (AIM) for face recognition in the wild with three distinct novelties. First, AIM presents a novel unified deep architecture jointly performing cross-age face synthesis and recognition in a mutual boosting way. Second, AIM achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, avoiding the requirement of paired data and the true age of testing samples. Third, we develop effective and novel training strategies for learning the whole deep architecture end-to-end, which generates powerful age-invariant face representations explicitly disentangled from the age variation. Moreover, we propose a new large-scale Cross-Age Face Recognition (CAFR) benchmark dataset to facilitate existing efforts and push the frontiers of age-invariant face recognition research. Extensive experiments on both our CAFR and several other cross-age datasets (MORPH, CACD and FG-NET) demonstrate the superiority of the proposed AIM model over the state-of-the-arts. Benchmarking our model on one of the most popular unconstrained face recognition datasets, IJB-C, additionally verifies the promising generalizability of AIM in recognizing faces in the wild.


1 Introduction

Figure 1: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition. Col. 1 & 8: Input faces of distinct identities with various challenging factors (e.g., neutral, illumination, expression, pose and occlusion). Col. 2 & 7: Synthesized younger faces by our proposed AIM. Col. 3 & 6: Synthesized older faces by our proposed AIM. Col. 4 & 5: Learned facial representations by our proposed AIM, which are explicitly disentangled from the age variation. AIM can learn age-invariant representations and synthesize photorealistic cross-age faces effectively. Best viewed in color.

Face recognition is one of the most widely studied topics in computer vision and artificial intelligence. Recently, some approaches claim to have achieved [42, 8, 21, 56] or even surpassed [37, 43, 55] human performance on several benchmarks.

Despite the exciting progress, age variations still form a major bottleneck for many practical applications. For example, law enforcement scenarios such as finding missing children after years, identifying wanted fugitives based on mug shots, and verifying passports usually involve recognizing faces across ages and/or synthesizing photorealistic age regressed/progressed¹ face images. These tasks are extremely challenging for several reasons: 1) Human face rejuvenation/aging is a complex process whose patterns differ from one individual to another. Both intrinsic factors (like heredity, gender and ethnicity) and extrinsic factors (like environment and lifestyle) affect the aging process and lead to significant intra-class variations. 2) Facial shapes and textures change dramatically over time, making it difficult to learn age-invariant patterns. 3) Current learning based cross-age face recognition models are limited by existing cross-age databases [1, 35, 6, 34, 28, 54], whose small size, narrow elapse per subject, and unbalanced genders, ethnicities and age spans constrain what can be learned. As a result, the performance of most face recognition models degrades substantially when moving from general recognition on faces of (almost) the same age to cross-age face recognition [6]. In this work, we aim to improve automatic models for recognizing unconstrained faces with large age variations.

¹Face regression (a.k.a. face rejuvenation) and face progression (a.k.a. face aging) refer to rendering the natural rejuvenation or aging effect for a given face, respectively.

According to recent studies [13, 48], face images of different individuals usually share common aging characteristics (e.g., wrinkles), while face images of the same individual contain intrinsic features that are relatively stable across ages. Facial representations of a person in the latent space can hence be decomposed into an age-specific component, which reflects the aging effect, and an identity-specific component, which preserves intrinsic identity information. The latter is invariant to age variations and, when achievable, ideal for cross-age face recognition. This finding inspires us to develop a novel unified deep neural network, termed the Age-Invariant Model (AIM). AIM jointly learns, end-to-end, disentangled identity representations that are invariant to age and photorealistic cross-age face synthesis that highlights important latent representations among the disentangled ones; the two tasks thus mutually boost each other to achieve age-invariant face recognition. AIM takes as input face images of arbitrary ages with other potential distracting factors such as varying illumination, expression, pose and occlusion. It outputs facial representations that are invariant to age variations while remaining discriminative across identities. As shown in Fig. 1, AIM can learn age-invariant representations and effectively synthesize natural age regressed/progressed faces.

In particular, AIM extends from an auto-encoder based Generative Adversarial Network (GAN) and includes a disentangled Representation Learning sub-Net (RLN) and a Face Synthesis sub-Net (FSN) for age-invariant face recognition. RLN consists of an encoder and a discriminator that compete with each other to learn discriminative and age-invariant representations. It introduces cross-age domain adversarial training to promote encoded features that are indistinguishable w.r.t. the shift between multi-age domains, and cross-entropy regularization with a label smoothing strategy to constrain cross-age representations to have ambiguous separability. The discriminator incorporates dual agents to encourage the representations to be uniformly distributed to smooth the age transformation while preserving identity information. The representations are then concatenated with a continuous age condition code to synthesize age regressed/progressed face images, such that the learned representations are explicitly disentangled from age variations. FSN consists of a decoder and a local-patch based discriminator that compete with each other to synthesize photorealistic cross-age face images. FSN uses an attention mechanism to guarantee robustness to large background complexity and illumination variance. The discriminator incorporates dual agents to add realism to synthesized cross-age faces while forcing the generated faces to exhibit desirable rejuvenation/aging effects.

Moreover, we propose a new large-scale Cross-Age Face Recognition (CAFR) benchmark dataset to facilitate existing efforts and future research on age-invariant face recognition. CAFR contains 1,446,500 face images from 25,000 subjects annotated with age, identity, gender, race and landmark labels. Extensive experiments on both our CAFR and other standard cross-age datasets (MORPH [34], CACD [6] and FG-NET [1]) demonstrate the superiority of AIM over the state-of-the-arts. Benchmarking AIM on one of the most popular unconstrained face recognition datasets IJB-C [27] additionally verifies its promising generalizability in recognizing faces in the wild. Our code and trained models are available at https://github.com/ZhaoJ9014/High_Performance_Face_Recognition/tree/master/src/Look%20Across%20Elapse-%20Disentangled%20Representation%20Learning%20and%20Photorealistic%20Cross-Age%20Face%20Synthesis%20for%20Age-Invariant%20Face%20Recognition.TensorFlow. Our dataset and online demo will be released soon.

Our contributions are summarized as follows.

  • We propose a novel deep architecture unifying cross-age face synthesis and recognition in a mutual boosting way.

  • We develop effective end-to-end training strategies for the whole deep architecture to generate powerful age-invariant facial representations explicitly disentangled from the age variations.

  • The proposed model achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, avoiding the requirement of paired data and true age of testing samples.

  • We propose a new large-scale benchmark dataset CAFR to advance the frontiers of age-invariant face recognition research.

2 Related Work

2.1 Age-Invariant Representation Learning

Conventional approaches often leverage robust local descriptors [31, 13, 40, 14, 22] and metric learning [47, 25, 7] to tackle age variance. For instance, [31] develop a Bayesian classifier to recognize age differences and perform face verification across age progression. [13] propose Hidden Factor Analysis (HFA) for age-invariant face recognition that separates aging variations from identity-specific features. [47] improve the performance via distance metric learning. [25] propose the Gradient Orientation Pyramid (GOP) for cross-age face verification. In contrast, deep learning models often handle age variance using a single age-agnostic model or several age-specific models with pooling and specific loss functions [48, 57, 52, 24, 45]. For instance, [9] propose an enforced softmax optimization strategy to learn effective and compact deep facial representations with reduced intra-class variance and enlarged inter-class distance. [48] propose a Latent Factor guided Convolutional Neural Network (LF-CNN) model to learn age-invariant deep features. [57] propose an Age Estimation guided CNN (AE-CNN) model to separate aging variations from identity-specific features. [45] propose an Orthogonal Embedding CNN (OE-CNN) model that decomposes deep facial representations into two orthogonal components representing age- and identity-specific features.

2.2 Cross-Age Face Synthesis

Previous methods can be roughly divided into physical modeling based and prototype based. The former approaches model the biological patterns and physical mechanisms of aging, including muscles [41], wrinkles [33], and facial structure [32]. However, they usually require massive annotated cross-age face data with a long elapse per subject, which is hard to collect, and they are computationally expensive. Prototype-based approaches [3, 18] often divide faces into groups by age and select the average face of each group as the prototype. The differences between the prototypes of two age groups are then considered as the aging pattern. However, the aged face generated from the averaged prototype may lose personality information. Most subsequent approaches [46, 53] are data-driven and do not rely much on biological prior knowledge; the aging patterns are learned from training data. Though they improve the results, these methods suffer from ghosting artifacts on the synthesized faces. More recently, deep generative networks have been exploited. For instance, [44] propose a smooth face aging process between neighboring groups by modeling the intermediate transition states with a Recurrent Neural Network (RNN). [54] propose a Conditional Adversarial Auto-Encoder (CAAE) and achieve face age regression/progression in a holistic framework. [58] propose a Conditional Multi-Adversarial Auto-Encoder with Ordinal Regression (CMAAE-OR) to predict facial rejuvenation and aging. [39] propose Dual conditional GANs (Dual cGANs), where the primal cGAN transforms a face image to other ages based on the age condition, while the dual one learns to invert the task.

Our model differs from them in the following aspects: 1) AIM jointly performs cross-age face synthesis and recognition end-to-end, allowing them to mutually boost each other for addressing large age variance in unconstrained face recognition. 2) AIM achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, and does not require paired data or the true age of testing samples. 3) AIM generates powerful age-invariant face representations explicitly disentangled from age variations through cross-age domain adversarial training and cross-entropy regularization with a label smoothing strategy.

3 Age-Invariant Model

Figure 2: Age-Invariant Model (AIM) for face recognition in the wild. AIM extends from an auto-encoder based GAN and includes a disentangled Representation Learning sub-Net (RLN) and a Face Synthesis sub-Net (FSN) that are jointly learned end-to-end. RLN consists of an encoder and a discriminator that compete with each other to learn discriminative and robust facial representations disentangled from age variance. It is augmented by cross-age domain adversarial training and cross-entropy regularization with a label smoothing strategy. FSN consists of a decoder and a local-patch based discriminator that compete with each other to achieve continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties. It introduces an attention mechanism to guarantee robustness to large background complexity and illumination variance. Note that AIM requires neither paired training data nor the true age of testing samples. Best viewed in color.

As shown in Fig. 2, the proposed Age-Invariant Model (AIM) extends from an auto-encoder based GAN, and consists of a disentangled Representation Learning sub-Net (RLN) and a Face Synthesis sub-Net (FSN) that jointly learn discriminative and robust facial representations disentangled from age variance and perform attention-based face rejuvenation/aging end-to-end. We now detail each component.

3.1 Disentangled Representation Learning

Matching face images across ages is demanded in many real-world applications. It is mainly challenged by the variations of an individual at different ages (i.e., large intra-class variations) caused by aging (e.g., facial shape and texture changes), and by the inevitable entanglement of unrelated (statistically independent) components in the deep features extracted by a general-purpose face recognition model. Large intra-class variations usually result in erroneous cross-age face recognition, and entangled facial representations weaken the model's robustness in recognizing faces with age variations. We propose a GAN-like Representation Learning sub-Net (RLN) to learn discriminative and robust identity-specific facial representations disentangled from age variance, as illustrated in Fig. 2.

In particular, RLN takes the encoder $Enc$ (with learnable parameters $\theta_{Enc}$) as the generator $Enc: \mathbb{R}^{H \times W \times C} \rightarrow \mathbb{R}^{N}$ for facial representation learning, where $H$, $W$, $C$ and $N$ denote the input image height, width, channel number and the dimensionality of the encoded feature $\bar{x}$, respectively. $Enc$ preserves the high-level identity-specific information of the input face image through several carefully designed regularizations. We further concatenate $\bar{x}$ with a continuous age condition code $c$ to synthesize age regressed/progressed face images, such that the learned representations are explicitly disentangled from age variations.

Formally, denote the input RGB face image as $x$ and the learned facial representation as $\bar{x}$. Then

$$\bar{x} = Enc(x; \theta_{Enc}). \qquad (1)$$

The key requirements for $\bar{x}$ include three aspects. 1) The learned representation should be invariant to age variations and also well preserve the identity-specific component. 2) It should be barely possible for an algorithm to identify the domain of origin of an observation, regardless of the underlying gap between multi-age domains. 3) $\bar{x}$ should obey a uniform distribution to smooth the age transformation.

To this end, we propose to learn $\theta_{Enc}$ by minimizing the following composite losses:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{cada} + \lambda_2 \mathcal{L}_{cer} + \lambda_3 \mathcal{L}_{adv}^{\bar{x}} + \lambda_4 \mathcal{L}_{ip} + \lambda_5 \mathcal{L}_{adv}^{\tilde{x}} + \lambda_6 \mathcal{L}_{ae} + \lambda_7 \mathcal{L}_{mc} + \lambda_8 \mathcal{L}_{tv} + \lambda_9 \mathcal{L}_{att}, \qquad (2)$$

where $\mathcal{L}_{cada}$ is the cross-age domain adversarial loss for facilitating age-invariant representation learning via domain adaptation, $\mathcal{L}_{cer}$ is the cross-entropy regularization loss for constraining cross-age representations to have ambiguous separability, $\mathcal{L}_{adv}^{\bar{x}}$ is the adversarial loss for imposing the uniform distribution on $\bar{x}$, $\mathcal{L}_{ip}$ is the identity preserving loss for preserving identity information, $\mathcal{L}_{adv}^{\tilde{x}}$ is the adversarial loss for adding realism to the synthesized images and alleviating artifacts, $\mathcal{L}_{ae}$ is the age estimation loss for forcing the synthesized faces to exhibit the desired rejuvenation/aging effect, $\mathcal{L}_{mc}$ is the manifold consistency loss for encouraging input-output space manifold consistency, $\mathcal{L}_{tv}$ is the total variation loss for reducing spiky artifacts, $\mathcal{L}_{att}$ is the attention loss for facilitating robustness enhancement via an attention mechanism, and $\lambda_1$–$\lambda_9$ are weighting parameters among the different losses.

In order to enhance the age-invariant representation learning capacity, we adopt $\mathcal{L}_{cada}$ to promote the emergence of features encoded by $Enc$ that are indistinguishable w.r.t. the shift between multi-age domains, defined as

$$\mathcal{L}_{cada} = -\mathbb{E}_{x}\big[\log D_d(\bar{x}; \theta_{D_d})_{d_x}\big], \qquad (3)$$

where $\theta_{D_d}$ denotes the learnable parameters of the domain classifier $D_d$, and $d_x$ indicates which age domain $x$ is from. Minimizing $\mathcal{L}_{cada}$ reduces the domain discrepancy and helps the generator achieve similar facial representations across different age domains, even when training samples from a domain are limited. Such adapted representations are obtained by augmenting the encoder with a few standard layers as the domain classifier $D_d$, and a new gradient reversal layer to reverse the gradient while optimizing the encoder (i.e., the gradient reverse operator in Fig. 2), as inspired by [12].
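The gradient reverse operator can be sketched framework-agnostically: the layer is the identity in the forward pass and negates (and optionally scales) the gradient in the backward pass, so minimizing the domain classifier's loss simultaneously pushes the encoder toward domain-confusing features. A minimal sketch; the scaling factor `lam` is our illustrative assumption, not a value from the paper:

```python
import numpy as np

class GradientReversal:
    """Identity forward; gradient multiplied by -lam on the way back.
    Inserting this between the encoder and the domain classifier turns the
    classifier's minimization into adversarial training for the encoder."""

    def __init__(self, lam=1.0):
        self.lam = lam  # illustrative trade-off knob (our assumption)

    def forward(self, features):
        # Encoded features pass through unchanged to the domain classifier.
        return features

    def backward(self, grad_from_classifier):
        # Reverse the gradient flowing back into the encoder.
        return -self.lam * grad_from_classifier

grl = GradientReversal(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
assert np.allclose(grl.forward(feats), feats)
assert np.allclose(grl.backward(np.ones(3)), [-0.5, -0.5, -0.5])
```

In an actual TensorFlow implementation the same effect is typically obtained with a custom-gradient op; the sketch only isolates the sign flip that makes the encoder maximize domain confusion.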

If $\mathcal{L}_{cada}$ is used alone, the results tend to be sub-optimal, because searching for a local minimum of $\mathcal{L}_{cada}$ may follow a path that resides outside the manifold of desired cross-age representations with ambiguous separability. Thus, we combine $\mathcal{L}_{cada}$ with $\mathcal{L}_{cer}$ to ensure the search resides in that manifold and produces age-invariant facial representations, where $\mathcal{L}_{cer}$ is defined as

$$\mathcal{L}_{cer} = -\mathbb{E}_{x}\Big[\sum_{d} \tilde{d}_d \log R(\bar{x}; \theta_{R})_d\Big], \qquad (4)$$

where $\theta_{R}$ denotes the learnable parameters of the regularizer $R$, and $\tilde{d}$ denotes the smoothed domain indicator.
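The smoothed domain indicator can be sketched as a standard label smoothing step: the one-hot domain label is blended toward the uniform distribution by a factor `eps` (the exact smoothing value is not given in this extract, so `eps` here is an assumption):

```python
import numpy as np

def smooth_domain_labels(domain_idx, num_domains=7, eps=0.1):
    """Soften a one-hot domain indicator toward the uniform distribution,
    so the regularizer pulls cross-age representations toward ambiguous
    separability instead of confident domain predictions."""
    one_hot = np.eye(num_domains)[domain_idx]
    return (1.0 - eps) * one_hot + eps / num_domains

labels = smooth_domain_labels(domain_idx=2)
assert np.isclose(labels.sum(), 1.0)          # still a valid distribution
assert np.isclose(labels[2], 0.9 + 0.1 / 7)   # true domain stays dominant
```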

$\mathcal{L}_{adv}^{\bar{x}}$ is introduced to impose a prior distribution (e.g., a uniform distribution) on $\bar{x}$ to evenly populate the latent space with no apparent "holes", such that smooth age transformation can be achieved:

$$\mathcal{L}_{adv}^{\bar{x}} = -\mathbb{E}_{z \sim U}\big[\log D_{\bar{x}}(z; \theta_{D_{\bar{x}}})\big] - \mathbb{E}_{x}\big[\log\big(1 - D_{\bar{x}}(\bar{x}; \theta_{D_{\bar{x}}})\big)\big], \qquad (5)$$

where $\theta_{D_{\bar{x}}}$ denotes the learnable parameters of the discriminator $D_{\bar{x}}$, $z$ denotes a random sample drawn from the uniform distribution $U$, and the output of $D_{\bar{x}}$ is the binary indicator of whether its input comes from the prior or from the encoder.

To facilitate this process, we leverage a Multi-Layer Perceptron (MLP) as the discriminator $D_{\bar{x}}$, which is simple enough to avoid typical GAN training tricks. We further augment $D_{\bar{x}}$ with an auxiliary agent to preserve identity information:

$$\mathcal{L}_{ip} = -\mathbb{E}_{x}\big[\log D_{\bar{x}}(y_{id} \mid \bar{x})\big], \qquad (6)$$

where $y_{id}$ denotes the identity ground truth.
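The two agents of the discriminator amount to standard losses: a binary cross-entropy telling prior samples from encoded representations, and a softmax cross-entropy over identities. A sketch under the usual GAN conventions; the scores and logits below are purely illustrative, not from the paper:

```python
import numpy as np

def bce(p, is_real):
    """Binary cross-entropy for the real/fake agent (p = D's probability)."""
    return -(is_real * np.log(p) + (1 - is_real) * np.log(1 - p))

def identity_ce(logits, identity):
    """Softmax cross-entropy for the identity-preserving agent."""
    z = logits - logits.max()               # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[identity]

# D should score a uniform-prior sample near 1 and an encoded representation
# near 0; the encoder is then trained with the flipped target so that the
# representation distribution gradually matches the prior.
d_loss = bce(0.9, 1) + bce(0.2, 0)   # discriminator update
enc_loss = bce(0.2, 1)               # encoder (generator) update
id_loss = identity_ce(np.array([3.0, 0.1, -1.2]), identity=0)
assert d_loss > 0 and enc_loss > 0 and id_loss < 0.1
```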

3.2 Attention-based Face Rejuvenation/Aging

Photorealistic cross-age face images are important for face recognition with large age variance. A natural scheme is to generate reference age regressed/progressed faces from face images of arbitrary ages, either to match the target age before feature extraction or to serve as augmented data for learning discriminative models. We therefore propose a GAN-like Face Synthesis sub-Net (FSN) to learn a synthesis function that achieves both face rejuvenation and aging in a holistic, end-to-end manner, as illustrated in Fig. 2.

In particular, FSN leverages the decoder $Dec$ (with learnable parameters $\theta_{Dec}$) as the generator $Dec: \mathbb{R}^{N + N_c} \rightarrow \mathbb{R}^{H \times W \times C}$ for cross-age face synthesis, where $N_c$ denotes the dimensionality of the age condition code $c$ concatenated with $\bar{x}$. The synthesized results present natural rejuvenation/aging effects with robustness to large background complexity and bad lighting conditions, thanks to the carefully designed learning scheme.

Formally, denote the age condition code as $c$ and the synthesized face image as $\tilde{x}$. Then

$$\tilde{x} = Dec(\bar{x} \oplus c; \theta_{Dec}), \qquad (7)$$

where $\oplus$ denotes concatenation.

The key requirements for $\tilde{x}$ include two aspects. 1) The synthesized face image should visually resemble a real one and exhibit the desired rejuvenation/aging effect. 2) Attention should be paid to the most salient regions of the image that are responsible for synthesizing the novel aging phase, while keeping the remaining elements, such as glasses, hats, jewelry and background, untouched.

To this end, we propose to learn $\theta_{Dec}$ by minimizing the following composite losses:

$$\mathcal{L}_{FSN} = \lambda_5 \mathcal{L}_{adv}^{\tilde{x}} + \lambda_6 \mathcal{L}_{ae} + \lambda_7 \mathcal{L}_{mc} + \lambda_8 \mathcal{L}_{tv} + \lambda_9 \mathcal{L}_{att}, \qquad (8)$$

where $\lambda_5$–$\lambda_9$ are weighting parameters among the different losses.

$\mathcal{L}_{adv}^{\tilde{x}}$ is introduced to push the synthesized image to reside in the manifold of photorealistic age regressed/progressed face images, prevent blurring, and produce visually pleasing results:

$$\mathcal{L}_{adv}^{\tilde{x}} = -\mathbb{E}\big[\log D_{\tilde{x}}(x^{r}, c; \theta_{D_{\tilde{x}}})\big] - \mathbb{E}\big[\log\big(1 - D_{\tilde{x}}(\tilde{x}, c; \theta_{D_{\tilde{x}}})\big)\big], \qquad (9)$$

where $\theta_{D_{\tilde{x}}}$ denotes the learnable parameters of the discriminator $D_{\tilde{x}}$, $c$ denotes the age condition code used to transform $x$ into the target age phase, and $x^{r}$ denotes a real face image of (almost) the same age as $\tilde{x}$ (not necessarily belonging to the same person).

To facilitate this process, we modify a CNN backbone into a local-patch based discriminator to prevent $Dec$ from over-emphasizing certain image features to fool the current discriminator network. We further augment $D_{\tilde{x}}$ with an auxiliary agent to preserve the desired rejuvenation/aging effect. In this way, $Dec$ not only learns to render photorealistic samples but also learns to satisfy the target age encoded by $c$:

$$\mathcal{L}_{ae} = \mathbb{E}\big[\|\hat{c}_{\tilde{x}} - c\|_2^2 + \|\hat{c}_{x^{r}} - c\|_2^2\big], \qquad (10)$$

where $\hat{c}_{\tilde{x}}$ and $\hat{c}_{x^{r}}$ denote the estimated ages from $\tilde{x}$ and $x^{r}$, respectively.

$\mathcal{L}_{mc}$ is introduced to enforce manifold consistency between the input and output spaces, defined as $\mathcal{L}_{mc} = \frac{1}{S}\|\tilde{x} - x\|_2^2$, where $S$ is the size of $x$. $\mathcal{L}_{tv}$ is introduced as a regularization term on the synthesized results to reduce spiky artifacts:

$$\mathcal{L}_{tv} = \sum_{h,w}\Big[\big(\tilde{x}_{h+1,w} - \tilde{x}_{h,w}\big)^2 + \big(\tilde{x}_{h,w+1} - \tilde{x}_{h,w}\big)^2\Big]. \qquad (11)$$
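Both pixel-level regularizers are simple to state in plain NumPy. This sketch assumes a squared-L2 consistency term normalized by image size and a squared-difference total variation; the exact norms are elided in this extract, so treat the details as assumptions:

```python
import numpy as np

def manifold_consistency(x, x_syn):
    """Mean squared pixel difference between input and synthesized face."""
    return np.sum((x_syn - x) ** 2) / x.size

def total_variation(x_syn):
    """Penalize differences between neighboring pixels to reduce spiky
    artifacts in the synthesized image."""
    dh = x_syn[1:, :] - x_syn[:-1, :]   # vertical neighbors
    dw = x_syn[:, 1:] - x_syn[:, :-1]   # horizontal neighbors
    return np.sum(dh ** 2) + np.sum(dw ** 2)

x = np.zeros((4, 4))
x_syn = np.ones((4, 4))
assert np.isclose(manifold_consistency(x, x_syn), 1.0)
assert np.isclose(total_variation(x_syn), 0.0)  # constant image: no penalty
```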

In order to make the model focus on the most relevant features, we adopt $\mathcal{L}_{att}$ to facilitate robustness enhancement via an attention mechanism:

(12)

where $M$ denotes the attention score map, which serves as the guidance and attends to the most relevant regions during cross-age face synthesis.

The final synthesized result can be obtained by

$$\tilde{x} = M \odot F + (1 - M) \odot x, \qquad (13)$$

where $F$ denotes the feature map predicted by the last fractionally-strided convolution block and $\odot$ denotes element-wise multiplication.
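Eq. (13) amounts to an attention-weighted blend of the decoder's feature map with the input face, so unattended pixels (background, glasses, hats) are copied through untouched. A minimal sketch, assuming the attention map holds per-pixel scores in [0, 1] that select the regions to synthesize (the blending direction is our reading of the text):

```python
import numpy as np

def blend(attention, feature_map, x):
    """Attended regions come from the decoder's feature map; the rest of
    the image is taken directly from the input face."""
    return attention * feature_map + (1.0 - attention) * x

x = np.full((2, 2), 0.2)                 # input face pixels
feature_map = np.full((2, 2), 0.8)       # decoder prediction
attention = np.array([[1.0, 0.0],
                      [0.5, 0.0]])       # attention score map
out = blend(attention, feature_map, x)
assert np.isclose(out[0, 0], 0.8)  # fully attended -> synthesized
assert np.isclose(out[0, 1], 0.2)  # unattended -> input kept
assert np.isclose(out[1, 0], 0.5)  # soft blend of the two
```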

3.3 Training and Inference

The goal of AIM is to use sets of real targets to learn two GAN-like sub-nets that mutually boost each other and jointly accomplish age-invariant face recognition. Each separate loss serves as deep supervision within the hinged structure, benefiting network convergence. The overall objective function for AIM is

$$\min_{\theta_{Enc}, \theta_{Dec}} \max_{\theta_{D_{\bar{x}}}, \theta_{D_{\tilde{x}}}} \lambda_1 \mathcal{L}_{cada} + \lambda_2 \mathcal{L}_{cer} + \lambda_3 \mathcal{L}_{adv}^{\bar{x}} + \lambda_4 \mathcal{L}_{ip} + \lambda_5 \mathcal{L}_{adv}^{\tilde{x}} + \lambda_6 \mathcal{L}_{ae} + \lambda_7 \mathcal{L}_{mc} + \lambda_8 \mathcal{L}_{tv} + \lambda_9 \mathcal{L}_{att}. \qquad (14)$$

During testing, we simply feed the input face image and the desired age condition code into AIM to obtain the disentangled age-invariant representation from the encoder and the synthesized age regressed/progressed face image from the decoder. Example results are visualized in Fig. 1.

4 Cross-Age Face Recognition Benchmark

Figure 3: Cross-Age Face Recognition (CAFR) dataset. Best viewed in color.
Dataset # Images # Subjects # Images/Subject Age Span Average Age
FG-NET [1] 1,002 82 avg. 12.22 0-69 15.84
MORPH Album1 [34] 1,690 515 avg. 3.28 15-68 27.28
MORPH Album2 [34] 78,207 20,569 avg. 3.80 16-99 32.69
CACD [6] 163,446 2,000 avg. 81.72 16-62 38.03
IMDB-WIKI [36] 523,051 20,284 avg. 25.79 0-100 38.00
AgeDB [28] 16,488 568 avg. 29.03 1-101 50.30
CAF [45] 313,986 4,668 avg. 67.26 0-80 29.00
CAFR 1,446,500 25,000 avg. 57.86 0-99 28.23
Table 1: Statistics for publicly available cross-age face datasets.

In this section, we introduce a new large-scale Cross-Age Face Recognition (CAFR) benchmark dataset to push the frontiers of age-invariant face recognition research. It has several appealing properties. 1) It contains 1,446,500 face images from 25,000 subjects annotated with age, identity, gender, race and landmark labels, making it larger and more comprehensive than previous similar attempts [1, 34, 6, 36, 28, 45]. 2) The images in CAFR are collected from real-world scenarios, involving faces with various expressions, poses, occlusions and resolutions. 3) The backgrounds of images in CAFR are more complex and diverse than in previous datasets. Examples, and statistics on the number of images per age phase and the number of images per subject, are illustrated in Fig. 3 (a), (b) and (c), respectively.

4.1 Image Collection and Annotation

We select a subset of the celebrity name list of MS-Celeb-1M [15] for data collection based on the following considerations. 1) Each individual must have many cross-age face images available on the Internet for retrieval. 2) Both gender balance and racial diversity should be considered. Accordingly, we manually specify some keywords (such as name, face image, event, year, etc.) to ensure the accuracy and diversity of the returned results. Based on these specifications, corresponding cross-age face images are located through Internet searches over the Google and Bing image search engines. For each identified image, the corresponding URL is stored in a spreadsheet. Automated scraping software is used to download the cross-age imagery and store all relevant information (e.g., identity) in a database. Moreover, a pool of self-collected children's face images with age variations is constructed to augment and complement the Internet scraping results.

After curating the imagery, semi-automatic annotation is conducted in three steps. 1) Data cleaning. We perform face detection with an off-the-shelf algorithm [19] to filter out images without any faces, and manually remove duplicated images and false positives (i.e., faces that do not belong to the given subject). 2) Data annotation. We combine the prior information on identity and apply an off-the-shelf age estimator [36] and a landmark localization algorithm [20] to annotate the ground truths for age, identity, gender, race and landmarks. 3) Manual inspection. After annotation, all images and corresponding annotations are manually inspected to verify their correctness. Erroneous annotations are rectified by 7 well-informed analysts. The whole effort took around 2.5 months for 10 professional data annotators.

4.2 Dataset Splits and Statistics

In total, the CAFR dataset contains 1,446,500 face images from 25,000 subjects, i.e., 57.86 face images per subject on average. Statistical comparisons between CAFR and existing cross-age datasets are summarized in Tab. 1. CAFR is the largest and most comprehensive benchmark dataset for age-invariant face recognition to date. By random selection, we divide the data into 10 splits with pair-wise disjoint subject sets. Each split contains 2,500 subjects, and for each subject we randomly generate 5 genuine and 5 imposter pairs with various age gaps, resulting in 25,000 pairs per split. The remaining data are reserved for algorithm development and parameter selection. We suggest that evaluation systems report the average Accuracy (Acc), Equal Error Rate (EER), Area Under the Curve (AUC) and Receiver Operating Characteristic (ROC) curve over 10-fold cross validation.
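For reference, the per-split metrics can be computed from verification scores with a few lines of NumPy; this is a generic sketch of EER and (trapezoidal) ROC AUC, not code released with the dataset:

```python
import numpy as np

def eer_and_auc(scores, labels):
    """Equal Error Rate and ROC AUC from verification scores
    (label 1 = genuine pair, 0 = imposter pair)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels, dtype=float)[order]
    tpr = np.cumsum(labels) / labels.sum()            # true positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false positive rate
    f = np.concatenate([[0.0], fpr])                  # ROC starts at (0, 0)
    t = np.concatenate([[0.0], tpr])
    auc = np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2)   # trapezoidal rule
    i = np.argmin(np.abs(fpr - (1 - tpr)))            # FAR == FRR point
    return (fpr[i] + (1 - tpr[i])) / 2, auc

# Perfectly separated toy split: EER 0, AUC 1.
eer, auc = eer_and_auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
assert np.isclose(eer, 0.0) and np.isclose(auc, 1.0)
```

Averaging these metrics over the 10 splits yields the suggested 10-fold protocol.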

5 Experiments

We evaluate AIM qualitatively and quantitatively under various settings for face recognition in the wild. In particular, we evaluate age-invariant face recognition performance on the CAFR dataset proposed in this work, as well as the MORPH [34], CACD [6] and FG-NET [1] benchmark datasets. We also evaluate unconstrained face recognition results on the IJB-C benchmark dataset [27] to verify the generalizability of AIM.

Implementation Details

We apply the integrated Face Analytics Network (iFAN) [20] for face Region of Interest (RoI) extraction, 68-point landmark localization (if not provided) and alignment throughout the experiments. The sizes of the RGB image, the attention score map, the feature map and the synthesized face image are fixed to be identical, and their pixel values are normalized to [-1, 1]. The inputs to the local-patch based discriminator are non-overlapping local patches of fixed size, and the learned facial representation and the samples drawn from the prior distribution share the same fixed dimensionality. The age condition code is a 7-dimensional one-hot vector encoding the different age phases², based on which continuous face rejuvenation/aging can be achieved through interpolation during inference; its elements are likewise confined to [-1, 1], where -1 plays the role of 0, and smoothed labels are used for the cross-entropy regularization. The weighting parameters among the different losses are empirically fixed to small constants (ranging from 0.01 to 1.0). The encoder is initialized with the Light CNN-29 [50] architecture, with the linear classifier removed and the activation function of the last fully-connected layer replaced by the hyperbolic tangent. The decoder is initialized with 3 hidden fractionally-strided convolution layers activated with Rectified Linear Units (ReLU) [10], appended with one convolution layer activated with a sigmoid for attention score map prediction and one convolution layer activated with a scaled sigmoid for feature map prediction. The domain classifier and the regularizer are initialized with the same MLP architecture (learned separately), containing a hidden 256-way fully-connected layer activated with Leaky ReLU [26] and a final 7-way fully-connected layer. The discriminator on representations is initialized as an MLP containing a hidden 256-way fully-connected layer activated with Leaky ReLU, appended with a 1-way fully-connected layer activated by a sigmoid and an n-way fully-connected layer (n being the number of identities in the training data) as the dual agents. The local-patch based discriminator is initialized with a VGG-16 [38] architecture, with the linear classifier removed and a new 1-way fully-connected layer activated by a sigmoid and a new 7-way fully-connected layer activated by the hyperbolic tangent appended as the dual agents. The newly added layers are randomly initialized by drawing weights from a zero-mean Gaussian distribution with standard deviation 0.01. Batch Normalization [17] is adopted in the encoder and decoder, the dropout [10] ratio is empirically fixed at 0.7, weight decay is applied, and the batch size is fixed at 32. A smaller initial learning rate is used for the pre-trained layers than for the newly added layers; the learning rate is decayed after 20 epochs, and the network is trained for roughly 60 epochs. The proposed network is implemented on the publicly available TensorFlow [2] platform and trained using Adam on two NVIDIA GeForce GTX TITAN X GPUs with 12 GB memory. The same training setting is used for all compared network variants.

²We divide the whole age span into 7 phases: <20, 20-25, 25-30, 30-40, 40-50, 50-60, >60.
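The age condition code and the inference-time interpolation can be sketched as follows; the index-to-phase mapping matches the footnote above, while the linear interpolation itself is our illustrative assumption:

```python
import numpy as np

AGE_PHASES = 7  # <20, 20-25, 25-30, 30-40, 40-50, 50-60, >60

def age_code(phase):
    """One-hot age condition code with elements in [-1, 1]
    (-1 playing the role of 0)."""
    code = -np.ones(AGE_PHASES)
    code[phase] = 1.0
    return code

def interpolate(phase_a, phase_b, t):
    """Continuous rejuvenation/aging: blend two discrete age codes
    before feeding the result to the decoder."""
    return (1.0 - t) * age_code(phase_a) + t * age_code(phase_b)

mid = interpolate(2, 3, 0.5)         # halfway between 25-30 and 30-40
assert np.isclose(mid[2], 0.0) and np.isclose(mid[3], 0.0)
assert np.isclose(mid[0], -1.0)      # unrelated phases stay at -1
```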

5.1 Evaluations on the CAFR Benchmark

Our newly proposed CAFR dataset is the largest and most comprehensive age-invariant face recognition benchmark to date, containing 1,446,500 images annotated with age, identity, gender, race and landmarks. Examples are visualized in Fig. 3. The data are randomly organized into 10 splits, each consisting of 25,000 verification pairs with various age gaps. Evaluation systems report Acc, EER, AUC and ROC over 10-fold cross validation.

5.1.1 Component Analysis and Quantitative Comparison

Model Acc (%) EER (%) AUC (%)
Light CNN [50] 73.56±1.39 31.62±1.68 75.96±1.63
Architecture ablation of AIM
w/o 78.85±1.39 21.97±1.18 86.77±1.01
w/o 80.39±1.19 20.22±1.25 88.52±0.82
w/o Att. 82.25±1.03 18.50±1.04 90.26±0.94
Training loss ablation of AIM
w/o 67.64±0.88 45.85±2.59 57.14±2.59
w/o 81.02±1.10 19.56±1.00 89.10±0.83
w/o 81.83±1.29 19.08±1.03 89.87±0.76
w/o 82.03±0.98 18.57±0.98 90.10±0.83
w/o 82.30±0.99 18.28±1.02 90.32±0.71
AIM 84.81±0.93 17.67±0.90 90.84±0.78
Table 2: Face recognition performance comparison on CAFR. The results are averaged over the 10 testing splits.
Figure 4: ROC performance curves on (a) CAFR; (b) CACD-VS; (c) IJB-C. Best viewed in color.

We first investigate different architectures and loss function combinations of AIM to assess their respective roles in age-invariant face recognition. We compare 10 variants from four aspects: the baseline (Light CNN-29 [50]), different network structures (w/o , w/o , w/o attention mechanism), different loss function combinations (w/o , , , , ), and our full proposed AIM.

The performance comparison w.r.t. Acc, EER and AUC on CAFR is reported in Tab. 2, and the corresponding ROC curve is provided in Fig. 4 (a). By comparing the results in the v.s. panels, we observe that our AIM consistently outperforms the baseline by a large margin: in Acc, in EER, and in AUC. Light CNN is a general-purpose face recognition model whose representations are entangled with age variations, so it struggles to distinguish cross-age faces. In contrast, AIM jointly performs disentangled representation learning (through cross-age domain adversarial training and cross-entropy regularization) and attention-based photorealistic cross-age face synthesis in a mutually boosting way. By comparing the results in the v.s. panels, we observe that AIM consistently outperforms the 3 network-structure variants. In particular, w/o removes the domain classifier from AIM, leading to , and performance drops on the three metrics. This verifies the necessity of cross-age domain adversarial training, which encourages the encoded features to be indistinguishable w.r.t. the shift between multi-age domains, facilitating age-invariant representation learning. w/o removes the cross-entropy regularizer from AIM, leading to , and performance drops on the three metrics. This verifies the necessity of the cross-entropy regularization with label smoothing, which constrains cross-age representations to ambiguous separability and serves as auxiliary assistance for . The benefit of incorporating the attention mechanism into cross-age face synthesis is verified by comparing w/o Att. with AIM, i.e., differences of , and on the three metrics. Identity preservation is crucial for face recognition applications; its benefit is verified by comparing w/o with AIM, i.e., declines of , and on the three metrics. The benefit of incorporating adversarial learning into each specific process is verified by comparing w/o and with AIM, i.e., decreases of , and ; , and on the three metrics. The benefits of the age estimation and manifold consistency constraints are verified by comparing w/o and w/o with AIM, i.e., drops of , and ; , and on the three metrics.
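The label smoothing strategy mentioned above can be illustrated with a label-smoothed cross-entropy. The uniform smoothing scheme and the eps value below are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution:
    (1 - eps) mass on the true class, eps spread uniformly over all
    classes, which keeps the target separability deliberately soft."""
    k = logits.shape[-1]
    z = logits - logits.max()              # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    q = np.full(k, eps / k)                # uniform smoothing component
    q[target] += 1.0 - eps                 # remaining mass on true class
    return float(-(q * np.log(p + 1e-12)).sum())
```

With eps = 0 this reduces to the standard cross-entropy; with eps > 0 a confidently correct prediction is penalized slightly more, discouraging over-confident class boundaries.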

5.1.2 Qualitative Comparison

Figure 5: Facial attributes transformation over time in terms of (a) wrinkles & eyes, (b) mouth & moustache and (c) laugh lines, which is automatically learned by AIM instead of physical modelling.
Figure 6: Illustration of learned face manifold with continuous transitions in age (horizontal axis) and identity (vertical axis).
Figure 7: Qualitative comparison of face rejuvenation/aging results on CAFR, MORPH, CACD, FG-NET and IJB-C.
Figure 8: Age-invariant face recognition example results on CAFR. Col. 1 & 18: input faces of distinct identities with various challenging factors (e.g., neutral, illumination, expression, and pose). Col. 2–8 & 11–17: age regressed/progressed faces synthesized by our proposed AIM. Col. 9 & 10: facial representations learned by AIM, which are explicitly disentangled from age variations. These examples indicate that the facial representations learned by AIM are robust to age variation, and that the synthesized cross-age face images retain the intrinsic facial details. Best viewed in color.

Most previous works on age-invariant face recognition consider either robust representation learning or face rejuvenation/aging alone. It is commonly believed that modeling both simultaneously involves a highly non-linear transformation, making it difficult for a model to learn discriminative, age-invariant facial representations while generating faithful cross-age face images. However, with sufficient training data and the proper architecture and objective design of AIM, it is feasible to take the best of both worlds, as shown in Fig. 1. For more detailed results across a wide range of ages in high resolution, please refer to Fig. 8. Our AIM consistently provides discriminative, age-invariant representations and high-fidelity age regressed/progressed face images for all cases. This verifies that the joint learning scheme of age-invariant representation and attention-based cross-age face synthesis is effective, and that both outputs benefit face recognition in the wild.

We then visually compare the qualitative face rejuvenation and aging results of our AIM against the previous state-of-the-art method CAAE [54] in the corresponding block of Fig. 7, and showcase the facial detail transformation over time with AIM in Fig. 5. AIM achieves simultaneous face rejuvenation and aging with photorealistic and accurate age transformation effects (e.g., wrinkles, eyes, mouth, moustache, laugh lines), thanks to its novel network structure and training strategy. In contrast, the results of previous work may suffer from blur and ghosting artifacts and are fragile to variations in illumination, expression and pose. This further demonstrates the effectiveness of the proposed AIM.

To demonstrate the capacity of AIM to synthesize cross-age face images with continuous and smooth transition between identities and ages, and show that the learned representations are identity-specific and explicitly disentangled from age variations, we further visualize the learned face manifold in Fig. 6 by performing interpolation upon both and . In particular, we take two images of different subjects and , extract the encoded features from and perform interpolation between and . We also interpolate between two neighboring age condition codes to generate face images with continuous ages. The interpolated and are then fed to to synthesize face images. These smooth semantic changes indicate that the model has learned to produce identity-specific representations disentangled from age variations for age-invariant face recognition.
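The traversal of the face manifold above amounts to linear blending of latent codes (identity features or age condition codes). The helper below is a hypothetical sketch, not the paper's codebase:

```python
import numpy as np

def interpolate(a, b, steps):
    """Linearly interpolate between two latent vectors `a` and `b`,
    returning `steps` vectors from `a` to `b` inclusive. Feeding the
    interpolated codes to a decoder yields continuous transitions."""
    ts = np.linspace(0.0, 1.0, steps)[:, None]   # blend weights in [0, 1]
    return (1.0 - ts) * a + ts * b
```
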

Figure 9: Age-invariant face recognition analysis on CAFR split1.

Finally, we visualize the cross-age face verification results on CAFR split1 to gain insights into age-invariant face recognition with AIM. After computing the similarities for all pairs of probe and reference sets, we sort the results into a ranking list. Each row shows a probe-reference pair, with the matching similarity shown between them. Fig. 9 (a) and (b) show the best matched and non-matched examples, respectively. We note that most of these cases are under mild conditions in terms of age gap and other unconstrained factors like resolution, expression and pose. Fig. 9 (c) and (d) show the worst matched and non-matched examples, respectively, i.e., failed matches. We note that most error cases involve large age gaps combined with other challenging factors such as blur, extreme expressions, heavy make-up and large poses, which are hard even for humans to recognize. This confirms that CAFR aligns well with real-world conditions and deserves more research attention.
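The ranking described above amounts to sorting probe-reference pairs by cosine similarity of their features; a hypothetical numpy sketch:

```python
import numpy as np

def rank_pairs(probe_feats, ref_feats):
    """Compute the cosine similarity of each (probe, reference) feature
    pair and return pair indices sorted from most to least similar,
    along with the raw similarities."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sims = np.sum(p * r, axis=1)           # per-pair cosine similarity
    order = np.argsort(-sims)              # descending similarity
    return order, sims
```
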

5.2 Evaluations on the MORPH Benchmark

Method Setting-1/Setting-2
HFA [13] 91.14/-
CARC [5] 92.80/-
MEFA [14] 93.80/-
GSM [24] -/94.40
MEFA+SIFT+MLBP [14] 94.59/-
LPS+HFA [22] 94.87/-
LF-CNN [48] 97.51/-
AE-CNN [57] -/98.13
OE-CNN [45] 98.55/98.67
AIM (Ours) 99.13/98.81
AIM + CAFR (Ours) 99.65/99.26
Table 3: Rank-1 recognition rates (%) on MORPH Album2.

MORPH is a large-scale public longitudinal face database, collected in real-world conditions with variations in age, pose, expression and lighting. It has two separate datasets: Album 1 contains 1,690 face images of 515 subjects, while Album 2 contains 78,207 face images of 20,569 subjects. Statistical details are provided in Tab. 1. Both albums include meta data for age, identity, gender, race, eye coordinates and date of acquisition. For fair comparison, Album 2 is used for evaluation. Following [23, 13], Album 2 is partitioned into a training set of 20,000 face images from 10,000 subjects (two images per subject with the largest age gap) and an independent testing set consisting of a gallery set and a probe set from the remaining subjects, under two settings. Setting-1 consists of 20,000 face images from 10,000 subjects, with each subject represented by its youngest face image as gallery and its oldest as probe; Setting-2 consists of 6,000 face images from 3,000 subjects under the same criteria. Evaluation systems report the rank-1 identification rate.
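The rank-1 identification rate used in this protocol can be sketched as nearest-gallery matching by cosine similarity; a minimal illustration, not the official evaluation code:

```python
import numpy as np

def rank1_rate(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Match each probe to its most similar gallery entry (cosine
    similarity) and count a hit when the identities agree; return the
    fraction of hits, i.e. the rank-1 identification rate."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = p @ g.T                         # probe x gallery similarity matrix
    nearest = np.argmax(sims, axis=1)      # index of best gallery match
    hits = np.asarray(gallery_ids)[nearest] == np.asarray(probe_ids)
    return float(hits.mean())
```
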

The face recognition performance comparison of the proposed AIM with other state-of-the-arts on MORPH [34] Album 2 under Setting-1 and Setting-2 is reported in Tab. 3. With the mutually boosting learning scheme of age-invariant representation and attention-based cross-age face synthesis, our method outperforms the previous best by 0.58% and 0.14% for Setting-1 and Setting-2, respectively. By incorporating CAFR during training, the rank-1 recognition rates are further improved by 0.52% and 0.45% for Setting-1 and Setting-2, respectively. This confirms that our AIM is highly effective and that the proposed CAFR dataset is beneficial for advancing age-invariant face recognition. A visual comparison of face rejuvenation/aging results by AIM and CAAE [54] is provided in the corresponding block of Fig. 7, also validating the advantages of AIM over existing solutions.

5.3 Evaluations on the CACD Benchmark

Method Acc (%)
CAN [52] 92.30
VGGFace [30] 96.00
Center Loss [49] 97.48
MFM-CNN [50] 97.95
LF-CNN [48] 98.50
Marginal Loss [11] 98.95
DeepVisage [16] 99.13
OE-CNN [45] 99.20
Human, avg. [6] 85.70
Human, voting [6] 94.20
AIM (Ours) 99.38
AIM + CAFR (Ours) 99.76
Table 4: Face recognition performance comparison on CACD-VS.

CACD is a large-scale public dataset for face recognition and retrieval across ages, with variations in age, illumination, makeup, expression and pose; it aligns better with real-world scenarios than MORPH [34]. It contains 163,446 face images of 2,000 celebrities. Statistical details are provided in Tab. 1. The meta data include age, identity and landmarks. However, CACD contains some incorrectly labeled samples and duplicate images. For fair comparison, following [6], a carefully annotated version, the CACD Verification Sub-set (CACD-VS), is used for evaluation. It consists of 10 splits with 4,000 image pairs in total; each split contains 200 genuine pairs and 200 impostor pairs for the cross-age verification task. Evaluation systems report Acc and ROC under 10-fold cross-validation.

The face recognition performance comparison of the proposed AIM with other state-of-the-arts on CACD-VS [6] is reported in Tab. 4. The corresponding ROC curve is provided in Fig. 4 (b). Our method dramatically surpasses human performance and other state-of-the-arts. In particular, AIM improves the Acc of the previous best by 0.18% and outperforms human voting performance by 5.18%. To the best of our knowledge, this sets a new state-of-the-art, including unpublished technical reports. This shows the facial representations learned by AIM are discriminative and robust even under in-the-wild variations. With CAFR injected as augmented training data, our method gains a further 0.38%. A visual comparison of face rejuvenation/aging results by AIM and four state-of-the-art methods is provided in the corresponding block of Fig. 7, which again verifies the effectiveness of our method for high-fidelity cross-age face synthesis.

5.4 Evaluations on the FG-NET Benchmark

Method
Rank-1 (%)
Park et al. [29] 37.40
Li et al. [23] 47.50
HFA [13] 69.00
MEFA [14] 76.20
CAN [52] 86.50
LF-CNN [48] 88.10
AIM (Ours) 93.20
Table 5: Face recognition performance comparison on FG-NET.

FG-NET is a popular public dataset for cross-age face recognition, collected under realistic conditions with huge age variability ranging from childhood to old age. It contains 1,002 face images of 82 non-celebrity subjects. Statistical details are provided in Tab. 1. The meta data include age, identity and landmarks. Since FG-NET is small, we follow the leave-one-out setting of [23, 13] for fair comparison with previous methods. In particular, we leave one image out as the testing sample and train (fine-tune) the model on the remaining 1,001 images. We repeat this procedure 1,002 times and report the average rank-1 recognition rate.
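The leave-one-out protocol above can be sketched generically; `train_and_test` is a placeholder callable standing in for model fine-tuning on the retained samples and rank-1 testing on the held-out one:

```python
def leave_one_out(samples, train_and_test):
    """Hold out each sample in turn, run `train_and_test(rest, held_out)`
    (which should return a per-trial score, e.g. 1.0 for a correct
    rank-1 match), and average the results over all trials."""
    results = []
    for i in range(len(samples)):
        held_out = samples[i]
        rest = samples[:i] + samples[i + 1:]   # all remaining samples
        results.append(train_and_test(rest, held_out))
    return sum(results) / len(results)
```
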

The face recognition performance comparison of the proposed AIM with other state-of-the-arts on FG-NET [1] is reported in Tab. 5. AIM improves on the previous best by 5.10%. Qualitative comparisons for face rejuvenation/aging are provided in the corresponding block of Fig. 7, which shows the promising potential of our method for challenging unconstrained face recognition affected by age variation.

5.5 Evaluations on the IJB-C Benchmark

Method  TAR@FAR=  TAR@FAR=  TAR@FAR=  TAR@FAR=
GOTS [27] 0.066 0.147 0.330 0.620
FaceNet [37] 0.330 0.487 0.665 0.817
VGGFace [30] 0.437 0.598 0.748 0.871
VGGFace2_ft [4] 0.768 0.862 0.927 0.967
MN-vc [51] 0.771 0.862 0.927 0.968
AIM 0.826 0.895 0.935 0.962
Table 6: Face recognition performance comparison on IJB-C.

IJB-C contains 31,334 images and 11,779 videos (split into 117,542 frames) of 3,531 subjects, with on average 8.87 images and 3.34 videos per subject, captured in in-the-wild environments to avoid a near-frontal bias. For fair comparison, we follow the template-based setting and evaluate models on the standard 1:1 verification protocol in terms of True Acceptance Rate (TAR) @ False Acceptance Rate (FAR).
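TAR@FAR can be computed by thresholding similarity scores at the impostor-score quantile corresponding to the target FAR; a numpy sketch of the metric, not the official IJB-C evaluation harness:

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far):
    """Pick the similarity threshold so that roughly a `far` fraction of
    impostor pairs is accepted, then report the fraction of genuine
    pairs accepted at that threshold (the TAR)."""
    imp = np.sort(np.asarray(impostor_scores))[::-1]   # descending impostor scores
    k = max(int(np.floor(far * len(imp))), 1)          # number of impostors accepted
    threshold = imp[k - 1]                             # accept the top-k impostors
    return float(np.mean(np.asarray(genuine_scores) >= threshold))
```
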

The face recognition performance comparison of the proposed AIM with other state-of-the-arts on the IJB-C [27] unconstrained face verification protocol is reported in Tab. 6. The corresponding ROC curve is provided in Fig. 4 (c). Our AIM beats the previous best by 5.50% in TAR@FAR=, which verifies its remarkable generalizability for recognizing faces in the wild. Qualitative comparisons for face rejuvenation/aging are provided in the corresponding block of Fig. 7, which further shows the superiority of our method for cross-age face synthesis under unconstrained conditions.

6 Conclusion

We proposed a novel Age-Invariant Model (AIM) for joint disentangled representation learning and photorealistic cross-age face synthesis, addressing the challenging problem of face recognition with large age variations. Through carefully designed network architecture and optimization strategies, AIM learns to generate powerful age-invariant facial representations explicitly disentangled from age variation, while achieving continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, without requiring paired data or the true age of testing samples. Moreover, we proposed a new large-scale Cross-Age Face Recognition (CAFR) dataset to spark progress in age-invariant face recognition. Comprehensive experiments demonstrate the superiority of AIM over the state-of-the-arts. We envision that the proposed method and benchmark dataset will drive age-invariant face recognition research towards real-world applications involving age gaps and other complex unconstrained distractors.

Acknowledgement

The work of Jian Zhao was partially supported by China Scholarship Council (CSC) grant 201503170248.

The work of Junliang Xing was partially supported by the National Natural Science Foundation of China grant 61672519.

The work of Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133 and MOE Tier-II R-263-000-D17-112.

References

  • [1] FG-NET aging database. http://webmail.cycollege.ac.cy/~alanitis/fgnetaging/, 2007.
  • [2] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, 2016.

  • [3] D. M. Burt and D. I. Perrett. Perception of age in adult caucasian male faces: Computer graphic manipulation of shape and colour information. Proc. R. Soc. Lond. B, 259(1355):137–143, 1995.
  • [4] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman. Vggface2: A dataset for recognising faces across pose and age. In FG, pages 67–74, 2018.
  • [5] B.-C. Chen, C.-S. Chen, and W. H. Hsu. Cross-age reference coding for age-invariant face recognition and retrieval. In ECCV, pages 768–783, 2014.
  • [6] B.-C. Chen, C.-S. Chen, and W. H. Hsu. Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. T-MM, 17(6):804–815, 2015.
  • [7] D. Chen, X. Cao, F. Wen, and J. Sun. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. In CVPR, pages 3025–3032, 2013.
  • [8] J. Chen, V. M. Patel, L. Liu, V. Kellokumpu, G. Zhao, M. Pietikäinen, and R. Chellappa. Robust local features for remote face recognition. IVC, 64:34–46, 2017.
  • [9] Y. Cheng, J. Zhao, Z. Wang, Y. Xu, J. Karlekar, S. Shen, and J. Feng. Know you at one glance: A compact vector representation for low-shot learning. In ICCVW, pages 1924–1932, 2017.
  • [10] G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, pages 8609–8613, 2013.
  • [11] J. Deng, Y. Zhou, and S. Zafeiriou. Marginal loss for deep face recognition. In CVPRW, volume 4, 2017.
  • [12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. JMLR, 17(59):1–35, 2016.
  • [13] D. Gong, Z. Li, D. Lin, J. Liu, and X. Tang. Hidden factor analysis for age invariant face recognition. In ICCV, pages 2872–2879, 2013.
  • [14] D. Gong, Z. Li, D. Tao, J. Liu, and X. Li. A maximum entropy feature descriptor for age invariant face recognition. In CVPR, pages 5289–5297, 2015.
  • [15] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In ECCV, pages 87–102, 2016.
  • [16] M. A. Hasnat, J. Bohné, J. Milgram, S. Gentric, and L. Chen. Deepvisage: Making face recognition simple yet with powerful generalization skills. In ICCVW, pages 1682–1691, 2017.
  • [17] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [18] I. Kemelmacher-Shlizerman, S. Suwajanakorn, and S. M. Seitz. Illumination-aware age progression. In CVPR, pages 3334–3341, 2014.
  • [19] J. Li, L. Liu, J. Li, J. Feng, S. Yan, and T. Sim. Towards a comprehensive face detector in the wild. T-CSVT, 2017.
  • [20] J. Li, S. Xiao, F. Zhao, J. Zhao, J. Li, J. Feng, S. Yan, and T. Sim. Integrated face analytics networks through cross-dataset hybrid training. In ACM MM, pages 1531–1539, 2017.
  • [21] J. Li, J. Zhao, F. Zhao, H. Liu, J. Li, S. Shen, J. Feng, and T. Sim. Robust face recognition with deep multi-view representation learning. In ACM MM, pages 1068–1072, 2016.
  • [22] Z. Li, D. Gong, X. Li, and D. Tao. Aging face recognition: a hierarchical learning model based on local patterns selection. T-IP, 25(5):2146–2154, 2016.
  • [23] Z. Li, U. Park, and A. K. Jain. A discriminative model for age invariant face recognition. T-IFS, 6(3):1028–1037, 2011.
  • [24] L. Lin, G. Wang, W. Zuo, X. Feng, and L. Zhang. Cross-domain visual matching via generalized similarity measure and feature learning. T-PAMI, 39(6):1089–1102, 2017.
  • [25] H. Ling, S. Soatto, N. Ramanathan, and D. W. Jacobs. Face verification across age progression using discriminative methods. T-IFS, 5(1):82–91, 2010.
  • [26] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, volume 30, page 3, 2013.
  • [27] B. Maze, J. Adams, J. A. Duncan, N. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, et al. IARPA Janus Benchmark-C: Face dataset and protocol. In ICB, 2018.
  • [28] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou. Agedb: The first manually collected, in-the-wild age database. In CVPRW, pages 1997–2005, 2017.
  • [29] U. Park, Y. Tong, and A. K. Jain. Age-invariant face recognition. T-PAMI, 32(5):947–954, 2010.
  • [30] O. M. Parkhi, A. Vedaldi, A. Zisserman, et al. Deep face recognition. In BMVC, volume 1, page 6, 2015.
  • [31] N. Ramanathan and R. Chellappa. Face verification across age progression. T-IP, 15(11):3349–3361, 2006.
  • [32] N. Ramanathan and R. Chellappa. Modeling age progression in young faces. In CVPR, volume 1, pages 387–394, 2006.
  • [33] N. Ramanathan and R. Chellappa. Modeling shape and textural variations in aging faces. In FG, pages 1–8, 2008.
  • [34] K. Ricanek and T. Tesafaye. Morph: A longitudinal image database of normal adult age-progression. In FGR, pages 341–345, 2006.
  • [35] R. Rothe, R. Timofte, and L. V. Gool. Dex: Deep expectation of apparent age from a single image. In ICCVW, 2015.
  • [36] R. Rothe, R. Timofte, and L. Van Gool. Dex: Deep expectation of apparent age from a single image. In ICCVW, pages 10–15, 2015.
  • [37] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, pages 815–823, 2015.
  • [38] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [39] J. Song, J. Zhang, L. Gao, X. Liu, and H. T. Shen. Dual conditional gans for face aging and rejuvenation. In IJCAI, pages 899–905, 2018.
  • [40] D. Sungatullina, J. Lu, G. Wang, and P. Moulin. Multiview discriminative learning for age-invariant face recognition. In FG, pages 1–6, 2013.
  • [41] J. Suo, X. Chen, S. Shan, W. Gao, and Q. Dai. A concatenational graph evolution aging model. T-PAMI, 34(11):2083–2096, 2012.
  • [42] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708, 2014.
  • [43] H. Wang, Y. Wang, Z. Zhou, X. Ji, and W. Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, pages 5265–5274, 2018.
  • [44] W. Wang, Z. Cui, Y. Yan, J. Feng, S. Yan, X. Shu, and N. Sebe. Recurrent face aging. In CVPR, pages 2378–2386, 2016.
  • [45] Y. Wang, D. Gong, Z. Zhou, X. Ji, H. Wang, Z. Li, W. Liu, and T. Zhang. Orthogonal deep features decomposition for age-invariant face recognition. In ECCV, 2018.
  • [46] Y. Wang, Z. Zhang, W. Li, and F. Jiang. Combining tensor space analysis and active appearance models for aging effect simulation on face images. IEEE T SYST MAN CY B, 42(4):1107–1118, 2012.
  • [47] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10(Feb):207–244, 2009.
  • [48] Y. Wen, Z. Li, and Y. Qiao. Latent factor guided convolutional neural networks for age-invariant face recognition. In CVPR, pages 4893–4901, 2016.
  • [49] Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, pages 499–515, 2016.
  • [50] X. Wu, R. He, Z. Sun, and T. Tan. A light cnn for deep face representation with noisy labels. T-IFS, 13(11):2884–2896, 2018.
  • [51] W. Xie and A. Zisserman. Multicolumn networks for face recognition. arXiv preprint arXiv:1807.09192, 2018.
  • [52] C. Xu, Q. Liu, and M. Ye. Age invariant face recognition and retrieval by coupled auto-encoder networks. Neurocomputing, 222:62–71, 2017.
  • [53] H. Yang, D. Huang, Y. Wang, H. Wang, and Y. Tang. Face aging effect simulation using hidden factor analysis joint sparse representation. T-IP, 25(6):2493–2507, 2016.
  • [54] Z. Zhang, Y. Song, and H. Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017.
  • [55] J. Zhao, L. Xiong, Y. Cheng, Y. Cheng, J. Li, L. Zhou, Y. Xu, J. Karlekar, S. Pranata, S. Shen, J. Xing, S. Yan, and J. Feng. 3d-aided deep pose-invariant face recognition. In IJCAI, pages 1184–1190. International Joint Conferences on Artificial Intelligence Organization, 2018.
  • [56] J. Zhao, L. Xiong, P. K. Jayashree, J. Li, F. Zhao, Z. Wang, P. S. Pranata, P. S. Shen, S. Yan, and J. Feng. Dual-agent gans for photorealistic and identity preserving profile face synthesis. In NIPS, pages 66–76, 2017.
  • [57] T. Zheng, W. Deng, and J. Hu. Age estimation guided convolutional neural network for age-invariant face recognition. In CVPRW, pages 12–16, 2017.
  • [58] H. Zhu, Q. Zhou, J. Zhang, and J. Z. Wang. Facial aging and rejuvenation by conditional multi-adversarial autoencoder with ordinal regression. arXiv preprint arXiv:1804.02740, 2018.