Age progression/regression, also known as age synthesis, aims to aesthetically render a given face with an aging or rejuvenating effect while preserving identity. In recent years, age synthesis has become a hot topic in computer vision. It has a wide range of applications in various domains, e.g., finding missing persons, age estimation, age-invariant verification, social entertainment, etc. However, due to the extreme challenges posed by diverse genetics and lifestyles, rigid requirements for training datasets, and large variations in illumination, age progression/regression remains a challenging task. Prior works mainly use direct or step-by-step synthesis for age progression/regression. In direct synthesis, target age faces are synthesized directly by exploiting the relationships between input faces and their corresponding target age labels. Step-by-step synthesis, in contrast, splits the long-term aging process into short terms and concentrates only on the transformation between adjacent age groups. Although direct synthesis methods are easy to train, they cannot synthesize satisfactory results over a long age span and often lose the original identity, while step-by-step synthesis methods need to train a specific network for every pair of adjacent age groups.
In this paper, the proposed method is based on Conditional Generative Adversarial Nets (CGAN), which can simultaneously achieve age progression and regression in the same framework given target age labels. To remedy the identity loss of direct synthesis methods and enhance the accuracy of age synthesis, an identity preserving loss and an age preserving loss are also adopted. Moreover, inspired by the observation that for adults, significant aging changes mainly occur in the texture of facial subregions while only small changes happen in the global facial configuration, we further propose GLCA-GAN, with one global network generating the whole facial structure and three local networks processing the diverse texture transformations of crucial facial subregions. We also employ a pixel loss to preserve detailed facial information of the input face. In addition, to further preserve most of the details in age-attribute-irrelevant areas, our generator learns the residual face, defined as the difference between the input face and its corresponding synthetic face.
The main contributions of this work are as follows:
1) We propose GLCA-GAN for age progression and regression. In GLCA-GAN, one global network is used for generating the whole facial structure and simulating the aging trend of the whole face, while three local networks are employed for processing diverse texture transformation of crucial facial subregions.
2) We impose an age preserving loss to enhance the accuracy of age synthesis. To further ensure that the synthetic face and the input face belong to the same person, an identity preserving loss is adopted. In addition, a pixel loss is used to preserve detailed facial information.
3) Instead of manipulating a whole face, our generator learns the residual face between the input face and its corresponding synthetic face, which can accelerate the convergence process while preserving most of the details in age-attribute-irrelevant areas, e.g., clothes, background, etc.
4) Our network can simultaneously achieve age progression and regression in the same framework with the given age labels and generate favorable results.
II. Related Work
Age progression/regression models can be further divided into physical model-based approaches, prototype-based approaches, and deep learning-based approaches.
The physical model-based approaches [4, 5, 6, 7, 8] simulate age progression/regression by modeling human biological and physical mechanisms, e.g., muscles and facial contours. However, these approaches need many face sequences of the same person covering a long age span, which are hard to obtain.
Meanwhile, prototype-based approaches [9, 10] regard the average face of each age group as its prototype, so the aging pattern is the difference between the prototypes of the target and source age groups. However, prototype-based approaches may ignore personalized details, e.g., wrinkles. To address this problem, Shu et al. proposed an age synthesis method based on aging dictionary learning, which reconstructs the aging face utilizing aging dictionaries of different age groups.
Recently, deep learning-based approaches have achieved remarkable progress in age synthesis. Wang et al. proposed a recurrent neural network to make the face aging process smoother. Zhang et al. applied a Conditional Adversarial Autoencoder (CAAE) to synthesize target age faces from target age labels. In addition, Zhou et al. argued that occupation information may influence the personal aging process and proposed an occupation-aware adversarial face aging network. To make the most of the image generation ability of GANs, Yang et al. put forward a multi-pathway discriminator to refine the detailed aging process. Duong et al. presented a generative probabilistic model to simulate the aging process of each age stage.
III. Method

In this section, we describe the proposed GLCA-GAN, which contains four parts: a generator, a discriminator, an identity preserving network and an age preserving network. The architecture of our method is shown in Fig. 1.
III-A. Generator

Our generator contains a global network and three local networks, which learn the whole facial structure and imitate subtle changes of crucial facial subregions simultaneously. To further accelerate the convergence of the generator and preserve most of the details in age-attribute-irrelevant areas, our generator learns the residual face between the input face and its corresponding synthetic face. The input of our generator is a 128×128 RGB face image concatenated with the target age label.
The global network takes the whole image as input to generate the overall facial structure. First, we utilize three strided convolutional layers to map the input face into a latent space. Then a residual network with four residual blocks acts as a transformer, converting the input embeddings into the target embeddings. Finally, two fractionally-strided convolutional layers produce the output of the global network.
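As a quick sanity check on the spatial sizes in this pipeline, standard convolution arithmetic can be sketched in a few lines; the kernel size 4, stride 2 and padding 1 below are assumed values for the strided layers, not figures from the paper.

```python
def conv_out(size, kernel, stride, pad):
    # Spatial output size of a convolution: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

# A 128x128 input passed through three stride-2 convolutions
# (kernel 4, padding 1 are assumed values):
s = 128
for _ in range(3):
    s = conv_out(s, kernel=4, stride=2, pad=1)
print(s)  # 16: the latent spatial size under these assumptions
```

Under these assumptions, each strided layer halves the spatial resolution, and the two fractionally-strided layers would invert part of this downsampling.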
In addition, we crop three subregions (eyes, nose, and forehead) from the whole face and feed them to the three local networks respectively to imitate texture changes of these crucial facial parts. Each local patch is first processed by two strided convolutional layers, followed by two residual blocks; one fractionally-strided convolutional layer then produces the output of the local network.
Finally, we fuse the four feature embeddings of the global and local networks and feed them to three convolutional layers with a stride of 1 to obtain the residual face. The final output is the sum of the input face and the residual face.
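The residual composition can be sketched with numpy as follows; images are assumed normalized to [0, 1], and the toy residual stands in for the generator's output.

```python
import numpy as np

def compose(input_face, residual):
    # Final output = input face + learned residual face, clipped to the valid range.
    return np.clip(input_face + residual, 0.0, 1.0)

x = np.full((3, 8, 8), 0.5)   # toy 3-channel input "face"
r = np.zeros_like(x)          # residual is zero in age-attribute-irrelevant areas
r[:, 2:4, 2:4] = 0.2          # an aging change confined to a small region
y = compose(x, r)
# Pixels with zero residual pass through unchanged, so background details survive.
```

This makes explicit why residual learning preserves age-attribute-irrelevant areas: wherever the network outputs a zero residual, the input pixels are copied verbatim.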
III-B. Discriminator

Based on the GAN principle, our discriminator $D$ forces the generator $G$ to synthesize realistic and plausible faces. In particular, we impose age labels on the discriminator, further forcing the generation of age-specific faces. Both the input face and the synthetic face paired with the target age label are treated as negative samples, while real faces of the target age group are treated as positive samples. To avoid producing artifacts, our discriminator distinguishes the local image patches separately and sums the four local cross-entropy losses as the adversarial loss, which can be formulated as:

$\mathcal{L}_{adv} = \mathbb{E}_{y_t \sim P_{y_t}}[\log D(y_t, c_t)] + \mathbb{E}_{x \sim P_x}[\log(1 - D(x, c_t)) + \log(1 - D(G(x, c_t), c_t))]$

where $c_t$ denotes the one-hot target age label, $x$ the input face, and $y_t$ a real face of age group $t$; the data distributions are denoted as $P_x$ and $P_{y_t}$. The parameters of the generator and the discriminator are trained alternately to optimize the min-max problem.
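A minimal sketch of this patch-wise adversarial loss: the four probabilities below stand in for the discriminator's per-patch outputs, which are summed through a binary cross-entropy.

```python
import numpy as np

def bce(pred, label):
    # Binary cross-entropy for a single probability.
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(label * np.log(pred) + (1 - label) * np.log(1.0 - pred))

def patch_adv_loss(patch_probs, label):
    """Sum of cross-entropy losses over the four local patch scores.

    patch_probs: one discriminator probability per patch (length 4).
    label: 1 for a real face of the target age, 0 for input or synthetic faces.
    """
    return sum(bce(p, label) for p in patch_probs)
```

Real faces of the target age group are scored with label 1; both the input face and the synthetic face are scored with label 0, matching the positive/negative split described above.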
III-C. Identity Preserving Loss
For age progression/regression, it is crucial to keep identity information during synthesis. However, synthetic faces based on GANs are close to real data only in pixel space, not in semantic space. Hence, we introduce an identity preserving loss into our model. Furthermore, to preserve finer identity information, we extend the constraint to the global feature maps. The identity preserving loss can be formulated as:

$\mathcal{L}_{ip} = \|\phi_{fc}(x) - \phi_{fc}(G(x, c_t))\|_2^2 + \|\phi_{pool}(x) - \phi_{pool}(G(x, c_t))\|_F^2$

where $\phi_{fc}$ and $\phi_{pool}$ denote the feature extractors of the fully-connected layer and the last pooling layer of the pre-trained Light CNN-29 network, respectively.
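The identity constraint amounts to a feature-space distance. The sketch below uses a random linear projection as a stand-in for the Light CNN-29 feature extractor, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W_fc = rng.standard_normal((256, 64))  # stand-in for a fully-connected feature layer

def phi_fc(face):
    # Hypothetical identity feature: a fixed projection of the first 256 pixels.
    return face.reshape(-1)[:256] @ W_fc

def identity_loss(x, y):
    # Squared L2 distance between identity features of input and synthetic face.
    d = phi_fc(x) - phi_fc(y)
    return float(d @ d)

x = rng.standard_normal((3, 16, 16))  # toy face tensor
```

In the method itself the constraint is applied to both the fully-connected and last-pooling features; the single extractor here keeps the sketch short.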
III-D. Age Preserving Loss
Aging accuracy is another key issue in age progression/regression. In this method, an age preserving loss based on the pre-trained Light CNN-29 structure is utilized to enhance the age accuracy of the synthetic face, which can be formulated as:

$\mathcal{L}_{age} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} q_{ij} \log p_j(x_i)$

where $p(x_i)$ denotes the output of the final softmax layer of Light CNN-29 for the $i$-th training sample, and $N$ and $M$ denote the numbers of training data and age categories, respectively. The indicator $q_{ij} = 1$ when the class of the $i$-th training sample equals the $j$-th class, and $q_{ij} = 0$ otherwise.
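This is the usual softmax cross-entropy; a self-contained numpy version, with the classifier logits standing in for the input to the Light CNN-29 softmax layer:

```python
import numpy as np

def age_loss(logits, labels):
    """Softmax cross-entropy over M age groups, averaged over N samples.

    logits: (N, M) age-classifier scores for the synthetic faces.
    labels: (N,) integer index of each target age group.
    """
    z = logits - logits.max(axis=1, keepdims=True)             # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))   # log-softmax
    n = len(labels)
    return float(-log_p[np.arange(n), labels].mean())
```

The loss is minimal when the age classifier assigns all probability mass to the target age group of each synthetic face.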
III-E. Pixel Loss
To improve the quality of the synthetic face and further preserve details in age-attribute-irrelevant areas, a pixel-wise loss is adopted, which can be formulated as:

$\mathcal{L}_{pix} = \frac{1}{C \times H \times W}\|x - G(x, c_t)\|_1$

where $C$, $H$ and $W$ are the channel, height and width of the face image tensor, respectively. Since there is no ground-truth data in our experiment, the pixel-wise loss forces the synthetic face to have content similar to the input face. However, it is also essential to produce aging or rejuvenating effects in age-attribute-relevant areas. Thus, we update the pixel-level critic only every 5 iterations to balance aging accuracy and identity permanence, which are regarded as the two critical requirements in age progression/regression.
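A sketch of the normalized pixel-wise loss; the L1 form is an assumption, since the text only specifies normalization by the channel, height and width of the tensor.

```python
import numpy as np

def pixel_loss(x, y):
    # Mean absolute difference between input face x and synthetic face y,
    # normalized by channels * height * width (L1 form assumed).
    c, h, w = x.shape
    return float(np.abs(x - y).sum() / (c * h * w))
```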
Taking all loss functions together, the overall objective function can be expressed as:

$\mathcal{L} = \lambda_1 \mathcal{L}_{adv} + \lambda_2 \mathcal{L}_{ip} + \lambda_3 \mathcal{L}_{age} + \lambda_4 \mathcal{L}_{pix}$

where $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ are trade-off parameters.
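The combined objective is a plain weighted sum. The default weights below follow the Morph settings reported in the implementation details (1.00, 0.005, 20.00, 10.00); their assignment to individual losses is an assumption for illustration.

```python
def total_loss(l_adv, l_ip, l_age, l_pix,
               w1=1.00, w2=0.005, w3=20.00, w4=10.00):
    # Weighted sum of the adversarial, identity, age and pixel losses.
    # The mapping of weights to losses is assumed, not taken from the text.
    return w1 * l_adv + w2 * l_ip + w3 * l_age + w4 * l_pix
```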
IV. Experiments

IV-A. Data Collection
We collect face images from three publicly available face aging datasets: the CACD dataset, the Morph (Album 2) dataset, and the FG-NET dataset. Our model is mainly trained and verified on the CACD and Morph datasets, and evaluated on the FG-NET dataset. The Morph dataset is the largest publicly available longitudinal face database; it contains 55,349 color images of 13,672 subjects with age and gender information, with subject ages ranging from 16 to 77 years old. The CACD dataset contains 163,346 color images of 20,000 subjects collected from the Internet, with subject ages ranging from 14 to 62 years old. The FG-NET dataset contains 1,002 images of 82 subjects. To train a high-accuracy aging network, it is important to collect accurately labeled face data. As the age information in CACD is not accurate, we manually removed some mismatched images. Finally, 153,106 images of 20,000 subjects are used in our experiments.
IV-B. Implementation Details
In this experiment, we perform five-fold cross-validation on the CACD and Morph datasets respectively: we divide each dataset into five folds, with one fold used for testing and the other four for training. FG-NET is also adopted as a testing set for comparisons with prior works.
We align the faces of Morph, CACD and FG-NET based on five facial landmarks and crop the aligned faces to 128×128 pixels. As both CACD and Morph have a limited number of faces older than 62 or younger than 16, we only simulate age progression/regression on faces between 14 and 62 years old. We divide the face data into four age groups: 14-30, 31-40, 41-50, and 51-62. During training, we choose the Adam optimizer with β1 = 0.5 and β2 = 0.99, and the batch size is set to 10.
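The four age groups above translate into a simple binning helper; the inclusive upper bounds follow the ranges given in the text.

```python
def age_group(age):
    # Map an age in [14, 62] to one of the four training group indices.
    if age <= 30:
        return 0   # 14-30
    if age <= 40:
        return 1   # 31-40
    if age <= 50:
        return 2   # 41-50
    return 3       # 51-62
```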
For Morph, the trade-off parameters λ1, λ2, λ3 and λ4 are set to 1.00, 0.005, 20.00 and 10.00 respectively; for CACD, they are set to 2.00, 0.01, 25.00 and 15.00. In addition, we update the pixel-level critic every 5 iterations, and update the discriminator once every 2 generator iterations.
IV-C. Performance Comparison
IV-C1. Age Progression and Regression
Given an input face and a target age label, GLCA-GAN can directly synthesize target age faces. For age progression, the input faces are under 30 years old. As shown in Fig. 2 and Fig. 3, the identities of all subjects are well preserved; furthermore, the synthetic faces gradually get older, with deeper nasolabial folds and crow's feet, whiter beards and hair, and so on. More importantly, our GLCA-GAN achieves realistic age progression even on profile faces. Meanwhile, for age regression, the input faces are over 51 years old and the results are shown in Fig. 4 and Fig. 5. GLCA-GAN can turn white beards and hair black, remove wrinkles, and so on. During both face aging and rejuvenation, GLCA-GAN is robust to pose, expression and illumination variations. Detailed facial information, such as moles, is also well preserved. In addition, the changes across the age synthesis process are smooth and consistent.
IV-C2. Identity Preserving
We evaluate the identity preserving performance of GLCA-GAN on the Morph dataset. For age progression, in each fold, about 1,316 faces from different subjects under 30 years old are used as the gallery set, while approximately 4,945 synthetic faces are used as the probe set. For age regression, about 209 faces from different subjects over 51 years old and approximately 669 synthetic faces are used as the gallery and probe sets respectively. Light CNN-29 is employed as the recognition model in this experiment. As shown in Table 1, the average rank-1 recognition rates of age progression for the aged1, aged2 and aged3 clusters are 97.66%, 96.67% and 91.85%; meanwhile, the average rank-1 recognition rates of age regression for the aged2, aged1 and aged0 clusters are 99.64%, 99.04% and 98.89%. As the age gap between the original face and the synthetic face increases, the recognition rate decreases, which indirectly proves the accuracy of our age synthesis.
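The rank-1 protocol above can be sketched as a nearest-neighbor search in feature space; the toy features below stand in for Light CNN-29 embeddings.

```python
import numpy as np

def rank1_rate(probe_feats, probe_ids, gallery_feats, gallery_ids):
    # Each probe is matched to its nearest gallery feature (Euclidean distance);
    # rank-1 rate is the fraction of probes whose match shares their identity.
    d = ((probe_feats[:, None, :] - gallery_feats[None, :, :]) ** 2).sum(axis=-1)
    nearest = gallery_ids[d.argmin(axis=1)]
    return float((nearest == probe_ids).mean())

gallery = np.array([[0.0, 0.0], [10.0, 10.0]])   # toy gallery embeddings
gallery_ids = np.array([1, 2])
probes = np.array([[0.0, 1.0], [9.0, 10.0]])     # toy synthetic-face embeddings
probe_ids = np.array([1, 2])
print(rank1_rate(probes, probe_ids, gallery, gallery_ids))  # 1.0
```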
Table 1. Rank-1 face recognition rates on Morph (%).
Progression (aged1 / aged2 / aged3): 97.66 ± 0.32 / 96.67 ± 0.58 / 91.85 ± 2.32
Regression (aged2 / aged1 / aged0): 99.64 ± 0.35 / 99.04 ± 0.48 / 98.89 ± 0.60
IV-C3. Contributions of Local Networks
To verify the effectiveness of our local networks, we use only the global network to synthesize target age faces while keeping all other settings consistent with GLCA-GAN. As shown in Fig. 6, both the backgrounds and face contours of the input faces are well preserved by the global-only generator and by our GLCA-GAN, but GLCA-GAN performs better at synthesizing facial textures, which we attribute to the three local networks.
IV-C4. Comparison with Prior Work
We compare our synthetic results with prior works, including HFA (hidden factor analysis), the Face Transformer (FT) demo, CDL (coupled dictionary learning), RFA (recurrent face aging), CAAE (conditional adversarial autoencoder), and C-GAN (contextual generative adversarial nets). For a fair comparison, we choose the same input faces as these works and directly cite their published synthetic results, as most prior works do. As Fig. 7 and Fig. 8 show, the methods of [24, 14, 25, 11, 13, 26] only operate on cropped faces, and both the texture and crucial-region changes in their synthetic faces are not obvious. In our results, by contrast, the subject's hair and beard gradually turn grey during the aging process and turn black during rejuvenation. Moreover, our network can simultaneously achieve age progression and regression in the same framework and generate favorable results with the background well preserved.
V. Conclusion

In this paper, we have proposed a novel Global and Local Consistent Age Generative Adversarial Network (GLCA-GAN) for age progression and regression. Our generator contains one global network and three local networks, which learn the whole facial structure and imitate subtle changes of crucial facial subregions simultaneously. To accelerate convergence and preserve most of the details in age-attribute-irrelevant areas, our generator learns the residual face instead of the whole face. We further introduce an age preserving loss to constrain the synthesis of age-specific faces. Furthermore, an identity preserving loss is imposed to ensure that the input face and the synthetic face belong to the same person. A pixel loss is also adopted to preserve detailed facial information. Experimental results on CACD, Morph and FG-NET demonstrate the flexibility, generality and efficiency of our method for age progression and regression.
References

-  Y. Su, S. Shan, X. Chen, and W. Gao, “Hierarchical ensemble of global and local classifiers for face recognition,” IEEE TIP, vol. 18, no. 8, pp. 1885–1896, 2009.
-  C. N. Duong, K. G. Quach, K. Luu, T. Le, and M. Savvides, “Learning from longitudinal face demonstration-where tractable deep modeling meets inverse reinforcement learning,” arXiv preprint arXiv:1711.10520, 2017.
-  M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.
-  M.-H. Tsai, Y.-K. Liao, and I.-C. Lin, “Human face aging with guided prediction and detail synthesis,” MTA, vol. 72, no. 1, pp. 801–824, 2014.
-  J. Suo, S.-C. Zhu, S. Shan, and X. Chen, “A compositional and dynamic model for face aging,” IEEE TPAMI, vol. 32, no. 3, pp. 385–401, 2010.
-  J. T. Todd, L. S. Mark, R. E. Shaw, J. B. Pittenger et al., “The perception of human growth,” SA, vol. 242, no. 2, pp. 132–144, 1980.
-  J. Suo, X. Chen, S. Shan, W. Gao, and Q. Dai, “A concatenational graph evolution aging model,” IEEE TPAMI, vol. 34, no. 11, pp. 2083–2096, 2012.
-  N. Ramanathan and R. Chellappa, “Modeling age progression in young faces,” in CVPR, 2006.
-  B. Tiddeman, M. Burt, and D. Perrett, “Prototyping and transforming facial textures for perception research,” IEEE CGA, vol. 21, no. 5, pp. 42–50, 2001.
-  I. Kemelmacher-Shlizerman, S. Suwajanakorn, and S. M. Seitz, “Illumination-aware age progression,” in CVPR, 2014.
-  X. Shu, J. Tang, H. Lai, L. Liu, and S. Yan, “Personalized age progression with aging dictionary,” in ICCV, 2015.
-  C. Nhan Duong, K. Luu, K. Gia Quach, and T. D. Bui, “Longitudinal face modeling via temporal deep restricted boltzmann machines,” in CVPR, 2016.
-  W. Wang, Z. Cui, Y. Yan, J. Feng, S. Yan, X. Shu, and N. Sebe, “Recurrent face aging,” in CVPR, 2016.
-  Z. Zhang, Y. Song, and H. Qi, “Age progression/regression by conditional adversarial autoencoder,” arXiv preprint arXiv:1702.08423, 2017.
-  H. Yang, D. Huang, Y. Wang, and A. K. Jain, “Learning face age progression: A pyramid architecture of gans,” arXiv preprint arXiv:1711.10352, 2017.
-  C. N. Duong, K. G. Quach, K. Luu, M. Savvides et al., “Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition,” arXiv preprint arXiv:1703.08617, 2017.
-  S. Zhou, W. Zhao, J. Feng, H. Lai, Y. Pan, J. Yin, and S. Yan, “Personalized and occupational-aware age progression by generative adversarial networks,” arXiv preprint arXiv:1711.09368, 2017.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” arXiv preprint arXiv:1612.07828, 2016.
-  Z. Li, Y. Hu, M. Zhang, M. Xu, and R. He, “Protecting your faces: Meshfaces generation and removal via high-order relation-preserving cyclegan,” in ICB, 2018.
-  X. Wu, R. He, Z. Sun, and T. Tan, “A light cnn for deep face representation with noisy labels,” arXiv preprint arXiv:1511.02683, 2015.
-  B.-C. Chen, C.-S. Chen, and W. H. Hsu, “Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset,” IEEE TMM, vol. 17, no. 6, pp. 804–815, 2015.
-  K. Ricanek and T. Tesafaye, “Morph: A longitudinal image database of normal adult age-progression,” in FGR, 2006.
-  A. Lanitis, C. J. Taylor, and T. F. Cootes, “Toward automatic simulation of aging effects on face images,” IEEE TPAMI, vol. 24, no. 4, pp. 442–455, 2002.
-  H. Yang, D. Huang, Y. Wang, H. Wang, and Y. Tang, “Face aging effect simulation using hidden factor analysis joint sparse representation,” IEEE TIP, vol. 25, no. 6, pp. 2493–2507, 2016.
-  “Face transformer (ft) demo.” http://cherry.dcs.aber.ac.uk/transformer/.
-  S. Liu, Y. Sun, D. Zhu, R. Bao, W. Wang, X. Shu, and S. Yan, “Face aging with contextual generative adversarial nets,” in ACM MM, 2017.