Lifespan Age Transformation Synthesis

03/21/2020 ∙ by Roy Or-El, et al.

We address the problem of single-photo age progression and regression: predicting how a person might look in the future, or how they looked in the past. Most existing aging methods are limited to changing the texture, overlooking transformations in head shape that occur during human aging and growth. This limits the applicability of previous methods to the aging of adults into slightly older adults, and applying those methods to photos of children does not produce quality results. We propose a novel multi-domain image-to-image generative adversarial network architecture whose learned latent space models a continuous bi-directional aging process. The network is trained on the FFHQ dataset, which we labeled for age, gender, and semantic segmentation. Fixed age classes are used as anchors to approximate continuous age transformation. Our framework can predict a full head portrait for ages 0–70 from a single photo, modifying both the texture and the shape of the head. We demonstrate results on a wide variety of photos and datasets, and show significant improvement over the state of the art.


1 Introduction

Age transformation is the problem of synthesizing a person's appearance at a different age while preserving their identity. When the age gap between the input and the desired output is large, e.g., going from a 1-year-old to a 15-year-old, the problem becomes highly challenging due to pronounced changes in head shape as well as facial texture. Solving for shape and texture together remains an open problem, particularly if the method is required to produce a lifespan of transformations, i.e., to synthesize the full 0–70 age span for any given input age (rather than a binary young-to-old transformation). In this paper, we aim to enable exactly that: a lifespan of transformations from a single portrait.

State-of-the-art methods [43, 1, 48, 31, 46, 44, 13] focus either on minor age gaps or mostly on adult-to-elderly progression, since a large part of the aging transformation for adults lies in the texture (rather than the shape), e.g., adding wrinkles. The method of Kemelmacher-Shlizerman et al. [20] allows substantial age transformations, but it can be applied only to a cropped face area rather than a full head, and cannot be modified to allow backward age prediction (adult to child) due to the optical-flow-based nature of the method. Apps like FaceApp allow considerable transitions from adult to child and vice versa, but similar to state-of-the-art methods they focus on texture, not shape, and thus produce sub-par results, in addition to handling only the binary case of two ambiguous age classes ("young", "old").

Theoretically, since time is a continuous variable, lifespan age transformation, e.g., 0–70 synthesis, should be modeled as a continuous process. However, this is very difficult to learn without large datasets of identity-specific ground truth (the same person captured over their lifespan). Therefore, we approximate this continuous transformation by representing age with a fixed number of anchor classes in a multi-domain transfer setting.
We represent age with six anchor classes: three for children ages 0–2, 3–6 and 7–9, one for young people aged 15–19, one for adults aged 30–39, and one for ages 50–69. These classes are designed to learn the geometric transformations in the ages where the most prominent shape changes occur, while covering the full span of ages in the latent space. To that end, we propose a new multi-domain image-to-image conditional GAN architecture (Fig. 2). Our main encoder, the identity encoder, encodes the input image to extract features associated with the person's identity. Next, unlike other multi-domain approaches, each age domain is represented by a unique distribution. Given a target age, an age vector code is sampled from the appropriate distribution. The age code is sent to a mapping network that maps age codes into a unified, learned latent space. The resulting latent space approximates continuous age transformations. Our decoder then fuses the learned latent age representation with the identity features via StyleGAN2's [19] modulated convolutions.

Disentanglement-based domain transfer approaches such as MUNIT [15] and FUNIT [27] can learn shape and texture deformation, e.g., transform cats into dogs. However, these methods cannot be directly applied to transform age in a multi-domain setting due to key limiting assumptions. MUNIT requires two generators per domain pair, so training it for even 6 age classes would require 30 generators, defeating scalability. FUNIT requires an exemplar image of the target class and is not guaranteed to apply only age features from the exemplar, as other attributes like skin color, gender and ethnicity may also be transferred. On the other hand, multi-domain transfer algorithms such as StarGAN [7] and STGAN [26] assume the domains to be distinct and to encompass contrasting facial attributes. Age domains, however, are highly correlated, and thus those algorithms struggle with the age transformation task. Methods like InterFaceGAN [37] aim to address this via latent space traversal of an unconditionally trained GAN. However, navigating these paths to transform a person to a specific age is difficult, as the computed traversal path does not always preserve identity characteristics. In contrast, our proposed algorithm can transform shape and texture across a wide range of ages while still maintaining the person's identity and attributes.

Another limiting factor in modeling full lifespan age transformations is that existing face aging datasets contain very few babies and children. To compensate, we labeled the FFHQ dataset [18] for gender and age via crowd-sourcing. In addition, for each image we extracted face semantic maps as well as head pose angles, glasses type and an eye occlusion score. Qualitative and quantitative evaluations show that our method outperforms state-of-the-art aging algorithms as well as multi-domain transfer and latent space traversal methods applied to the face aging task. The key contributions of this paper are: 1) enabling both shape and texture transformations for lifespan age synthesis, 2) a novel multi-domain image-to-image translation GAN architecture, and 3) the labelled FFHQ [18] dataset, which we will share with the community.

2 Related Work

Early works in age progression focused on building separate models for specific sub-effects of aging, e.g., wrinkles [45, 3, 2], cranio-facial growth [33, 34], and face sub-regions [40, 39]. Complete face transformation was explored via calculating average faces of age clusters and transitioning between them [4, 36], wavelet transformations [42], dictionary learning [38], factor analysis [47] and AAM face parameter fitting [22]. Age progression of children was specifically the focus of [20], where the aging process was modelled as cascaded flows between mean faces of pre-computed age clusters of eigenfaces.

Recently, deep learning has become the predominant approach to facial aging. Wang et al. [43] replaced the cascaded flows from [20] with a series of RNN forward passes. Zhang et al. [48] and Antipov et al. [1] proposed autoencoder GAN architectures where aging was performed by adding an age condition to the latent space. Duong et al. [32, 31] introduced a cascade of restricted Boltzmann machines and ResNet-based probabilistic generative models (respectively) to carry out the aging process between age groups. Yang et al. [46] proposed a GAN-based architecture with a pyramidal discriminator over age detection features of the generated aged image. Liu et al. [28] introduced an additional age transition discriminator to supervise the aging transitions between the age clusters. Li et al. [25] fused the outputs of global and local patch generators to synthesize the aged face. Wang et al. [44] added a facial feature loss as well as an age classification loss to force the output image to keep the same identity while still progressing the age. Liu et al. [29] added gender and race attributes to their GAN architecture to help avoid biases in the training data; they also proposed a new wavelet-based discriminator to improve image quality. He et al. [13] encoded personalized aging bases and applied specific age transforms to create an age representation used to decode the aged face. Most of these approaches focus on aging adults to elderly (mostly texture changes). Our method is the first to propose full lifespan aging, 0–70 years old. We refer the reader to these excellent surveys [9, 10, 35] for a broader overview of the advances in age progression over the years.

The recent success of generative adversarial networks [11] significantly improved image-to-image translation between two domains, with both paired [16] (Pix2Pix) and unpaired [50] (CycleGAN) training data. More recent methods that disentangle the image into style and content latent spaces, e.g., MUNIT [15] and DRIT [24], share the content space but create multiple disjoint style latent spaces. These methods are hard to scale to a large number of domains as they require training two generators per pair of domains. FUNIT [27] used a single generator that disentangles the image into shared content and style latent spaces; however, it requires an additional target image to explicitly encode the style. One may consider aging effects as "style". However, when transferring style between two age domains, non-age-related styles, like skin color, gender and ethnicity, might be transferred as well. Multi-domain transfer algorithms like StarGAN [8] and STGAN [26] can edit multiple facial attributes, but those attributes are assumed to be distinct and contrasting. StarGAN generalizes CycleGAN to map an input image into multiple domains using a single generator. STGAN uses selective transfer units with an encoder-decoder architecture to select and modify encoded features for attribute editing. These methods, however, are not designed for the age translation task, as aging domains are highly correlated and not distinct. Our proposed architecture enables translations between highly correlated domains, and obtains a continuous, traversable age latent space while maintaining identity and image quality.

3 Algorithm

3.1 Overview

Our main goal is to design an algorithm that can learn the head shape deformation as well as the appearance changes across a wide range of ages. Ideally, one would turn to supervised learning to tackle this problem. However, since this process is continuous in nature, it would require a large number of aligned image pairs of the same person at different ages, spanning all possible transitions. Unfortunately, no existing large-scale dataset captures aging changes over more than several years, let alone an entire lifespan. Furthermore, small-scale datasets like FGNET [22] capture subjects in different poses, environments and lighting conditions, making supervised training very challenging. We therefore turn to adversarial learning and leverage the recent progress in unpaired image-to-image translation GAN architectures [41, 50, 7, 15, 24, 27].

We propose to approximate the continuous aging process with six anchor age classes, which turns the task into a multi-domain transfer problem. We propose a novel generative adversarial network architecture that consists of a single conditional generator and a single discriminator. The conditional generator is responsible for transitions across age groups and consists of three parts: an identity encoder, a mapping network, and a decoder. We assume that while a person's appearance changes with age, their identity remains fixed. Therefore, we encode age and identity along separate paths. Each age group is represented by a unique pre-defined distribution. Given a target age, we sample a vector age code from the respective age group's distribution. The age code is sent to a mapping network that maps it into a learned, unified age latent space. The resulting latent space approximates continuous age transformations. The input image is processed separately by the identity encoder to extract identity features. The decoder takes the mapping network's output, a target age latent vector, and injects it into the identity features using modulated convolutions, originally proposed in StyleGAN2 [19]. During training, we use an additional age encoder to relate both real and generated images to the pre-defined distributions of their respective age classes. To transform to an age not represented by our anchor classes, we compute the age latent codes of its two neighboring anchor classes and linearly interpolate between them to obtain the desired age code as input to the decoder.

Figure 2: Algorithm overview.

3.2 Framework

Our algorithm takes a facial image and a target age cluster as inputs, and generates an output image of the same person at the desired age cluster. Figure 2 shows the model architecture as well as the training scheme. As a pre-processing step, background and clothing items are removed from the image using its corresponding semantic mask, which is part of our dataset (see Sec. 4 for details).

Our age input space, \(\mathcal{Z}_{age} \subset \mathbb{R}^{\ell n}\), is represented by an \(\ell n\)-element vector, where \(n\) is the number of age classes and \(\ell\) is the length of each per-class block. When the input age class is \(i\), we generate a vector \(z_a\) as

\[ z_a \sim \mathcal{N}\big(\mathbb{1}_{\ell i,\, \ell(i+1)-1},\; \sigma^2 I\big) \tag{1} \]

where \(\mathbb{1}_{p,q}\) is an \(\ell n\)-element vector that contains ones on elements \(p\) through \(q\) and zeros elsewhere, and \(I\) is the identity matrix. A single generator is used to generate all target ages. Our generator consists of an identity encoder, a latent mapping network and a decoder. During training we also use an age encoder to embed both real and generated images into the age latent space.
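The per-class sampling described above can be sketched in a few lines. The block length and standard deviation below are illustrative placeholders, since the exact values are not given in this text:

```python
import random

def sample_age_code(class_idx, n_classes=6, block_len=4, sigma=0.2, rng=None):
    """Sample an age code for one anchor class (a sketch of Eq. 1).

    The code is a (block_len * n_classes)-vector drawn from a Gaussian whose
    mean is 1 on the block belonging to `class_idx` and 0 elsewhere;
    `block_len` and `sigma` are illustrative values, not taken from the paper.
    """
    rng = rng or random.Random()
    lo, hi = block_len * class_idx, block_len * (class_idx + 1)
    return [(1.0 if lo <= k < hi else 0.0) + rng.gauss(0.0, sigma)
            for k in range(block_len * n_classes)]

# Class 2 occupies elements 8..11; those entries are centered at 1,
# the rest near 0.
code = sample_age_code(2, rng=random.Random(0))
```

Because each class occupies its own block, codes of different classes remain well separated while the Gaussian noise gives each class a full distribution to sample from.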

Identity encoder. The identity encoder \(E_{id}\) takes an input image \(x\) and extracts an identity feature tensor \(z_{id} = E_{id}(x)\). These features contain information about the image's local structures and the general shape of the face, which play a key role in generating the same identity. The identity encoder contains two downsampling layers followed by four residual blocks [12].

Mapping network. The mapping network embeds an age input vector \(z_a\) into the unified age latent space, \(w_a = M(z_a)\), where \(M\) is an 8-layer MLP and \(w_a\) is a 256-element latent vector. The mapping network learns an age latent space that enables smooth transition and interpolation between age clusters, needed for continuous age transformations.

Decoder. Our decoder \(F\) takes an age latent code \(w_a\) along with identity features \(z_{id}\) and produces an output image. The identity features are processed by styled convolution blocks [18]. To reduce "water droplet" artifacts [19], we replace the AdaIN normalization layers [14] with the modulated convolution layers proposed in StyleGAN2 [19]. In addition, each modulated convolution layer is followed by a pixel norm layer [17], which we observed further reduces these artifacts. We omit the noise injection in our implementation. Overall, we use four styled convolution layers to manipulate the identity code and two upsampling styled convolution layers to produce an image at the original size. The overall generator mapping from an input image \(x_s\) and a target age vector \(z_t\) to an output image \(x_{s\to t}\) is:

\[ x_{s\to t} = G(x_s, z_t) = F\big(E_{id}(x_s),\, M(z_t)\big) \tag{2} \]
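As a minimal sketch of this composition, the three generator parts can be modeled as arbitrary callables; the stand-ins below are this sketch's own, not the actual networks:

```python
def make_generator(identity_encoder, mapping_network, decoder):
    """Compose the generator of Eq. 2: G(x, z) = F(E_id(x), M(z)).

    The arguments are placeholders for the identity encoder E_id, the mapping
    network M and the decoder F; any callables work, which keeps this sketch
    independent of a particular deep learning framework.
    """
    def generator(image, age_code):
        identity_features = identity_encoder(image)     # E_id(x)
        age_latent = mapping_network(age_code)          # w = M(z)
        return decoder(identity_features, age_latent)   # F(E_id(x), w)
    return generator

# Toy stand-ins: an "image" is a list of numbers, an "age code" a list too.
G = make_generator(sum, len, lambda features, w: features + w)
```

The key design point this captures is that age never passes through the identity path: the only way the target age influences the output is through the decoder's fusion step.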

Age encoder. The age encoder \(E_{age}\) enforces a mapping of the input image into its correct location in the age vector space \(\mathcal{Z}_{age}\): it produces an age vector that corresponds to the source age cluster of the image. The age encoder needs to capture more global information in order to encode the general appearance regardless of identity. To this end, we follow the architecture of MUNIT's style encoder [15], with four downsampling layers followed by global average pooling and a fully connected layer that produces an age vector. Note that the age encoder is not used at inference time.

Discriminator. We use the StyleGAN discriminator [18] with minibatch standard deviation. We modify the last fully connected layer to have \(n\) outputs, one per age class, in order to discriminate multiple classes as suggested by Liu et al. [27]. For a real image from class \(s\), we only penalize the \(s\)-th output; respectively, only the \(t\)-th output is penalized for a generated image of class \(t\).
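The class-conditional penalization can be written compactly. A softplus-based non-saturating form is assumed here for illustration, with per-class logits as plain floats:

```python
import math

def softplus(v):
    """Numerically stable log(1 + exp(v))."""
    return max(v, 0.0) + math.log1p(math.exp(-abs(v)))

def discriminator_loss(real_logits, fake_logits, s, t):
    """Discriminator loss restricted to the class-specific outputs.

    `real_logits` holds the discriminator's per-class outputs for a real
    image of class s, `fake_logits` for a generated image of target class t.
    Only the s-th entry of the real pass and the t-th entry of the fake pass
    are penalized, as described above; the softplus form is an assumption.
    """
    return softplus(-real_logits[s]) + softplus(fake_logits[t])
```

All other outputs receive no gradient, so each of the \(n\) heads learns to separate real from fake only within its own age class.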

3.3 Training Scheme

An overview of the training scheme is given in Figure 2. To compensate for imbalances between age clusters, in each training iteration we first sample a source cluster \(s\) and a target cluster \(t\) (\(s \neq t\)), and then sample an image from each class. We then perform three forward passes:

\[ x_{s\to t} = G(x_s, z_t), \qquad x_{s\to s} = G(x_s, z_s), \qquad x_{s\to t\to s} = G(x_{s\to t}, z_s) \tag{3} \]

Here, \(x_{s\to t}\) is the generated image at the target age \(t\) and \(x_{s\to s}\) is the reconstructed image at the source age \(s\). We also apply a cycle pass to reconstruct the image at the source age \(s\) from the generated image at age \(t\). These passes provide all the signals needed to minimize the following loss functions.
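The three passes can be written out explicitly; `G` below is any generator callable taking an image and an age code:

```python
def training_passes(G, x_s, z_s, z_t):
    """The three generator forward passes of Eq. 3, written out explicitly.

    Returns the target-age image x_{s->t}, the self-reconstruction x_{s->s}
    and the cycle reconstruction x_{s->t->s}.
    """
    x_st = G(x_s, z_t)     # translate to the target age t
    x_ss = G(x_s, z_s)     # identity translation back to the source age s
    x_sts = G(x_st, z_s)   # cycle: bring the aged image back to age s
    return x_st, x_ss, x_sts
```

With a toy generator that simply concatenates its arguments, the three outputs make the data flow visible: `training_passes(lambda x, z: x + z, "x", "s", "t")` yields the translated, self-reconstructed and cycle-reconstructed "images".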

Adversarial loss. We use an adversarial loss conditioned on the source and target age clusters of the real and fake images, respectively:

\[ \mathcal{L}_{adv} = \mathbb{E}_{x_s}\big[\log D_s(x_s)\big] + \mathbb{E}_{x_s, z_t}\big[\log\big(1 - D_t(G(x_s, z_t))\big)\big] \tag{4} \]

where \(D_i\) is the \(i\)-th output of the discriminator, \(s\) is the source age cluster of the real image and \(t\) is the target cluster of the generated image.

Self-reconstruction loss. This loss forces the generator to learn the identity translation. When the given target age cluster is the same as the source cluster, we minimize

\[ \mathcal{L}_{rec} = \mathbb{E}_{x_s}\big[\lVert G(x_s, z_s) - x_s \rVert_1\big] \tag{5} \]

Cycle loss. To aid identity preservation as well as a consistent skin tone, we employ the cycle consistency loss [50]:

\[ \mathcal{L}_{cyc} = \mathbb{E}_{x_s, z_t}\big[\lVert G(G(x_s, z_t), z_s) - x_s \rVert_1\big] \tag{6} \]

Identity feature loss. To make the generator keep the person's identity throughout the aging process, we minimize the distance between the identity features of the original image and those of the generated image:

\[ \mathcal{L}_{id} = \mathbb{E}_{x_s, z_t}\big[\lVert E_{id}(G(x_s, z_t)) - E_{id}(x_s) \rVert_1\big] \tag{7} \]

Age vector loss. We enforce a correct embedding of real and generated images into the input age space by penalizing the distance between the age encoder's outputs and the age vectors that were sampled to generate outputs at the source and target age clusters, respectively. The loss is defined as

\[ \mathcal{L}_{age} = \mathbb{E}_{x_s}\big[\lVert E_{age}(x_s) - z_s \rVert_1\big] + \mathbb{E}_{x_s, z_t}\big[\lVert E_{age}(G(x_s, z_t)) - z_t \rVert_1\big] \tag{8} \]

The overall optimization objective is

\[ \mathcal{L} = \mathcal{L}_{adv} + \lambda_{rec}\mathcal{L}_{rec} + \lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{id}\mathcal{L}_{id} + \lambda_{age}\mathcal{L}_{age} \tag{9} \]
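A minimal sketch of the weighted combination of the five training losses, with string keys of this sketch's own choosing:

```python
def total_loss(losses, weights):
    """Weighted sum of the five training losses (Eq. 9).

    `losses` maps names to scalar loss values and `weights` holds the lambda
    coefficients; the adversarial term carries an implicit weight of 1.
    The string keys are this sketch's own naming, not identifiers from any
    released code.
    """
    return losses["adv"] + sum(weights[k] * losses[k]
                               for k in ("rec", "cyc", "id", "age"))
```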

3.4 Implementation details

We train two separate models, one for males and one for females. Each model was trained with a batch size of 12 for 400 epochs on 4 GeForce RTX 2080 Ti GPUs. We use the Adam optimizer [21]; the learning rate is decayed by 0.5 after 50 and 100 epochs. Similar to StyleGAN [18], we apply the non-saturating adversarial loss [11] with R1 regularization [30]. In addition, we reduce the learning rate of the mapping network by a constant factor and employ an exponential moving average of the generator weights. The loss weights \(\lambda_{rec}\), \(\lambda_{cyc}\), \(\lambda_{id}\) and \(\lambda_{age}\) are fixed throughout training. We refer the reader to the supplementary material for the architecture details of each component of our framework. The code and pre-trained models will be released to the community.
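The exponential moving average of the generator weights can be sketched as follows; the 0.999 decay is a typical default assumed here for illustration, not a value stated in this text:

```python
def ema_update(avg_params, cur_params, decay=0.999):
    """One step of the exponential moving average of generator weights.

    For every parameter, p_avg <- decay * p_avg + (1 - decay) * p_cur.
    Parameters are plain lists of floats here; in a real framework the same
    update runs over every tensor of the averaged generator copy.
    """
    return [decay * a + (1.0 - decay) * c
            for a, c in zip(avg_params, cur_params)]
```

The averaged copy, rather than the raw training weights, is what is typically used at inference time, which smooths out iteration-to-iteration noise in the generator.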

4 Dataset

We introduce a new facial aging dataset, 'FFHQ-Aging', based on images from FFHQ [18]. We used the Figure-Eight (https://www.figure-eight.com/) crowd-sourcing platform to annotate gender and age cluster for all images in FFHQ, collecting 3 judgements per image. We defined 10 age clusters that capture both the geometric and the appearance changes throughout a person's life: 0–2, 3–6, 7–9, 10–14, 15–19, 20–29, 30–39, 40–49, 50–69 and 70+. We trained a DeepLabV3 [6] network on the CelebAMask-HQ [23] dataset and used the trained model to extract 19-label face semantic maps for all 70K images. Finally, we used the Face++ (https://www.faceplusplus.com/) platform to obtain head pose angles, glasses type (none, normal, or dark) and left- and right-eye occlusion scores. We use the same alignment procedure as [17] with a slightly larger crop size (see supplementary for details). We generated our images and semantic maps at a resolution of 256x256, but the procedure is applicable to higher resolutions as well. Figure 3 shows sample image and face semantics pairs from the dataset. There are 32,170 males and 37,830 females in the dataset, which will be released to the research community.

For the purpose of training our network, we assigned images 0–68,999 for training and images 69,000–69,999 for testing. We then pruned images with low gender or age annotation confidence, large head yaw or pitch angles, a dark-glasses label, and eye occlusion scores greater than 90 for a single eye or 50 for the eye pair. After pruning, we selected 6 age clusters to train on: 0–2, 3–6, 7–9, 15–19, 30–39 and 50–69. This process resulted in 14,232 male and 14,066 female training images, along with 198 male and 205 female test images.
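The pruning step can be sketched as a per-image filter. Only the 90/50 eye occlusion limits follow the description above; the field names and the confidence and pose thresholds below are placeholders, as the exact values are not recoverable from this text:

```python
def keep_sample(ann, min_gender_conf=0.66, min_age_conf=0.6,
                max_yaw=40.0, max_pitch=30.0):
    """Decide whether an annotated FFHQ-Aging image survives pruning.

    `ann` is a dict with the annotation fields described in the text
    (annotation confidences, head pose, glasses type, eye occlusion).
    The keyword defaults are hypothetical thresholds for illustration.
    """
    return (ann["gender_conf"] >= min_gender_conf
            and ann["age_conf"] >= min_age_conf
            and abs(ann["yaw"]) <= max_yaw
            and abs(ann["pitch"]) <= max_pitch
            and ann["glasses"] != "dark"
            and ann["eye_occlusion_single"] <= 90
            and ann["eye_occlusion_pair"] <= 50)
```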

Figure 3: FFHQ-Aging dataset. We label 70k images from the FFHQ dataset [18] for gender and age via crowd-sourcing. In addition, for each image we extract a face semantic map as well as head pose angles, glasses type and an eye occlusion score.
Figure 4: Comparison with FaceApp filters. Columns: input, FaceApp "Old", FaceApp "Cool Old", ours 50–69; input, FaceApp "Young2", ours 15–19, ours 0–2. Note that FaceApp cannot deform the shape of the head or generate extreme ages, e.g., 0–2.

5 Evaluation

5.1 Comparison with Commercial Apps

We perform a qualitative comparison with the outputs of FaceApp (https://www.faceapp.com/). FaceApp provides binary facial aging filters that make people appear younger or older. Figure 4 shows that although FaceApp's output image quality is high, it cannot perform shape transformation and is mostly limited to skin texture. For transformations to an older age, we applied both the "old" and "cool old" filters available in FaceApp and compared against our output for the 50–69 age range. For transformations to a younger age, we applied the "young2" filter, which is roughly equivalent to our 15–19 class. We also show our outputs for the 0–2 class to demonstrate our algorithm's ability to learn head deformation. Even though FaceApp applies a dedicated filter for each transition, in contrast to our multi-domain generator, its age filters still do not transform the shape of the head.

5.2 Comparison with Age transformation methods

We compare our algorithm to three state-of-the-art age transformation methods: IPCGAN [44], the method of Yang et al. [46], referred to as PyGAN, and S2GAN [13]. Qualitative evaluation. We compare with PyGAN and S2GAN on the CACD dataset [5] in Figure 5 and Figure 6, respectively, on the images showcased by the authors in their papers. We train on FFHQ and test on CACD, while both PyGAN and S2GAN were trained on CACD. Even though PyGAN is trained with a different generator for each age cluster, our network still achieves better photorealism across multiple output classes with a single generator. In comparison to S2GAN, our algorithm creates more pronounced wrinkles and facial features as the age progresses, while spanning a wider range of age transformations. We also evaluate our performance w.r.t. IPCGAN trained on both the CACD and FFHQ-Aging datasets in Figure 7. Here, we use IPCGAN's publicly available code and retrain their framework on the FFHQ-Aging dataset for a fair comparison (termed 'IPCGAN-retrained'). Our method outperforms both IPCGAN models in terms of image quality and shape deformation.

Figure 5: Comparison w.r.t. PyGAN [46]. Our method produces superior results in terms of photorealism and the span of possible age transformations compared to PyGAN, while using a single generator. Note that the 40–49 class outputs are a result of latent interpolation, this age class was not used during training.
Figure 6: Comparison w.r.t. S2GAN [13]. We are able to produce sharper wrinkles for older classes as well as more juvenile looking faces for the 15–19 age class. Note that the 40–49 class outputs are a result of latent interpolation, this age class was not used during training.
Figure 7: Comparison w.r.t. IPCGAN [44] on the FFHQ-Aging dataset. Left: our method. Middle: IPCGAN trained on CACD. Right: IPCGAN trained on FFHQ-Aging. The proposed framework outperforms IPCGAN, producing sharper, more detailed and more realistic outputs across all age classes.

User study. In addition, we performed a user study to compare PyGAN's results with ours. In the study we measure: (a) how well the method preserves the identity of the person in the photo, (b) how close the perceived age is to the target age, and (c) which result is better overall. Our hypothesis was that PyGAN would excel at identity preservation but not at the other metrics, since PyGAN tends to keep its results close to the input photo (and thus cannot perform large age changes). To measure identity preservation, we show the input and output photos and ask whether the two contain the same person. To measure age accuracy, we show the output photo and ask for the age of the person, selected from a list of age ranges. To measure overall quality, we show an input photo and, below it, a PyGAN result and our result side by side in randomized order, and ask which result is a better version of the input person in the target age range. We used Amazon Mechanical Turk to collect answers for 20 randomly selected images from the FFHQ-Aging dataset, repeating each question 5 times, for a total of 500 unique answers. We show the user study interface in the supplementary material.

Age range: 50–69

                  PyGAN [46]   Ours
Same identity         19        13
Age difference      23.1       6.9
Overall better         4        16

Table 1: User study results vs. PyGAN [46]. PyGAN is expectedly better at identity preservation, at the cost of not generating the target age (mean age difference 23.1, compared to our 6.9). When asked which is better overall, users preferred our results in 16 out of 20 cases.
Age range:          15–19           30–39           50–69            All
                IPCGAN  Ours    IPCGAN  Ours    IPCGAN  Ours    IPCGAN  Ours
Same identity      50     50       50     45       50     41       150   136
Age difference   19.3   12.7     20.0   11.6     28.4    9.8      22.6  11.3
Overall better      9     40        8     42       10     38        27   120

Table 2: User study results vs. IPCGAN [44] for three age groups. IPCGAN is expectedly better at identity preservation, at the cost of not generating the target age (mean age difference 22.6, compared to our 11.3). When asked which is better overall, users preferred our results in 120 out of 150 cases.

User study results are presented in Table 1. As expected, PyGAN preserves subject identity more often (in 19 out of 20 cases, compared to 13 for our method). This comes at the cost of much larger age gaps: the perceived age of PyGAN's results is on average 23.1 years away from the target age, compared to 6.9 years for ours. Since identity preservation and age accuracy may conflict, we also asked participants to evaluate which result is better overall. For 16 out of the 20 test photos, our results were rated better than PyGAN's. In a second user study, we compare our results to those of IPCGAN trained on the CACD dataset. We report results per age range, as well as overall (Table 2). We collected answers for 3 age ranges, with 50 randomly selected images per range, repeating each question 3 times, for a total of 2250 unique answers. Similarly to PyGAN, IPCGAN better preserves identity (in 100% of the cases) at the cost of age inaccuracies (its results are on average 22.6 years away from the target age). When asked which result is better overall, participants picked ours in 120 cases, compared to 27 for IPCGAN.

Figure 8: Comparison with multi-domain transfer methods (rows alternate StarGAN, STGAN and ours). The leftmost column is the input, followed by transformations to age classes 0–2, 3–6, 7–9, 15–19, 30–39 and 50–69, respectively. Multi-domain transfer methods struggle to model the gradual head deformation associated with age progression. Our method also produces better images in terms of quality and photorealism, while correctly modeling the growth of the head, compared to StarGAN [7] and STGAN [26].

Comparison with multi-class domain transfer methods. To validate our claim that multi-domain transfer methods struggle with shape deformations, we compare our algorithm against two state-of-the-art baselines, StarGAN [7] and STGAN [26]. We retrain both algorithms on our FFHQ-Aging dataset using the same pre-processing procedure (see Sec. 3.2) to mask background and clothes, and the same sampling technique to compensate for dataset imbalances (see Sec. 3.3). Figure 8 shows that although STGAN occasionally deforms the shape for the 0–2 class, neither StarGAN nor STGAN can produce a consistent shape transformation across age classes.

Figure 9: Comparison with InterFaceGAN [37]. Columns show the input followed by age classes 0–2, 3–6, 7–9, 15–19, 30–39 and 50–69 (the age cluster legend applies only to our method). Rows 1, 3, 5: InterFaceGAN results on StyleGAN-generated images; row 7: InterFaceGAN result on a real image, embedded into the StyleGAN latent space using LIA [49]. Rows 2, 4, 6: our results on the same StyleGAN-generated images; bottom row: our result on the real image. Existing state-of-the-art interpolation methods cannot maintain the identity (rows 1, 3, 5, 7), gender (row 5) and ethnicity (rows 3, 5) of the input image. In addition, as seen in rows 1 and 3, using the same traversal values on different photos produces different age ranges.

Latent space interpolation. We show our method's ability to generalize and produce continuous age transformations by interpolation in the latent space. Interpolation between two neighboring age classes is done by generating two age latent codes \(w_i, w_j\), where \(i, j\) are adjacent age classes, and then computing the desired interpolated code \(w = (1-\alpha)w_i + \alpha w_j\). The rest of the process is identical to Sec. 3. We compare our results in two setups. In the first, we demonstrate that the StyleGAN [18] latent space paths found by InterFaceGAN [37] cannot maintain identity, gender and race. We sample a latent code in StyleGAN's latent space, which generates a realistic face image. We take the latent space boundary from InterFaceGAN that changes age while preserving gender, and edit the sampled latent code along that boundary to produce both younger and older versions of the face. In the second setup, we compare against real images embedded into StyleGAN's latent space using the LIA [49] framework. We then change the age of the embedded face by traversing the latent space across the age boundary learned with InterFaceGAN (this boundary was not gender-conditioned) to generate younger and older versions. Figure 9 shows that despite the excellent photorealism of InterFaceGAN on generated faces, the person's identity is lost, and on some occasions the gender is lost too. In addition, note how InterFaceGAN requires different traversal values for each input in order to achieve a full lifespan transformation, as opposed to our consistent age outputs along the traversal paths.
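The anchor-class interpolation described above amounts to a per-element linear blend; codes are plain lists of floats in this sketch:

```python
def interpolate_age_code(w_i, w_j, alpha):
    """Linearly interpolate the latent codes of two adjacent anchor classes.

    alpha in [0, 1] moves from class i to class j; this yields a code for an
    in-between age such as 40-49, which was never seen during training.
    """
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(w_i, w_j)]
```

Because the mapping network was trained to make the age latent space smooth, the interpolated code can be fed straight to the decoder in place of an anchor-class code.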

Figure 10: Limitations. Our network struggles to generalize to extreme poses (top row), to remove glasses (left column) and thick beards (bottom right), and to handle occlusions (top left).

5.3 Limitations

While our network generalizes age transformations well, it struggles to generalize to some other cases, such as extreme poses, removing glasses and thick beards when rejuvenating a person, and handling occluded faces. Figure 10 shows a representative example of each case. We suspect that these issues stem from a combination of using just two downsampling layers in the identity encoder and the latent identity loss: the former creates relatively local feature maps, while the latter enforces the latent identity spatial representations of two different age classes to be the same, which in turn limits the network's ability to generalize to these cases.

6 Conclusions

We presented an algorithm that produces reliable age transformations for ages 0–70. Unlike previous approaches, our framework learns to change both shape and texture as part of the aging process. The proposed architecture and training scheme generalize age accurately, and thus we can produce results for ages never seen during training via latent space interpolation. In addition, we introduced a new facial dataset that can be used by the vision community for various tasks. As demonstrated in our experiments, our method produces state-of-the-art results.

Acknowledgements

We wish to thank Xuan Luo and Aaron Wetzler for their valuable discussions and advice, and Thevina Dokka for her help in building the FFHQ-Aging dataset. This work was supported in part by Futurewei Technologies. O.F. was supported by the Brown Institute for Media Innovation. Images in Figures 5 and 6 were licensed from David Hesketh and Getty Images. All other images are creative commons and were taken from the FFHQ dataset.

References

  • [1] Antipov, G., Baccouche, M., Dugelay, J.L.: Face aging with conditional generative adversarial networks. arXiv preprint arXiv:1702.01983 (2017)
  • [2] Bando, Y., Kuratate, T., Nishita, T.: A simple method for modeling wrinkles on human skin. In: 10th Pacific Conference on Computer Graphics and Applications, 2002. Proceedings. pp. 166–175. IEEE (2002)
  • [3] Boissieux, L., Kiss, G., Thalmann, N.M., Kalra, P.: Simulation of skin aging and wrinkles with cosmetics insight. In: Computer Animation and Simulation 2000, pp. 15–27. Springer (2000)
  • [4] Burt, D.M., Perrett, D.I.: Perception of age in adult caucasian male faces: Computer graphic manipulation of shape and colour information. Proceedings of the Royal Society of London. Series B: Biological Sciences 259(1355), 137–143 (1995)
  • [5] Chen, B.C., Chen, C.S., Hsu, W.H.: Cross-age reference coding for age-invariant face recognition and retrieval. In: Proceedings of the European Conference on Computer Vision (ECCV) (2014)
  • [6] Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
  • [7] Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
  • [8] Choi, Y., Uh, Y., Yoo, J., Ha, J.W.: Stargan v2: Diverse image synthesis for multiple domains. arXiv preprint arXiv:1912.01865 (2019)
  • [9] Duong, C.N., Luu, K., Quach, K.G., Bui, T.D.: Longitudinal face aging in the wild-recent deep learning approaches. arXiv preprint arXiv:1802.08726 (2018)
  • [10] Fu, Y., Guo, G., Huang, T.S.: Age synthesis and estimation via faces: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(11), 1955–1976 (2010)
  • [11] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672–2680 (2014)
  • [12] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
  • [13] He, Z., Kan, M., Shan, S., Chen, X.: S2gan: Share aging factors across ages and share aging trends among individuals. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
  • [14] Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1501–1510 (2017)
  • [15] Huang, X., Liu, M.Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 172–189 (2018)
  • [16] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
  • [17] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. In: International Conference on Learning Representations (2018)
  • [18] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4401–4410 (2019)
  • [19] Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. CoRR abs/1912.04958 (2019)
  • [20] Kemelmacher-Shlizerman, I., Suwajanakorn, S., Seitz, S.M.: Illumination-aware age progression. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)
  • [21] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  • [22] Lanitis, A., Taylor, C.J., Cootes, T.F.: Toward automatic simulation of aging effects on face images. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(4), 442–455 (2002)
  • [23] Lee, C.H., Liu, Z., Wu, L., Luo, P.: Maskgan: Towards diverse and interactive facial image manipulation. arXiv preprint arXiv:1907.11922 (2019)
  • [24] Lee, H.Y., Tseng, H.Y., Huang, J.B., Singh, M., Yang, M.H.: Diverse image-to-image translation via disentangled representations. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 35–51 (2018)
  • [25] Li, P., Hu, Y., Li, Q., He, R., Sun, Z.: Global and local consistent age generative adversarial networks. arXiv preprint arXiv:1801.08390 (2018)
  • [26] Liu, M., Ding, Y., Xia, M., Liu, X., Ding, E., Zuo, W., Wen, S.: Stgan: A unified selective transfer network for arbitrary image attribute editing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3673–3682 (2019)
  • [27] Liu, M.Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., Kautz, J.: Few-shot unsupervised image-to-image translation. arXiv preprint arXiv:1905.01723 (2019)
  • [28] Liu, S., Shi, J., Liang, J., Yang, M.H.: Face parsing via recurrent propagation. arXiv preprint arXiv:1708.01936 (2017)
  • [29] Liu, Y., Li, Q., Sun, Z.: Attribute-aware face aging with wavelet-based generative adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
  • [30] Mescheder, L., Geiger, A., Nowozin, S.: Which training methods for GANs do actually converge? In: International Conference on Machine Learning (ICML) (2018)
  • [31] Nhan Duong, C., Gia Quach, K., Luu, K., Le, N., Savvides, M.: Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition. In: The IEEE International Conference on Computer Vision (ICCV) (2017)
  • [32] Nhan Duong, C., Luu, K., Gia Quach, K., Bui, T.D.: Longitudinal face modeling via temporal deep restricted boltzmann machines. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5772–5780 (2016)
  • [33] Ramanathan, N., Chellappa, R.: Modeling age progression in young faces. In: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. vol. 1, pp. 387–394. IEEE (2006)
  • [34] Ramanathan, N., Chellappa, R.: Modeling shape and textural variations in aging faces. In: Automatic Face & Gesture Recognition, 2008. FG’08. 8th IEEE International Conference on. pp. 1–8. IEEE (2008)
  • [35] Ramanathan, N., Chellappa, R., Biswas, S.: Computational methods for modeling facial aging: A survey. Journal of Visual Languages & Computing 20(3), 131–144 (2009)
  • [36] Rowland, D.A., Perrett, D.I.: Manipulating facial appearance through shape and color. IEEE computer graphics and applications 15(5), 70–76 (1995)
  • [37] Shen, Y., Gu, J., Tang, X., Zhou, B.: Interpreting the latent space of gans for semantic face editing. arXiv preprint arXiv:1907.10786 (2019)
  • [38] Shu, X., Tang, J., Lai, H., Liu, L., Yan, S.: Personalized age progression with aging dictionary. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3970–3978 (2015)
  • [39] Suo, J., Chen, X., Shan, S., Gao, W., Dai, Q.: A concatenational graph evolution aging model. IEEE transactions on pattern analysis and machine intelligence 34(11), 2083–2096 (2012)
  • [40] Suo, J., Zhu, S.C., Shan, S., Chen, X.: A compositional and dynamic model for face aging. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(3), 385–401 (2010)
  • [41] Taigman, Y., Polyak, A., Wolf, L.: Unsupervised cross-domain image generation. ICLR (2017)
  • [42] Tiddeman, B., Burt, M., Perrett, D.: Prototyping and transforming facial textures for perception research. IEEE computer graphics and applications 21(5), 42–50 (2001)
  • [43] Wang, W., Cui, Z., Yan, Y., Feng, J., Yan, S., Shu, X., Sebe, N.: Recurrent face aging. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2378–2386 (2016)
  • [44] Wang, Z., Tang, X., Luo, W., Gao, S.: Face aging with identity-preserved conditional generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7939–7947 (2018)
  • [45] Wu, Y., Thalmann, N.M., Thalmann, D.: A plastic-visco-elastic model for wrinkles in facial animation and skin aging. In: Fundamentals of Computer Graphics, pp. 201–213. World Scientific (1994)
  • [46] Yang, H., Huang, D., Wang, Y., Jain, A.K.: Learning face age progression: A pyramid architecture of gans. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
  • [47] Yang, H., Huang, D., Wang, Y., Wang, H., Tang, Y.: Face aging effect simulation using hidden factor analysis joint sparse representation. IEEE Transactions on Image Processing 25(6), 2493–2507 (2016)
  • [48] Zhang, Z., Song, Y., Qi, H.: Age progression/regression by conditional adversarial autoencoder. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
  • [49] Zhu, J., Zhao, D., Zhang, B.: Lia: Latently invertible autoencoder with adversarial learning. arXiv preprint arXiv:1906.08090 (2019)
  • [50] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: The IEEE International Conference on Computer Vision (ICCV) (2017)

Appendix 0.A Networks Architecture

Our framework consists of a generator, which contains the identity encoder, mapping network, and decoder, as well as an age encoder and a discriminator. We describe the architecture of each component below.

Identity encoder. The identity encoder starts with a convolution layer that processes the input image. That layer is followed by two 2-strided convolution layers that downsample the feature maps, and four residual blocks [12] that produce the final identity features. Each convolution layer is followed by Pixel-norm [17], which we empirically found to produce fewer artifacts than Instance-norm, and a ReLU activation. We applied equalized learning rate [17] to each convolution layer. Table 3 shows the identity encoder architecture.

Mapping network. The mapping network is an 8-layer MLP. It takes the input age code vector, whose length is proportional to the number of age classes, and outputs a 256-element age latent code. The input is first normalized with Pixel-norm [17]. Each fully connected layer is followed by a Leaky-ReLU activation and Pixel-norm; we omit the Leaky-ReLU activation for the last layer. We applied equalized learning rate [17] to each fully connected layer. The mapping network architecture can be seen in Table 4.

Decoder. Our decoder contains six styled convolution blocks [18], where we use bilinear upsampling in the last two blocks to return to the original image resolution. To reduce droplet artifacts, we replace each convolution + AdaIN [14] combination with the modulated convolution block proposed in StyleGAN2 [19], omitting the noise input. Each modulated convolution layer is followed by a Leaky-ReLU activation and Pixel-norm, which we found to further reduce the droplet artifacts. The last layer is a convolution that maps the final features of each pixel to RGB values. Equalized learning rate is used in all convolution blocks. Details of the decoder architecture are summarized in Table 5.

Age encoder. The age encoder begins with a convolution that takes the input image. It is followed by four 2-strided convolution layers that downsample the feature maps, and a final convolution that produces a feature map whose channel count matches the length of the age code vector. Global average pooling is then applied to generate the age code vector. Each convolution layer, except for the last one, has a Leaky-ReLU activation. We do not use normalization in the age encoder. Equalized learning rate [17] was applied to each convolution layer. The full age encoder architecture can be found in Table 6.
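The mapping network described above can be sketched as follows (a numpy sketch under assumed, illustrative dimensions; the Pixel-norm follows the per-feature-vector normalization of Karras et al. [17], and the 300-element toy input is a stand-in for the actual age code length):

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    # Normalize each feature vector to unit average squared magnitude.
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def mapping_network(age_code, weights):
    """8-layer MLP sketch: Pixel-norm on the input, then Linear + LReLU +
    Pixel-norm for every layer except the last, which drops the LReLU."""
    h = pixel_norm(np.asarray(age_code, dtype=float))
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:   # no Leaky-ReLU on the final layer
            h = leaky_relu(h)
        h = pixel_norm(h)
    return h

rng = np.random.default_rng(0)
dims = [300] + [256] * 8           # toy input size; 256-dim age latent output
weights = [rng.normal(scale=0.1, size=(dims[i], dims[i + 1])) for i in range(8)]
w_age = mapping_network(rng.normal(size=300), weights)  # 256-element age latent code
```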

Layer        Stride   Act.    Norm
Input        –        –       –
Conv.        1        ReLU    Pixel
Conv.        2        ReLU    Pixel
Conv.        2        ReLU    Pixel
Res. Block   1        ReLU    Pixel
Res. Block   1        ReLU    Pixel
Res. Block   1        ReLU    Pixel
Res. Block   1        ReLU    Pixel
Table 3: Identity encoder architecture.
Layer      Act.     Norm
Age code   –        Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     LReLU    Pixel
Linear     –        Pixel
Table 4: Mapping network architecture.

Layer                Act.     Norm
Identity features    –        –
Styled Conv.         LReLU    Pixel
Styled Conv.         LReLU    Pixel
Styled Conv.         LReLU    Pixel
Styled Conv.         LReLU    Pixel
Styled Conv.         LReLU    Pixel
Upsample             –        –
Styled Conv.         LReLU    Pixel
Upsample             –        –
Conv.                Tanh     –
Table 5: Decoder architecture.

Layer             Stride   Act.
Input             –        –
Conv.             1        LReLU
Conv.             2        LReLU
Conv.             2        LReLU
Conv.             2        LReLU
Conv.             2        LReLU
Conv.             1        –
Global Pooling    –        –
Table 6: Age encoder architecture.
Layer              Act.
Input              –
Conv.              LReLU
Conv.              LReLU
Conv.              LReLU
Downsample         –
(the Conv. + Conv. + Downsample block above repeats 6 times in total)
Minibatch Stdev.   –
Conv.              LReLU
Conv.              LReLU
Table 7: Discriminator architecture.

Discriminator. We use the StyleGAN discriminator [18] architecture with minibatch standard deviation [17]. The first layer is a convolution layer that generates a 64-channel feature map for each input pixel. This is followed by twelve convolution layers [18], and we downsample the feature map after every other layer (6 times overall). After that we apply minibatch discrimination followed by two final convolution blocks, the last with as many output channels as classes, in order to discriminate multiple classes as suggested by Liu et al. [27]. Leaky-ReLU activations and equalized learning rate are used in all convolution layers. We do not use normalization in the discriminator. Table 7 shows the detailed discriminator architecture.

(a)

Age Class   Males   Females
0–2          1237       804
3–6          1631      2169
7–9          1005      1234
15–19         930      1957
30–39        5512      5848
50–69        3917      2054

(b)
Figure 11: FFHQ-Aging dataset details. Left: age distributions for males and females for the raw dataset. Right: number of training images for each anchor age class after pruning. The majority of training classes contain more than 1,000 images, which we found sufficient for training our model.

Appendix 0.B FFHQ-Aging Dataset Details

Figure 11(a) shows the age distribution of images in the raw FFHQ-Aging dataset for males and females. Figure 11(b) shows the number of training images for each age class after the data cleaning process described in Sec. 4 of the main paper. To align the images, we use the same alignment technique as Karras et al. [17] (see Figure 8e in their paper), which was also used to align the original FFHQ dataset. We mirror-pad the image boundaries and then blur them. We then use the eye and mouth landmark locations to select an oriented crop box: given the landmarks e_l, e_r for the left and right eyes and m_l, m_r for the left and right corners of the mouth, the box orientation is derived (after vector normalization, denoted "Normalize") from the eye-to-eye and eye-to-mouth vectors, the box size s is proportional to the distances between these landmarks, and the box center c lies between the eye and mouth centers. To make sure we obtain the full head, including the neck, we take slightly larger crops than the original FFHQ dataset: our scale factor for s is 4.4, as opposed to the 3.6 used originally.
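The landmark-driven crop can be sketched roughly as follows (a hypothetical numpy sketch: only the 4.4 scale factor comes from the paper, while the exact orientation, size, and center formulas here are illustrative approximations of FFHQ-style alignment):

```python
import numpy as np

def crop_box(e_l, e_r, m_l, m_r, scale=4.4):
    """Hypothetical oriented-crop computation from eye/mouth landmarks.

    The box is oriented along the eye-to-eye direction, sized relative to the
    inter-landmark distances (scale=4.4 here vs. 3.6 in the original FFHQ
    crops), and centered between the eyes and mouth so the neck stays inside.
    """
    e_l, e_r, m_l, m_r = (np.asarray(p, dtype=float) for p in (e_l, e_r, m_l, m_r))
    e_avg = (e_l + e_r) / 2                     # eye center
    m_avg = (m_l + m_r) / 2                     # mouth center
    eye_to_eye = e_r - e_l
    eye_to_mouth = m_avg - e_avg
    x = eye_to_eye / np.linalg.norm(eye_to_eye)             # box orientation
    s = scale * max(np.linalg.norm(eye_to_eye),             # box size
                    np.linalg.norm(eye_to_mouth))
    c = e_avg + 0.5 * eye_to_mouth                          # box center (illustrative)
    return c, x, s
```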

Appendix 0.C Continuous Age Transformations

We generated continuous lifespan age transformations by interpolating 24 output images between each pair of neighboring age class anchors. See the videos on the project website and Figures 17–18 for the results.

Appendix 0.D Ablation Studies

We performed two ablation studies to support our main claims. In the first study, we show the importance of using multiple age classes as anchors in order to learn a latent space that allows continuous age transformations. We trained two additional models, one with age classes 0–2 & 50–69 as the only anchors and one with age classes 0–2, 15–19 & 50–69 as the anchors. We then generated full lifespan transformations of 11 images from each model by interpolating missing anchor classes when needed, along with interpolating one output image between each two base classes. Figure 12 shows that additional anchor classes are crucial for creating reliable and plausible lifespan age transformations.

Figure 12: Anchor classes ablation study. We show latent interpolation on models trained with 2 anchor classes (top row), 3 anchor classes (middle row) and 6 anchor classes (bottom row). Increasing the number of anchor classes greatly improves the framework’s ability to generate high quality age transformations over the full lifespan.

In the second study, we examined the importance of our design choices in constructing the input age code vector space. We show the connection between the structure of the age code and the ability of the age latent space to span all possible ages. Specifically, we show the importance of using multiple vector elements to represent each age class, as well as the importance of adding noise to the one-hot input signal. We trained two additional models on all 6 anchor classes: one with 50 elements per age class but no added noise, and one with a single element per age class and no added noise. Figure 13 shows that although the anchor classes are always well represented within the latent space, both the number of elements per age class and the added noise are important for ensuring the continuity of the latent space and high image quality.
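The age code construction studied here can be sketched as follows (numpy sketch; the noise standard deviation is an assumed, illustrative value, not the paper's):

```python
import numpy as np

def age_code(target_class, n_classes=6, elems_per_class=50, noise_std=0.2, rng=None):
    """Build a one-hot-style age code: each class owns a block of
    `elems_per_class` elements; the target block is set to 1, and Gaussian
    noise is added so the space between anchor classes gets populated
    during training."""
    rng = rng or np.random.default_rng()
    code = np.zeros(n_classes * elems_per_class)
    start = target_class * elems_per_class
    code[start:start + elems_per_class] = 1.0     # activate the target block
    return code + rng.normal(scale=noise_std, size=code.shape)
```

With `elems_per_class=1` and `noise_std=0.0` this degenerates to the plain one-hot variant from the ablation's top row.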

Figure 13: Age class representation ablation study. We show latent interpolation on models trained with a one-hot representation with 1 element per age class (top row), a one-hot representation with 50 elements per age class (middle row) and a one-hot representation with 50 elements per age class and added Gaussian noise (bottom row). Expanding the number of elements representing each age class allows representation of ages outside the anchor classes. Adding noise further improves the image quality of interpolated outputs (zoom in for details).

Appendix 0.E Generalization Ability

To test our framework's ability to generalize, we carried out two experiments. In the first experiment, we tested the generalization ability of the identity feature space by feeding the network images from the 4 remaining untrained classes in FFHQ-Aging: 10–14, 20–29, 40–49 & 70+. Figure 14 demonstrates our method’s ability to produce high-quality results for unseen face structures from unseen age classes.

Figure 14: Results on inputs from untrained age classes. Note that the masking artifacts are a result of the segmentation process, not of our method.

In the second experiment, we tested the generalization ability of the age latent space. We produced outputs for the 3–6 age class by interpolating it as the midpoint of the 0–2 and 7–9 age classes: we fed the decoder the averaged latent age vector and compared the results with the outputs of the trained 3–6 class. As can be seen in Figure 15, the similarity between the trained and interpolated results suggests that the learned age latent space is approximately linear w.r.t. the target age input, which contributes to the framework's ability to generate results outside the trained age classes.
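This midpoint comparison amounts to a simple linearity check on the age latent codes (a numpy sketch with stand-in codes; the function names are illustrative):

```python
import numpy as np

def midpoint_code(z_a, z_c):
    """Interpolated code for a class lying halfway between two anchors,
    e.g. approximating 3-6 from the 0-2 and 7-9 age latent codes."""
    return 0.5 * (np.asarray(z_a, dtype=float) + np.asarray(z_c, dtype=float))

def linearity_gap(z_a, z_b, z_c):
    """Distance of the trained middle anchor's code z_b from the straight
    line between its neighbors; small values indicate a quasi-linear space."""
    return float(np.linalg.norm(np.asarray(z_b, dtype=float) - midpoint_code(z_a, z_c)))
```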

Appendix 0.F User Studies

Figure 15: Linearity of age latent space. We compare the network outputs for the trained 3–6 class vs. the outputs for the 3–6 class interpolated as the midpoint between the 0–2 and 7–9 age latent vectors. The resemblance of the interpolated results to the trained results suggests that age is spanned quasi-linearly in the latent space.

Figure 16: User study interface. We asked 3 different questions to assess age, identity and overall quality.

Age range:        0–2         3–6         7–9         15–19       30–39       50–69       All
                 [44]  Ours  [44]  Ours  [44]  Ours  [44]  Ours  [44]  Ours  [44]  Ours  [44]  Ours
Same identity     14    20    19    23    24    24    20    25    24    22    19    23   120   137
Age difference   1.0   3.4   2.1   3.2   4.5   5.1   6.4  10.3   8.2   7.4  13.3   6.5   5.9   6.0
Overall better     2    23     1    24     1    23     2    23     1    24     0    25     7   142
Table 8: User study results vs. IPCGAN [44] retrained on our dataset. Our results are better at preserving subject identity, and the two methods are extremely close in age accuracy. Most importantly, when asked which result is better overall, users preferred our results in 95% of the cases (142 out of 150, compared to 7 for IPCGAN and 1 indecisive).
Age range:        0–2         3–6         7–9         15–19       30–39       50–69       All
                 [26]  Ours  [26]  Ours  [26]  Ours  [26]  Ours  [26]  Ours  [26]  Ours  [26]  Ours
Same identity     16    22    24    25    25    25    25    24    25    24    25    24   140   144
Age difference   4.0   4.4  15.7   6.2  19.8   9.5  17.5  12.3  13.3   7.0  23.1   7.7  15.6   7.8
Overall better     5    20     6    18     3    20     3    20     3    21     1    24    21   123
Table 9: User study results vs. STGAN [26] retrained on our dataset. Our results are better at preserving subject identity and have better age accuracy. Most importantly, when asked which result is better overall, users preferred our results in 82% of the cases (123 out of 150, compared to 21 for STGAN and 6 indecisive).

The user interface of our user studies is presented in Figure 16. The same UI was used both for the studies in the main paper and in this supplemental document. In addition to the main paper user studies, we also wanted to verify that our results are not solely due to a better dataset. To this end, we retrained IPCGAN [44] and STGAN [26] on our data. In the following studies, we evaluate the results of 25 randomly selected photos for each of the 6 age classes, repeating each question 3 times, for a total of 2,250 individual answers per user study. Note that in these studies we can compare all 6 age groups, whereas in our other user studies we were limited by the choice to use the authors’ pre-trained models, which were not available for all ages. User study results are in Tables 8 and 9. Indeed, we see that even when retrained on our data, there is a significant performance gap between our results and previous works [26, 44]. Our results are better at identity preservation, and either better or on par in age accuracy. As explained in the main text, since overall quality is determined by both these factors and others such as image quality, we asked users which result is better overall. Our results were selected as better in 82% (vs. STGAN) and 95% (vs. IPCGAN) of the cases.

Figure 17: Full lifespan transformation. Also see supplemental videos.
Figure 18: Full lifespan transformation. Also see supplemental videos.