The human brain can verify kinship from photos by analyzing discriminative patterns of facial parts, a capability that testifies to how fascinatingly complex the brain is. Recently, a large number of methods have been proposed to achieve kinship verification by computers, since learning-based deep models have shown an impressive ability to extract these latent patterns automatically from faces [22, 18, 8]. In particular, such methods surpass human performance on various identification problems [17, 22]. Ultimately, the outputs of these models can be used to identify missing people, to support child/parent search, and to track statistics for recommendation services.
However, the reverse problem, i.e., predicting possible child faces by analyzing photos of the parents, has not attracted as much attention in the literature as the original recognition and verification tasks. To the best of our knowledge, interest in this problem remains limited, even though several promising methods exist for synthesizing human faces from large data collections with generative deep models [12, 23, 3].
In general, the objective of this problem (i.e., synthesizing a kinship face) is, given a photo of a parent (either mother or father), to synthesize the most probable faces of a child by exploiting latent facial features exhibited on the parent's face. However, the robustness of the models, especially deep models, strongly depends on the number of training samples and the diversity of the datasets. Moreover, currently available kinship verification datasets are quite small, so models must be regularized accordingly to achieve perceptually satisfying results.
In this paper, we propose a fully convolutional network (FCN) that transforms a parent face into a latent space with the responses of encoder layers and iteratively decodes these responses to reconstruct a possible kinship face. For this purpose, we present three novel contributions to the standard FCN for kinship face synthesis: 1) We use a pre-trained network for the encoder layers, optimized for face recognition on a large-scale dataset. This allows us to extract more robust hidden features even when only a limited number of faces is available for synthesis. 2) Although the encoder layers provide several advantages, such as sparsity for person identification, the decoder layers can easily overfit to the training data due to the large dimensionality of the hidden features, which makes it harder to generalize to diverse face scenarios. Hence, we leverage an adversarial loss with large-scale unsupervised data to mitigate overfitting through its generalization capability. 3) Lastly, we employ a cycle-domain transformation (i.e., transforming from parent-to-child as well as child-to-parent), which leads to more stable results.
The paper is organized as follows. First, we review the literature on face synthesis and kinship verification, since these are the two major foundations of our problem. Then, the details of the proposed method for kinship synthesis are presented. Lastly, experimental results are reported and final remarks are given.
2 Related Work
In this section, face synthesis and kinship verification are reviewed in detail, since these are the two critical ingredients of effective kinship synthesis.
Face Synthesis: Early face hallucination works particularly enhance common characteristics of faces such as the eyes, mouth, and symmetry. However, their main limitation is that the solution strictly relies on the data (i.e., no generalization capacity), and natural image manifold learning (i.e., memorizing) can degenerate into merely transforming image patches from low to high resolution by averaging all possible solutions. Autoencoder-based (AE) methods share similar drawbacks. One AE-based approach aims to generate kinship faces by modeling facial dynamics (i.e., expressions) along with visual appearance, and is thus able to transfer personal expressions to prospective children.
Variational autoencoders (VAE) provide a probabilistic way of synthesizing images by sampling random latent variables conditioned on the input at the encoder layers. This practically improves the generalization of the models and attains diverse results for various image synthesis problems, including faces. However, VAEs still fail to match the complexity of such problems (i.e., they underestimate it with fixed-size parameters, namely mean and variance values), so overly smoothed results are obtained.
Recently, generative adversarial networks (GAN) [12, 23, 3] have yielded perceptually impressive results for image generation. In particular, face synthesis can be achieved in an unsupervised manner by incorporating various poses, expressions, genders, skin colors, and hair types. Moreover, GANs allow users to transform images into different domains by simply conditioning the solution [14, 5]. The superiority of GAN over VAE/AE has also been discussed in the literature: GAN preserves the fine details of the solution, while VAE/AE approximates it only roughly.
Kinship Verification: Kinship verification/recognition was initially based on hand-crafted shallow facial features incorporating skin color and/or higher-order gradient patterns extracted from facial photos [25, 26]. The use of videos instead of single images has also been explored; the authors assert that spatio-temporal appearance, which implicitly captures facial expressions, can be useful for verifying faces.
Recently, deep models have attained state-of-the-art performance for this problem [27, 18, 8, 21]. In general, their solutions transfer trainable parameters from an available face model and fine-tune them on kinship data due to the scarcity of samples. The feature space is frequently learned with a triplet loss, similar to the face recognition problem.
3 Kinship Face Synthesis
For a given parent photo (i.e., after face detection; the Dlib library is used: http://dlib.net/), our objective is to synthesize a child face by exploiting the responses of a Generator network that consists of fully convolutional encoder and decoder layers. To flexibly define the gender of the prospective child face (i.e., male or female), the Generator is conditioned at the decoder layers by a gender label. This condition is also reinforced by an auxiliary classifier for more stable results.
Generator Network: The Generator network consists of fully convolutional layers in an autoencoder (AE) architecture. It aims to extract representative latent abstractions from an input image and to generate a face from these abstractions. To increase the information flow from the encoder to the decoder layers, we employ skip-connections. In practice, these connections improve the perceptual quality and stability of the generated faces.
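The skip-connection wiring can be sketched as follows. This is a toy illustration only: the tensor shapes are hypothetical, nearest-neighbour upsampling stands in for transposed convolutions, and a channel mean stands in for the learned convolutions of the real decoder.

```python
import numpy as np

def upsample(f):
    """Nearest-neighbour x2 upsampling (stand-in for a transposed conv)."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def decode_with_skips(bottleneck, skips):
    """Upsample, then concatenate the matching encoder map along the
    channel axis; a real decoder would convolve after each concat."""
    h = bottleneck
    for skip in reversed(skips):
        h = np.concatenate([upsample(h), skip], axis=0)  # (C, H, W) layout
        h = h.mean(axis=0, keepdims=True)  # toy stand-in for channel mixing
    return h

# encoder maps at two resolutions plus a bottleneck (hypothetical shapes)
skips = [np.random.rand(8, 32, 32), np.random.rand(16, 16, 16)]
bottleneck = np.random.rand(32, 8, 8)
out = decode_with_skips(bottleneck, skips)  # shape (1, 32, 32)
```

The key point is that each decoder stage sees both the upsampled coarse features and the higher-resolution encoder response of the same spatial size.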
Furthermore, we follow similar architectural observations reported for generative networks (i.e., a normalization layer with leaky-ReLU activations).
In addition, the target face is conditioned by a label at the decoder layers, which enables us to define the gender of the generated kinship face as either male or female. For better perceptual distinction, we also add a penalty term to the loss function, which is explained in detail in the following section.
Loss Functions: For parameter optimization, we first adopt the assumption of matching high-level activations of a pre-trained network between the original and generated faces, instead of pixel differences [11, 7] (i.e., the well-known content loss). This loss allows us to preserve the latent content exhibited in faces (i.e., it transfers high-level distinct facial features from receptive fields larger than individual pixels). In addition, it provides robustness for cases with imperfectly registered faces and/or severe pose variations; further discussion of this assumption can be found in the literature.
The layer activations of VGGFace-16 (i.e., conv4_3 and conv5_3) are used, and the loss between a training pair of original and generated faces is minimized by matching these activations.
Note that by incorporating only higher-layer activations in the loss function, we opt to preserve global facial-part similarities rather than fine details, since synthesizing fine details purely by analyzing images is, as expected, quite difficult (i.e., it involves peripheral dependencies).
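As a minimal sketch of this activation-matching content loss (with random arrays standing in for the VGGFace-16 conv4_3/conv5_3 responses; the shapes are assumptions for illustration):

```python
import numpy as np

def content_loss(acts_real, acts_gen):
    """Sum of mean squared differences between the selected layer
    activations of a fixed feature network for the two faces."""
    return sum(np.mean((a - b) ** 2) for a, b in zip(acts_real, acts_gen))

# hypothetical activations for two high-level layers of a VGG-style net
rng = np.random.default_rng(1)
real = [rng.random((64, 28, 28)), rng.random((64, 14, 14))]
gen = [a + 0.1 for a in real]   # a "generated" face whose features drift by 0.1
loss = content_loss(real, gen)
```

Because the comparison happens in feature space rather than pixel space, small misalignments between the faces change the loss far less than a per-pixel difference would.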
Moreover, we introduce an auxiliary classifier to condition the gender of faces. This network categorizes an input face as male or female with a softmax cross-entropy loss and propagates the error to the Generative and Discriminative layers.
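The gender head's softmax cross-entropy can be illustrated as follows; the two-class setup and the logit values are purely illustrative:

```python
import numpy as np

def softmax_xent(logits, label):
    """Softmax cross-entropy for a single sample; `label` is the
    target class index (e.g., 0 = male, 1 = female)."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

logits = np.array([2.0, -1.0])                # illustrative gender logits
right = softmax_xent(logits, 0)               # small loss: class 0 is favoured
wrong = softmax_xent(logits, 1)               # larger loss for the other class
```

During training, this penalty is added to the generator's objective so that the decoded face is pushed toward the requested gender label.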
Face Encoder Layers: As mentioned, the main difficulty of synthesizing a kinship face is that the number of training samples is limited compared to other facial problems/datasets in the literature [18, 8], while learning methods need sufficiently large and diverse sample sets to obtain reliable models. These issues weaken the generalization power of the methods by causing overfitting to the limited samples.
Therefore, instead of training the encoder layers from scratch, we replace their parameters with those of a pre-trained model (i.e., VGGFace-16) learned on a large and diverse face dataset. In practice, this enables the extraction of more discriminative latent features from faces and adds robustness against noise and pose variations. These parameters are not fine-tuned during the learning stage.
Note that using a pre-trained model has a further advantage: the generated kinship faces automatically transfer the facial expressions of the parents from their photos, so no additional loss function or conditional labels are needed for this purpose (see the experimental results).
Adversarial Loss: The Generator network computes high-dimensional hidden abstractions of the data, and the total number of operations is multiplied when skip-connections are utilized. However, even if rich representations are extracted, the sparsity of these abstractions disturbs the stability and convergence of the parameters toward an optimum solution on small image sets. It has been shown that mapping this high dimensionality to a lower space, in other words deliberately degrading the sparsity with a reduction method (e.g., PCA), can markedly improve the performance of deep convolutional methods on various transfer learning problems. Based on this observation, we improve the generalization capacity and stability of the Generator network with an adversarial scheme trained on a different and larger face dataset. Thus, the GAN replaces the reduction method and acts as a degradation function to obtain indistinguishable faces.
where the superscripts of the losses indicate which parameter sets are updated (Generator (G) or Discriminator (D)). The discriminator is structured as an autoencoder network and a reconstruction-based loss is utilized. Lastly, the balance term is a trainable parameter, and the diversity ratio is set to 0.7 to diversify the generated faces. Note that the Generator is updated by considering the content loss and the generative loss simultaneously.
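Since the discriminator is an autoencoder with a reconstruction loss and a trainable balance term, the update resembles the BEGAN scheme [2]. A toy sketch, assuming an L1 reconstruction error, the paper's value of 0.7 for the diversity ratio, and an illustrative learning rate lambda_k for the balance term:

```python
import numpy as np

def recon_loss(D, x):
    """L1 reconstruction error of the autoencoder discriminator."""
    return np.mean(np.abs(D(x) - x))

def began_step(D, x_real, x_fake, k_t, gamma=0.7, lambda_k=0.001):
    """One BEGAN-style update: discriminator and generator losses,
    plus the update of the trainable balance term k_t."""
    L_real = recon_loss(D, x_real)
    L_fake = recon_loss(D, x_fake)
    loss_D = L_real - k_t * L_fake
    loss_G = L_fake
    k_t = float(np.clip(k_t + lambda_k * (gamma * L_real - L_fake), 0.0, 1.0))
    return loss_D, loss_G, k_t

# a toy "discriminator" that reconstructs with a constant bias of 0.05
D = lambda x: x + 0.05
x_real = np.random.rand(4, 4)
x_fake = x_real + 0.5
loss_D, loss_G, k = began_step(D, x_real, x_fake, k_t=0.0)
```

The balance term grows when real faces are reconstructed much better than fakes, putting more discriminator pressure on the generated samples.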
Cycle-Consistency: Lastly, we employ a cycle-domain transformation to achieve more stable results. In this way, a consistent facial transformation is obtained by linking the generated kinship face back to his/her parent face. For parameter optimization, we add an additional cost term similar to Eq. 1.
The child-to-parent generator shares its encoder parameters with the parent-to-child generator, while their decoder parameters differ.
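The round-trip penalty can be sketched with stand-in generator functions; the identity pair and the lossy pair below are purely illustrative, not the trained networks:

```python
import numpy as np

def cycle_loss(G_pc, G_cp, parent):
    """L1 penalty on the round trip parent -> child -> parent."""
    return np.mean(np.abs(G_cp(G_pc(parent)) - parent))

parent = np.random.rand(16, 16) + 0.1        # a stand-in "parent image"
perfect = cycle_loss(lambda x: x, lambda x: x, parent)        # identity pair
lossy = cycle_loss(lambda x: 0.5 * x, lambda x: 1.8 * x, parent)
```

A pair of generators whose composition returns the input incurs no penalty, while any drift in the round trip is charged to both branches.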
Full Method: Finally, the overall flow of the proposed method is illustrated in Fig. 1, and the full objective functions for the Generative and Discriminative layers combine the loss terms above.
The weighting coefficients control the influence of the individual loss functions. Empirically, they are set to 10, 0.1, 0.001 and 0.1, taking parameter overfitting on small datasets into account.
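The weighted combination can be sketched as below; the per-batch loss values and the pairing of each weight with a specific term are assumptions for illustration only:

```python
# hypothetical per-term losses for one batch
L_content, L_adv_g, L_aux, L_cycle = 0.8, 1.2, 0.3, 0.5

# the paper's empirical weights: 10, 0.1, 0.001 and 0.1
# (which weight pairs with which term is assumed here for illustration)
total = 10.0 * L_content + 0.1 * L_cycle + 0.001 * L_adv_g + 0.1 * L_aux
```

With the content weight an order of magnitude larger than the rest, the optimization is dominated by activation matching, while the adversarial, auxiliary, and cycle terms act as regularizers.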
4 Experiments

Datasets: For kinship synthesis, we use the dataset released for the Large-Scale Kinship Recognition Data Challenge, known as Families in the Wild (FIW). This dataset comprises approximately 600K face pairs from 300 families in the training set. Since the labels of the test set are not available, we evaluate the proposed method on the validation set (20K randomly selected pairs). We use only the father-son, father-daughter, mother-son and mother-daughter relations. In addition, we utilize the CelebA dataset, which contains 200K celebrity images annotated with 40 attributes, to regularize the network; in this paper we exploit only the gender attribute (i.e., male or female).
| Method | Top-100 NN accuracy |
| KinshipGAN (w/o Deep Face) | 0.048 |
To evaluate the performance, we use the nearest-neighbor (NN) search accuracy along with qualitative results; this metric counts the correct face identities within the top-100 of the ranked list returned for a synthesized face given to the system. The fc6 layer activations of VGGFace-16 are used to represent the faces, and the cosine distance is computed between them.
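The top-100 retrieval check can be implemented as below; the feature dimensionality and the gallery are synthetic stand-ins for the fc6 descriptors:

```python
import numpy as np

def cosine_dist(query, gallery):
    """Cosine distance between one query vector and each gallery row."""
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return 1.0 - G @ q

def topk_hit(query, gallery, gallery_ids, true_id, k=100):
    """1 if the ground-truth identity is among the k nearest faces."""
    order = np.argsort(cosine_dist(query, gallery))
    return int(true_id in set(gallery_ids[order[:k]].tolist()))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 64))       # stand-in fc6 descriptors
ids = np.arange(500)
query = gallery[42] + 0.01 * rng.normal(size=64)  # near-duplicate of id 42
hit = topk_hit(query, gallery, ids, true_id=42, k=100)
```

Averaging the hit indicator over all synthesized faces gives the reported retrieval accuracy.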
Implementation Details: The parameters are optimized with the Adam optimizer.
4.1 Experimental Results
Table 1 shows the retrieval accuracy of kinship faces generated by our method on the real validation face dataset. Even though the method does not obtain high retrieval performance (which is not its main scope), the results clearly illustrate the effect of each contribution presented in this paper on kinship face synthesis. In particular, using a pre-trained face model for the encoder introduces noticeable discriminative power. Furthermore, setting the coefficient that determines the influence of face similarities to higher values (e.g., 1.0) can adversely affect the performance and yield perceptually worse results.
Fig. 2 illustrates the outputs of the proposed method for the father-daughter, father-son, mother-daughter and mother-son relations (additional results can be found in the Appendix); the opposite gender for each face is also shown. These results indicate that the proposed method yields perceptually promising kinship faces. In particular, thanks to the cycle-domain transformation and the pre-trained network, the method preserves the facial expression, pose, etc. exhibited in the parent photos.
5 Conclusion

In this paper, we proposed a kinship face generator network that yields promising results despite the scarcity of kinship samples. We presented three main contributions. First, to extract robust facial features, we exploit a pre-trained deep face model in the network. Second, an adversarial scheme is used to improve the generalization capacity of the network and to prevent overfitting. Lastly, a cycle-domain transformation is utilized to enforce consistency in the parent-to-child translation. The experimental results show that the proposed method achieves promising perceptual results.
-  S. Baker and T. Kanade, “Hallucinating faces.” IEEE FG, 2000.
-  D. Berthelot, T. Schumm, and L. Metz, “Began: Boundary equilibrium generative adversarial networks.” arXiv preprint arXiv:1703.10717, 2017.
-  Q. Chen and V. Koltun, “Photographic image synthesis with cascaded refinement networks.” IEEE ICCV, 2017.
-  Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “Stargan: Unified generative adversarial networks for multi-domain image-to-image translation.” arXiv preprint arXiv:1711.09020, 2017.
-  H. Dibeklioglu, A. A. Salah, and T. Gevers, “Like father, like son: Facial expression dynamics for kinship verification.” IEEE ICCV, 2013.
-  A. Dosovitskiy and T. Brox, “Generating images with perceptual similarity metrics based on deep networks.” NIPS, 2016.
-  Q. Duan and L. Zhang, “Kinship verification with deep convolutional neural networks.” ACM MM Workshops, 2017.
-  V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville, “Adversarially learned inference.” arXiv preprint arXiv:1606.00704, 2016.
-  I. O. Ertugrul and H. Dibeklioglu, “What will your future child look like? modeling and synthesis of hereditary patterns of facial dynamics.” IEEE FG, 2017.
-  L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks.” IEEE CVPR, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets.” NIPS, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition.” IEEE CVPR, 2016.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980, 2014.
-  D. P. Kingma and M. Welling, “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114, 2013.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks.” NIPS, 2012.
-  Y. Li, J. Zeng, J. Zhang, A. Dai, M. Kan, S. Shan, and X. Chen, “Kinnet: Fine-to-coarse deep metric learning for kinship verification.” ACM MM Workshops, 2017.
-  C. Liu, H.-Y. Shum, and C.-S. Zhang, “A two-step approach to hallucinating faces: global parametric model and local nonparametric model.” IEEE CVPR, 2001.
-  Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild.” IEEE ICCV, 2015.
-  J. Lu, J. Hu, and Y.-P. Tan, “Discriminative deep metric learning for face and kinship verification.” 2017.
-  O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition.” BMVC, 2015.
-  A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks.” arXiv preprint arXiv:1511.06434, 2015.
-  J. P. Robinson, M. Shao, Y. Wu, H. Liu, T. Gillis, and Y. Fu, “Visual kinship recognition of families in the wild,” TPAMI, 2018.
-  X. Wang and C. Kambhamettu, “Leveraging appearance and geometry for kinship verification.” 2014.
-  H. Yan, J. Lu, W. Deng, and X. Zhou, “Discriminative multimetric learning for kinship verification.” 2014.
-  K. Zhang, Y. Huang, C. Song, H. Wu, and L. Wang, “Kinship verification with deep convolutional neural networks.” BMVC, 2015.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks.” IEEE ICCV, 2017.