Image translation, which aims to translate images from one domain to another, has received growing attention in computer vision. In this paper, we take face editing as a concrete example of image translation. As shown in Figure 1, the facial age, makeup, and expression of the input face are altered. The input and output images are also called the source and target images, respectively, indicating that they come from the source and target domains.
Multi-domain translation refers to performing image translation among multiple domains. For example, in the task of face aging with multiple age groups, each age group can be viewed as a domain. Given a face from one age group, we aim to translate it to any other age group using a single translator. One of the representative works for multi-domain image-to-image translation is StarGAN. Though StarGAN is effective, its translated results still suffer from the so-called phenomenon of source retaining, i.e., the output image retains characteristics of the source domain. See the first row of Figure 1 for an exemplary case of face aging: when translating an adult female face to a child age group, the result of StarGAN still looks like an adult. In the makeup editing example shown in the second row of Figure 1, StarGAN fails to completely eliminate the eye shadows when removing makeup. A similar observation can be made for expression editing, as shown in the third row of Figure 1: the results of StarGAN show visible teeth shadows around the mouth area.
A well-trained domain classifier in the discriminator of StarGAN is still easily deceived by unqualified translated samples, even though the domain classifier has good recognition accuracy on real data. As shown in the first row of Figure 2, the discriminator correctly judges an adult face to belong to the adult age group. When the adult face is translated to a child age group, the translated face heavily retains adult characteristics, e.g., beard and expression wrinkles (see the second row of Figure 2). However, the discriminator judges it to belong to the target age group with high confidence. Such synthetic samples with incompatible characteristics are abundant. We argue that the traditional classifier is not sensitive to combinations of incompatible characteristics and does not provide good guidance to the generator for synthesizing conditional samples. Therefore, a classifier that is more difficult to deceive is needed, one with high sensitivity to incompatible domain characteristics.
To this end, we propose Untraceable GAN (UGAN) to tackle source retaining explicitly. Take the face aging task as an example, and suppose we aim to translate a face from an adult source domain to a child target domain. The images in the source domain are characterized by beard and wrinkles, while those in the target domain have smooth skin and a round face. Both domains also share some common characteristics, like a smile, which are not altered in the face aging process. The goal of UGAN is to erase all the source-only characteristics and inject certain target-only ones.
To achieve this goal, the discriminator judges which domain the synthesized image is translated from. When training the discriminator, the real images are defined as translated from their ground-truth age domains, while the synthesized images are translated from the age domains of their corresponding input images. If the synthesized result produced by the translator still contains characteristics of the adult domain, such as beard and wrinkles, the discriminator can easily identify that it is translated from the adult age group. In order to fool the discriminator into predicting it as the target domain, the translator needs to erase all the source-only characteristics and inject certain target-only characteristics. In this way, the source domain becomes "untraceable", hence the name of our method. In contrast to UGAN, the discriminator in traditional GANs distinguishes where an image is sampled from; it is easily fooled by a translator that simply generates certain target-only characteristics without explicitly erasing the adult characteristics.
Our contributions can be summarized as follows.
To the best of our knowledge, UGAN is the first work to improve image translation by explicitly erasing the characteristics of the source domain.
The proposed UGAN distinguishes which domain an image is translated from, and can well handle the common source retaining problem in image translation.
Extensive qualitative and quantitative experiments on three face editing tasks, including facial expression editing, face aging, and facial makeup generation, demonstrate the superiority of UGAN over state-of-the-art GANs. The source code will be made publicly available.
2 Related Work
Generative Adversarial Network (GAN) is a popular generative model that uses an adversarial loss to train a generator and a discriminator. The generator produces fake data to fool the discriminator, while the discriminator distinguishes fake data from real data. GANs have been further improved with the coarse-to-fine strategy [7, 19], normalization of the discriminator or generator [1, 13, 27, 3], advanced adversarial losses [25, 29, 2, 40, 18, 33], etc. In this work, UGAN adopts the adversarial loss proposed by WGAN-gp, which approximately minimizes the Wasserstein distance between the synthesized distribution and the real distribution.
Conditional GAN (cGAN) is adopted to model multi-domain image-to-image translation in this work. In a cGAN, the generator takes a condition as one of its inputs to synthesize data corresponding to that condition, while the discriminator determines whether the given joint distribution of data and condition is real or fake. The vanilla cGAN directly concatenates the pair of data and condition, and the discriminator is trained to distinguish the authenticity of the given pair. Conditions are further concatenated with the hidden features of the generator and discriminator in IcGAN. Moreover, an auxiliary classifier is adopted in the discriminator of AC-GAN, and the log-likelihood of the classifier is maximized on both generated and real data. Miyato et al. proposed cGAN with a projection discriminator, which significantly improves the quality of conditional image generation on ImageNet. In our work, condition, domain, and attribute are treated equivalently. For example, in face aging, an age group is viewed as a domain and also as a condition. Inspired by AC-GAN, we adopt an auxiliary classifier in the discriminator. However, using such a discriminator for image-to-image translation brings the phenomenon of source retaining. We therefore change the role of this auxiliary classifier in UGAN and make it classify which source domain the given datum is translated from, instead of which domain the given datum is sampled from as in AC-GAN.
Image-to-Image Translation was first defined in pix2pix, which works well on paired data. Later, the performance of image translation was improved from various aspects. Skip connections from bottom to top with addition or multiplication operations are used to maintain useful input-image information [38, 24, 32]. Some works also adopt cascade training from coarse to fine to synthesize images with higher quality and resolution [35, 5]. Choi et al. improved image translation by using extra relevant data for training. Other effective methods include a buffer of historical fake images, multiple discriminators, 3D techniques, variational sampling [42, 10], etc. Since paired data are often unavailable, a cycle consistency loss is used to constrain the correspondence on unpaired data in CycleGAN, DualGAN, and DiscoGAN. If the translator only models directed translation between two domains, a quadratic number of translators is required among multiple domains; a single conditional translator for multi-domain translation is therefore highly desirable. Our work focuses on unpaired multi-domain translation with such a single translator, and tackles the phenomenon of source retaining that frequently occurs in single-translator multi-domain models.
3 Our method
The framework of UGAN is shown in Figure 3. The input image and the target condition are fed into the translator. The discriminator has two heads: one head, named the authenticity classifier, is responsible for distinguishing whether the input sample is real or fake; the other, called the source classifier, aims to determine which domain the sample is translated from, where the real data are regarded as translated from their own domains. The translator is supervised by three loss functions. First, it is trained to fool the authenticity classifier of the discriminator into classifying the synthetic image as real. Second, it is trained to reconstruct the original input image given the translated image and the source domain label. Third, it is trained to fool the source classifier of the discriminator into believing that the synthetic image is translated from the target domain. As the adversarial training goes on, we expect the translated image to be fully compliant with the target domain and the source domain to become untraceable, with any clues of the source domain erased.
We now introduce the mathematical notation. The discriminator D contains two heads, the authenticity classifier D_a and the source classifier D_s, which share a feature extraction module. (x, y) is a source-domain image-label pair, where x represents the image and y is its domain label. Feeding the image x and the target label y' into the translator T produces the translated image T(x, y'). We use P(x, y) to denote the joint distribution of images and domain labels, and P(x) and P(y) to denote the marginal distributions of images and labels.
3.1 Objective Function
In the following, we elaborate on details of each component of UGAN, including authenticity classifier, cycle consistency, source classifier, and the overall loss function.
Authenticity Classifier: The adversarial loss of WGAN-gp is adopted to constrain the synthetic joint distribution to approximate the real one:

L_adv = E_{(x,y)~P(x,y)}[D_a(x)] - E_{x~P(x), y'~P(y)}[D_a(T(x, y'))] - λ_gp E_{x̂}[(||∇_{x̂} D_a(x̂)||_2 - 1)^2],   (1)

where x' = T(x, y') and x̂ = εx + (1 - ε)x' with ε sampled uniformly from [0, 1]. The third term in Eq. (1) is a gradient penalty term that enforces the discriminator to be a 1-Lipschitz function.
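As an illustrative sketch of the gradient-penalty term (not the actual training code), consider a hypothetical linear critic D_a(x) = w·x, for which the input gradient is w itself, so the penalty has a closed form; the function and variable names here are ours:

```python
import numpy as np

def gradient_penalty_linear(w, real, fake, eps):
    """WGAN-gp penalty (||grad D_a(x_hat)||_2 - 1)^2 for a toy linear critic D_a(x) = w @ x."""
    x_hat = eps * real + (1.0 - eps) * fake  # interpolate between real and synthesized samples
    grad = w                                 # gradient of a linear critic w.r.t. x_hat is w itself
    return float((np.linalg.norm(grad) - 1.0) ** 2)

# A critic with unit-norm weights is exactly 1-Lipschitz, so the penalty vanishes.
w = np.array([0.6, 0.8])
penalty = gradient_penalty_linear(w, np.zeros(2), np.ones(2), 0.3)
```

In the real model the gradient is computed by backpropagation through D_a; the linear critic merely makes the penalty inspectable by hand.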
Cycle Consistency: The input and output are regularized to satisfy the correspondence as follows:

L_cyc = E_{x~P(x), y'~P(y)}[ ||T(T(x, y'), y) - x||_1 ].

For brevity, T(x, y') is abbreviated as x' in the following.
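The cycle constraint can be illustrated with a minimal NumPy sketch, using a toy invertible translator (a hypothetical pixel shift standing in for T; names ours):

```python
import numpy as np

def translate(x, src, dst):
    # Toy stand-in for T: shift intensities by the gap between domain indices.
    return x + (dst - src)

def cycle_loss(x, y, y_hat):
    # L1 distance between the input and its round-trip reconstruction T(T(x, y'), y).
    x_cyc = translate(translate(x, y, y_hat), y_hat, y)
    return float(np.mean(np.abs(x_cyc - x)))

x = np.array([0.1, 0.5, 0.9])
loss = cycle_loss(x, y=2, y_hat=5)
```

Because this toy translator is exactly invertible, the round trip recovers the input and the loss is (numerically) zero; a real translator is only encouraged toward this behavior.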
Source Classifier: To tackle the phenomenon of source retaining, D_s is made to classify which domain an image is translated from. For an image-label pair (x, y), we regard x as translated from domain y to domain y. Since D_s aims to classify where an image is translated from, the real datum x should be classified into category y, meaning x is translated from domain y. The synthetic image x' should also be classified into category y, meaning x' is translated from domain y. The translator T is trained to fool D_s into classifying x' into the target category y'. In this way, T is trained to make the source domain of x' untraceable while target-domain characteristics are injected into x'. The adversarial training is formulated as follows:

min_{D_s}  E_{(x,y)~P(x,y)}[-log D_s(y | x)] + λ_sr E[-log D_s(y | x')],   (4)
min_T  E[-log D_s(y' | x')],   (5)

where λ_sr is the penalty coefficient of source retaining, which is a constant.
It is worth noting that x' should also be injected with certain target-only characteristics. Recall that in Eqs. (4) and (5), T is trained to fool D_s into classifying x' into category y'. However, the characteristics of category y' are not pure: they mix the characteristics of real data sampled from domain y' with those of synthesized data translated from domain y'. In order to accurately synthesize the characteristics of the target domain, the number of categories of D_s is augmented from N to 2N. The first N categories represent real data sampled (translated) from the corresponding domains, and the latter N categories represent fake data translated from the corresponding domains; category N + y means the input datum is fake and is translated from domain y. In addition, the translator is trained to fool D_s into classifying x' into the real category y'. The adversarial training is conducted by optimizing the following:

min_{D_s}  E_{(x,y)~P(x,y)}[-log D_s(y | x)] + λ_sr E[-log D_s(N + y | x')],   (6)
min_T  E[-log D_s(y' | x')].   (7)
In this process, D_s is trained to identify whether x' is a fake image and from which domain it is translated, while T is trained to approach a truly untraceable translator.
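The label assignment of the augmented source classifier can be sketched as follows (a minimal Python illustration of the 2N-category scheme described above; the helper names are ours, not from the paper):

```python
def ds_target(domain, is_real, n_domains):
    # Target category for the source classifier D_s:
    #   real datum from domain y            -> category y
    #   fake datum translated from domain y -> category n_domains + y
    return domain if is_real else n_domains + domain

def t_target(target_domain):
    # The translator is trained to fool D_s into the "real, from target domain" category.
    return target_domain

# Example with 6 domains: a fake image translated from domain 2 gets category 6 + 2 = 8.
n_domains = 6
fake_label = ds_target(2, is_real=False, n_domains=n_domains)
```

The gap between `ds_target(y, False, N)` and `t_target(y')` is exactly what forces the translator to erase source clues rather than merely add target ones.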
Overall Loss Function: T and D are trained by optimizing

min_D  -L_adv + L_src^D,   (8)
min_T  L_adv + λ_cyc L_cyc + L_src^T,   (9)

where the source classification losses L_src^D and L_src^T can be either the N-category version in Eqs. (4) and (5) or the 2N-category version in Eqs. (6) and (7). We denote the version adopting the simpler N-category classifier as UGAN-S, where "S" means "Simple". The training details are illustrated in Algorithm 1. We first train the discriminator with Eq. (8) for several steps, and then train the translator with Eq. (9). The two processes alternate until the maximum number of iterations is reached.
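The alternating schedule of Algorithm 1 can be sketched as a simple loop; the step functions below are placeholders for the actual gradient updates, and the counts are only for illustration:

```python
def train(n_iters, n_critic, d_step, t_step):
    # Alternate: n_critic discriminator updates per single translator update.
    for _ in range(n_iters):
        for _ in range(n_critic):
            d_step()  # one optimization step of the discriminator
        t_step()      # one optimization step of the translator

# Count the updates to verify the schedule.
calls = {"d": 0, "t": 0}
train(5, 3,
      lambda: calls.__setitem__("d", calls["d"] + 1),
      lambda: calls.__setitem__("t", calls["t"] + 1))
```

With 5 outer iterations and 3 critic steps each, the discriminator is updated 15 times and the translator 5 times.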
We now explain why the phenomenon of source retaining happens and how UGAN effectively handles it. We briefly review one of the most representative multi-domain translation models, StarGAN. It also has an authenticity classifier and a cycle consistency constraint. The only difference between StarGAN and UGAN is that the discriminator of StarGAN classifies which domain the real image is sampled from. For StarGAN, the following losses are optimized:

L_cls^D = E_{(x,y)~P(x,y)}[-log D_cls(y | x)],   (10)
L_cls^T = E[-log D_cls(y' | T(x, y'))],   (11)
where Eq. (10) trains the domain classifier D_cls to distinguish different domains, while Eq. (11) trains T to fool D_cls into classifying T(x, y') as domain y'. T(x, y') is thus expected to increasingly fit the target domain. Taking the authenticity classifier and the cycle consistency constraint into consideration, the overall losses of D and T in StarGAN are

min_D  -L_adv + L_cls^D,   (12)
min_T  L_adv + λ_cyc L_cyc + L_cls^T.   (13)
StarGAN suffers from source retaining for the following reasons. First, the first two terms in Eq. (13) may result in lazy editing, because the identity mapping T(x, y') = x is a trivial solution that satisfies both the cycle consistency loss and the adversarial loss. Second, the domain classifier in StarGAN is easily deceived. The classification term is the only one that pushes the translated image towards the target domain; however, its value quickly drops while T(x, y') still does not look like a sample from the target domain and instead resembles an adversarial example of the classifier. Third, the classifier acts like a whitelisting system that injects characteristics of the target domain into T(x, y') without explicitly erasing the source-only characteristics. That is, StarGAN only knows what characteristics the translated images should contain, not what they should not contain.
Compared with StarGAN, our proposed UGAN has its unique source classifier losses. D_s can tell that x' is translated from the source domain until all source-only characteristics have been erased from x'. Analogously, x' will not be judged as translated from the target domain until it is injected with some target-only characteristics. This configuration not only reduces the source-only characteristics but also pushes the translated samples towards the target domain.
4 Experiments
4.1 Datasets
The face aging dataset is collected by C-GAN and contains face images divided into multiple age groups. A portion of the dataset is randomly selected as the test set, and the rest is used for training. All images are aligned and resized to a fixed resolution.
MAKEUP-A5 is a makeup-labeled dataset containing aligned Asian women's faces with five makeup categories: retro, Korean, Japanese, naked, and smoky. Part of the dataset forms the training set and the rest the test set. All images are resized to the same resolution.
CFEE is a database of compound facial expressions with multiple identities. The 22 categories of facial expressions include (A) neutral, (B) happy, (C) sad, (D) fearful, (E) angry, (F) surprised, (G) disgusted, (H) happily surprised, (I) happily disgusted, (J) sadly fearful, (K) sadly angry, (L) sadly surprised, (M) sadly disgusted, (N) fearfully angry, (O) fearfully surprised, (P) fearfully disgusted, (Q) angrily surprised, (R) angrily disgusted, (S) disgustedly surprised, (T) appalled, (U) hatred and (V) awed. We center-crop and resize the images, randomly select a subset of identities as the test set, and use the rest for training.
4.2 Measurements and Baselines
Intra FIDs [15, 28, 8] on each domain, together with their mean, are used for evaluation. FID is a common quantitative measure for generative models, which measures the 2-Wasserstein distance between two distributions P_r and P_g on the features extracted from an InceptionV3 model. It is defined as

FID(P_r, P_g) = ||μ_r - μ_g||_2^2 + Tr(Σ_r + Σ_g - 2(Σ_r Σ_g)^{1/2}),

where P_r and P_g are the distributions of features of real data and synthetic data, and (μ_r, Σ_r) and (μ_g, Σ_g) are the means and covariances of P_r and P_g. The mean intra FID is calculated by averaging the per-domain intra FIDs,

mFID = (1/N) Σ_y FID_y,

where y is the domain label among the N total domains.
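For intuition, the FID formula can be sketched in NumPy. We restrict the example to diagonal covariances so the matrix square root reduces to an element-wise square root; this is a simplification for illustration, not the full computation:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}), specialized to diagonal covariances,
    # where the trace term becomes an element-wise sum over the variance vectors.
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)

# Identical Gaussians give FID 0; shifting one mean by 1 in one coordinate gives FID 1.
same = fid_diagonal(np.zeros(3), np.ones(3), np.zeros(3), np.ones(3))
shifted = fid_diagonal(np.array([1.0, 0.0]), np.ones(2), np.zeros(2), np.ones(2))
```

For full covariance matrices the term (Σ_r Σ_g)^{1/2} requires a matrix square root (e.g., `scipy.linalg.sqrtm`).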
User studies on Amazon Mechanical Turk (AMT): Given an input image, the target-domain images translated by the different methods are displayed to the Turkers, who are asked to choose the best one.
Cosine similarity: For the face aging task, the cosine similarity between the features of real images and those of the corresponding translated images is used to measure the degree of source retaining, where the features are extracted by a ResNet-18 trained on the same training set.
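This metric can be sketched as follows, assuming the ResNet-18 features have already been extracted into row vectors (a NumPy illustration; the function name is ours):

```python
import numpy as np

def mean_cosine_similarity(src_feats, out_feats):
    # Average cosine similarity between each source feature vector (one per row)
    # and the feature vector of its translated counterpart.
    a = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    b = out_feats / np.linalg.norm(out_feats, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

# Identical features give similarity 1.0 (maximal source retaining).
feats = np.array([[1.0, 0.0], [0.0, 2.0]])
sim = mean_cosine_similarity(feats, feats)
```

Lower values indicate that the translation has moved further away from the source features, i.e., source characteristics are erased more thoroughly.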
Baselines: StarGAN has shown better performance than DIAT, CycleGAN, and IcGAN. We therefore select StarGAN as our baseline to verify the superiority of our method. For the face aging task, we additionally compare with two classic GAN-based face aging methods, CAAE and C-GAN (without the transition pattern network).
4.3 Implementation Details
For a fair comparison, the learning rate is fixed, while the network architecture and other hyper-parameters are kept the same as in StarGAN. Specifically, the architecture of the translator is adopted from Johnson et al., composed of residual blocks with stride-2 convolutions and transposed convolutions to downsample and upsample features. For the discriminator, a 2N-dim (N-dim) source classifier is added to PatchGANs for UGAN (UGAN-S). All experiments are optimized by Adam. The discriminator is updated several times per iteration of the translator. All baselines and our methods are trained for the same number of epochs with the same mini-batch size. All images are horizontally flipped with a fixed probability as data augmentation.
4.4 Quantitative Experiments
Given a target domain label, we traverse all images in the test set to generate fake images. All synthetic images of each domain are used to calculate the intra FID and classification accuracy, while a subset of the synthetic images of each domain is randomly sampled for evaluation on AMT.
Face Aging: The comparison of results on the face aging dataset is shown in Table 2. Face aging involves both deformation and texture synthesis. For example, deformation of face shape and eye size constitutes the main difference between babies and adults, while texture synthesis, like adding wrinkles, is essential when translating a middle-aged man into a senior man. In Table 2, both UGAN and UGAN-S are significantly better than StarGAN on all age groups, and UGAN achieves the best performance, with a substantial relative drop in mean intra FID compared to StarGAN.
Makeup Editing: The comparison of results on the MAKEUP-A5 dataset is shown in Table 3. Both texture and color need to be altered in makeup editing. UGAN achieves the best performance in all categories, with a clear decline in mean intra FID compared to StarGAN.
Expression Editing: The comparisons on the CFEE dataset are shown in Table 1. The expression editing task aims to change the emotion of a face through deformation. The CFEE dataset contains 22 kinds of fine-grained expressions, which makes the expression editing problem very challenging. From the results, we can conclude that UGAN again achieves the best performance, with a significant reduction in mean intra FID over both StarGAN and UGAN-S.
AMT User Studies: For further evaluation, user studies are conducted on AMT (https://www.mturk.com/) to compare StarGAN and our method. Since UGAN outperforms UGAN-S in mean intra FID, only UGAN is compared. For the datasets mentioned above, we synthesize pairs of images per domain with UGAN and StarGAN. All image pairs are shown to Turkers, who are asked to choose the better one considering image realism and satisfaction of the target characteristics. Tables 4, 5 and 6 show the percentage of cases in which our method beats StarGAN. For example, in Table 5, when translating faces to a young age group, our method wins in the majority of cases, which again shows its advantage when transforming a face into childhood. Generally, our method is better than StarGAN on every category of each dataset.
Tackling the phenomenon of source retaining: The effect of erasing source characteristics in face aging is shown in Table 7. A well-trained ResNet-18 (for age recognition) is adopted to extract features from its second-to-last layer. We calculate the average cosine similarity between the features of all source-image and translated-image pairs from the test set. Intuitively, the smaller the similarity, the more thoroughly the source characteristics are erased. Since the images of adjacent age groups are similar, we only consider translation across a large age gap, e.g., across three age groups. In Table 7, we perform the experiments on multiple age group gaps, and the similarities of UGAN are smaller on all of them.
4.5 Qualitative Experiments
Face Aging: Results on the face aging dataset are shown in Figure 4. In the first example, the input image is a middle-aged man. Comparing the results for the child age group (second column), our result has obvious childish characteristics, e.g., a round face, big eyes, and a small nose, while the result of StarGAN does not look like a child. Another example is the oldest age group (last column): our result has white hair and wrinkles, while StarGAN still produces a middle-aged face. Similar observations can be drawn from the second example with a woman as input. These results show that UGAN can explicitly erase the characteristics of the source image via the source classifier in the discriminator.
Makeup editing: Two exemplary results on the MAKEUP-A5 dataset are displayed in Figure 5. For the first woman, comparing the results in the second (retro) and last (smoky) columns, we find that the blusher and eye shadows produced by UGAN are more natural, while StarGAN draws asymmetrical blusher and strange eye shadows. The result of UGAN is also relatively natural when translating to a naked face. We therefore conclude that UGAN has learned precise color and texture characteristics of different makeups.
Expression editing: Results on the CFEE dataset are demonstrated in Figure 6. We have the following observations. First, UGAN can well edit all 22 kinds of fine-grained facial expressions. It also captures the subtle differences between basic and compound expressions; for example, "Happily surprised" has bigger eyes and raised eyebrows compared to "Happy". Besides, the results of StarGAN under various expressions still retain the original expressions. For example, when changing the man from "Hatred" to "Happy", the result of StarGAN still has tight brows. Comparatively, UGAN effectively synthesizes the "Happy" expression by generating a grin and relaxed brows while erasing the tight brows.
5 Conclusion
The phenomenon of source retaining often occurs in the image-to-image translation task. We propose Untraceable GAN to tackle it, in which the discriminator estimates which domain a datum is translated from. The translator is trained to fool the discriminator into believing that the generated datum is translated from the target domain, so that source-only characteristics are erased accordingly. In this way, the source domain of the synthesized image becomes untraceable. Extensive experiments on three tasks demonstrate our significant advantages over the state-of-the-art StarGAN. More results can be found in the supplementary material.
The phenomenon of source retaining and the idea of UGAN are universal. For example, language translation often preserves the grammatical structure of the source language, and UGAN may serve as a solution to improve translation quality. We plan to study this idea in depth and apply it to broader fields.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv:1701.07875, 2017.
-  D. Berthelot, T. Schumm, and L. Metz. Began: boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.
-  A. Brock, J. Donahue, and K. Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv:1809.11096, 2018.
-  Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
-  T. Dekel, C. Gan, D. Krishnan, C. Liu, and W. T. Freeman. Sparse, smart contours to represent and edit images. In CVPR, 2018.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
-  E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015.
-  D. Dowson and B. Landau. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 1982.
-  S. Du, Y. Tao, and A. M. Martinez. Compound facial expressions of emotion. PNAS, 2014.
-  P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, 2018.
-  I. J. Goodfellow, J. Pougetabadie, M. Mirza, B. Xu, D. Wardefarley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. In NIPS, 2014.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2014.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  A. Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard gan. arXiv:1807.00734, 2018.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv:1710.10196, 2017.
-  T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv:1703.05192, 2017.
-  G. Lample, A. Conneau, L. Denoyer, and M. Ranzato. Unsupervised machine translation using monolingual corpora only. ICLR, 2018.
-  M. Li, W. Zuo, and D. Zhang. Deep identity-aware transfer of facial attributes. arXiv:1610.05586, 2016.
-  T. Li, R. Qian, C. Dong, S. Liu, Q. Yan, W. Zhu, and L. Lin. Beautygan: Instance-level facial makeup transfer with deep generative adversarial network. In MM, 2018.
-  S. Liu, Y. Sun, D. Zhu, R. Bao, W. Wang, X. Shu, and S. Yan. Face aging with contextual generative adversarial nets. In MM, 2017.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. In ICCV, 2017.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. arXiv:1802.05957, 2018.
-  T. Miyato and M. Koyama. cgans with projection discriminator. arXiv:1802.05637, 2018.
-  S. Nowozin, B. Cseke, and R. Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv:1610.09585, 2016.
-  G. Perarnau, J. van de Weijer, B. Raducanu, and J. M. Álvarez. Invertible conditional gans for image editing. arXiv:1611.06355, 2016.
-  A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer. Ganimation: Anatomically-aware facial animation from a single image. In ECCV, 2018.
-  G.-J. Qi. Loss-sensitive generative adversarial networks on lipschitz densities. arXiv:1701.06264, 2017.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In CVPR, 2017.
-  T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. arXiv:1711.11585, 2017.
-  S. Yao, T. M. H. Hsu, J.-Y. Zhu, J. Wu, A. Torralba, B. Freeman, and J. Tenenbaum. 3d-aware scene manipulation via inverse graphics. arXiv:1808.09351, 2018.
-  Z. Yi, H. R. Zhang, P. Tan, and M. Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV, 2017.
-  G. Zhang, M. Kan, S. Shan, and X. Chen. Generative adversarial network with spatial attention for face attribute editing. In ECCV, 2018.
-  Z. Zhang, Y. Song, and H. Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017.
-  J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. arXiv:1609.03126, 2016.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv:1703.10593, 2017.
-  J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In NIPS, 2017.