The recent advances in several fundamental computer vision tasks [16, 4, 5] are unequivocally associated with the availability of huge annotated datasets for training deep architectures of increasing sophistication [8, 17, 24]. However, collecting or annotating such datasets is often challenging or expensive. A cheap and manageable alternative is to resort to computer gaming software [7, 38, 39] to render realistic virtual worlds; such software can supply unlimited amounts of training data and can also simulate real-world scenarios that may be otherwise difficult to observe. Unfortunately, using data from a synthetic domain often introduces biases in the learned model, resulting in a domain shift that might hurt the performance of a downstream task [33, 49].
A standard way to account for domain shift is to adapt the synthetic images so that their statistics match those of the real domain. This is the classical domain adaptation problem [3, 14, 12], commonly called image-to-image translation when done on image pixels [40, 56, 46]. Most such translation algorithms require corresponding image pairs from the two domains [19, 11, 20]. However, thanks to generative adversarial networks (GANs), recent years have seen breakthroughs in unsupervised translation that does not require paired examples, needing instead only sets of examples from the two domains, which are much easier to obtain [46, 26, 25, 56]. However, the lack of correspondences makes the problem harder to solve, as one needs to estimate the image distributions and the adaptation function from the two sets – an ill-posed problem, since an infinite number of marginal distributions could have generated the finite examples in each set.
To ameliorate this intractability, recent methods make assumptions on the problem domains or the mapping function. For example, in Liu et al. [26, 25], the two domains are assumed to share a common latent space. In Cycle-GAN, the mapping is assumed to be invertible, i.e., a translated image, when mapped back, must be the same as the input image. Learning such a mapping can avoid some well-known pitfalls in GAN training, such as mode collapse, and allows learning a bijective mapping between the domains.
When dealing with real-world tasks, bijective mappings might not be sufficient to generate meaningful translations. For example, consider the real-to-synthetic translation task depicted in Figure 1. Here, a Cycle-GAN is trained on real images from the Cityscapes dataset and synthetic road-scene images produced by the Mitsubishi Precision Simulator (https://www.mpcnet.co.jp/e/e_product/sim/index.html).
As is clear, Cycle-GAN has learned an incorrect mapping between the classes ’trees’ and ’sky’, resulting in implausible translations. Nevertheless, such a mapping is invertible as per the Cycle-GAN cost function. This problem arises because, in typical translation tasks, the images in the two sets are assumed to be samples from the joint distribution of all their respective sub-classes (object segments), and the translation is a mapping between such joint distributions. Such a mapping (even with cycles) does not ensure that the marginals of the sub-classes (modes) are assigned correctly (e.g., sky → sky). To this end, we look beyond cyclic dependencies and incorporate semantic consistency into the translation process.
In this paper, we propose a novel GAN architecture for pixel-level domain adaptation, coined semantically-consistent GAN (Sem-GAN), that takes as input two unpaired sets, each consisting of tuples of images and their semantic segment labels, and learns a domain mapping function by optimizing the standard min-max generator-discriminator GAN objective. However, differently from the standard GAN, we pass the generated images through a semantic segmentation network (in addition to the discriminator); this network is trained to segment images in the target domain. If the translation is ideal, the objects being translated should inherit their appearance from the target domain while maintaining their identity from the source domain. For example, when a ’car’ in one domain is translated, a segmentation model trained to identify the ’car’ class in the target domain should assign the translated object to the same class. We use the discrepancy between the ground-truth semantic classes and their predictions as error cues (via a cross-entropy loss) for improving the generator via backpropagation.
Given that semantic segmentation is itself a difficult (and unsolved) computer vision problem, a natural question is how useful it can be to include such an imperfect module in a GAN setup. Our experiments show that a segmentation scheme that performs reasonably well (such as FCN) is sufficient to ensure semantic consistency, leading to better translations. Further, we propose to train the segmentation module jointly with the discriminator; as a result, its accuracy improves along with the generator-discriminator pair. A careful reader will have noticed that we are in fact solving a chicken-and-egg problem: on the one hand, we use the GAN to improve semantic segmentation, while on the other hand, we use segmentation to improve image-to-image translation. To clarify, we do not assume an accurate segmentation model; a model that performs reasonably well suffices, and it can be obtained by training a semantic segmentation model in a supervised setup using limited data; for example, we use about 1K annotated images in our experiments on the Cityscapes dataset. Our goal is to use this model to improve domain adaptation, so that we can adapt a large number of synthetic images to the target domain and train a better segmentation model on that domain.
The segmentation models can help us further. As alluded to above, the main challenge in standard image-translation models is the inability of the network to find the correct mode mapping. We explore this facet of our framework by introducing semantic dropout: stochastically blanking out semantic classes from the inputs so that the network learns to map specific classes independently. We present experiments on a variety of image-to-image translation tasks and show that our scheme significantly outperforms Cycle-GAN.
Before moving on, we summarize below our main contributions:
We introduce a novel feedback to the generator in GANs using predictions from a semantic segmentation model.
We propose a GAN architecture that includes a segmentation module, with the whole framework trained in an end-to-end manner.
We introduce semantic dropout for improving our consistency loss.
We present experiments on several image-to-image translation tasks, demonstrating state-of-the-art results (sometimes by more than 20% in FCN score against Cycle-GAN). Further, we provide experiments using the proposed translation for training semantic segmentation models using large synthetic datasets, and show that our translations lead to significantly better segmentation models than the state of the art (by 4-6% in mean IoU score).
2 Related Work
GANs [15, 43, 35, 1] allow learning complex data distributions by pitting two CNNs (a generator and a discriminator) against each other using an adversarial loss. The optimum of this non-convex min-max game is reached when the generator produces data that the discriminator cannot distinguish from real data. This key idea has led to major advances in applications requiring data synthesis, such as representation learning, image generation [50, 51, 35], text-to-image synthesis, inpainting [22], face synthesis, style transfer [20, 51, 40], and image editing.
The basic GAN [15, 35] framework is extended in Liu and Tuzel to model the joint distribution of paired images from two distinct domains by coupling two GANs via weight sharing. This scheme is extended in VAE-GAN for image-to-image translation using auto-encoders that embed images into a shared space on which the generators are conditioned. SimGAN replaces the noise input of traditional GANs with synthetic images and asks the generator to refine these images to look as real as possible. Similar to ours, SimGAN uses an FCN for translation consistency; however, this FCN is used to preserve the holistic structure of the images, not the class identities. In Cycle-GAN and dual-GAN, the translations are required to be bijective. However, as noted earlier, such bijective mappings need not preserve semantic consistency. Other forms of consistency have been explored in recent works: consistency between sketch boundaries in Sangkloy et al., perceptual consistency in DeepSim, geometric consistency in Xu et al., self-similarity in Deng et al., feature-level consistency in X-GAN, and domain alignment in Luo et al.
In other recent works, the translation is required to be invariant under a pre-defined criterion – such as classifier performance. However, their tasks are different from ours and do not assume the availability of target segmenters as we do. Semantic consistency using attention is presented in DA-GAN. In Sankaranarayanan et al., a neighborhood-preserving feature embedding is introduced. Similarly, Li et al. use a semantic-aware discriminator to preserve high-level appearances. In triangle-GAN, a semi-supervised approach is presented for paired examples. In bidirectional-GAN, the source class labels are used for semantic consistency. In one-sided GAN, the cycle loss is replaced by neighborhood constraints. In comparison to these works, we tackle a different problem in which we assume access to (approximate) segmentation networks that can extract the source and target class labels.
To summarize, while semantic consistency has been explored from multiple facets in prior works, we are unaware of any work that explores segmentation consistency in the way we propose. We believe ours is the first work that leverages advances in the segmentation arena into the GAN framework for the image-to-image translation problem.
3 Proposed Method
In this section, we present our Sem-GAN framework. To set the stage, we first review important prior work on GAN and Cycle-GAN, on which our scheme is based.
3.1 Problem Setup
Let $X$ and $Y$ be two image domains, and let $S_X \subset X$ and $S_Y \subset Y$ be sets of samples (images) from each domain, respectively. Further, let $x \in S_X$ and $y \in S_Y$ denote data samples. We assume $S_X$ and $S_Y$ are unpaired; however, the two domains share semantic segment classes (with plausibly varied appearances). Assuming $\mathcal{M}$ is the space of all segmentation masks with $L$ classes, let $P_X: X \to \mathcal{M}$ and $P_Y: Y \to \mathcal{M}$ be two functions mapping each pixel of an input image to its class label in a segmentation mask. If we have access to ground-truth masks, we use $M_x$ and $M_y$ to denote these masks for inputs $x$ and $y$, respectively. Ideally, $P_X(x) = M_x$ and $P_Y(y) = M_y$. In this case, $(x, M_x)$ and $(y, M_y)$ form an image-ground-truth pair from each domain (note, however, that the pairs $(x, M_x)$ and $(y, M_y)$ remain unpaired across domains).
3.2 Generative Adversarial Networks
A standard GAN consists of two convolutional neural networks (CNNs), termed a generator and a discriminator; the former takes random noise as input to produce an image, while the latter identifies whether its input is a true or a generated image. The parameters of the generator and discriminator CNNs are optimized against an adversarial loss in a min-max game [1, 15, 35].
Extending this idea to an image-to-image translation setting, we define two generators, $G_{XY}: X \to Y$ and $G_{YX}: Y \to X$, and two binary discriminators, $D_X$ and $D_Y$, where $D_Y$ distinguishes real samples $y \in S_Y$ from translated images $G_{XY}(x)$, and $D_X$ distinguishes $x \in S_X$ from $G_{YX}(y)$. Here, with a slight abuse of notation, we write $\hat{S}_Y = G_{XY}(S_X)$ and $\hat{S}_X = G_{YX}(S_Y)$ for the sets of fake images produced by the generators. To learn the parameters of the generators and the discriminators, we define the following adversarial losses using binary cross-entropy:
$$\mathcal{L}_{\mathrm{adv}}(G_{XY}, D_Y) = \mathbb{E}_{y \sim S_Y}\left[\log D_Y(y)\right] + \mathbb{E}_{x \sim S_X}\left[\log\big(1 - D_Y(G_{XY}(x))\big)\right], \qquad (1)$$
$$\mathcal{L}_{\mathrm{adv}}(G_{YX}, D_X) = \mathbb{E}_{x \sim S_X}\left[\log D_X(x)\right] + \mathbb{E}_{y \sim S_Y}\left[\log\big(1 - D_X(G_{YX}(y))\big)\right]. \qquad (2)$$
While (1) and (2) represent a non-convex game whose optimum parameters correspond to saddle points (and are thus typically difficult to optimize), it is often seen that with suitable heuristics [15, 35, 43] and careful choices for the loss [1, 31], the problem converges to practically useful solutions.
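To make the adversarial objective concrete, the following is a minimal numpy sketch of the binary cross-entropy losses in (1) and (2) for one generator-discriminator pair; the function names and the non-saturating generator loss are our illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy between discriminator scores in (0, 1) and 0/1 targets.
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def adversarial_losses(d_y_on_real, d_y_on_fake):
    # Discriminator D_Y is trained to score real images y as 1 and translated
    # images G_XY(x) as 0; the generator G_XY is trained so that D_Y scores
    # its outputs as 1 (the usual non-saturating formulation).
    d_loss = bce(d_y_on_real, np.ones_like(d_y_on_real)) + \
             bce(d_y_on_fake, np.zeros_like(d_y_on_fake))
    g_loss = bce(d_y_on_fake, np.ones_like(d_y_on_fake))
    return d_loss, g_loss
```

A confident discriminator (scores near 1 on real, near 0 on fake) drives its own loss toward zero while the generator loss grows, and vice versa, which is exactly the min-max tension described above.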
3.3 Cycle-Consistent GAN
An important problem one often encounters when training GANs is mode collapse, which happens when the generators learn to produce only a few samples from the true data domain without completely spanning it. Given that the discriminator loss does not enforce diversity in its inputs, the optimization may converge to such local solutions. Among several workarounds proposed to tackle this problem [1, 35, 43], one that has been very promising in the image-translation setting is Cycle-GAN, which adds constraints to the GAN objective that enforce diversity implicitly. Specifically, the Cycle-GAN loss asks for the translated data to be re-translated back to their original inputs. Mathematically, this loss can be written as:
$$\mathcal{L}_{\mathrm{cyc}}(G_{XY}, G_{YX}) = \mathbb{E}_{x \sim S_X}\left[\left\lVert G_{YX}(G_{XY}(x)) - x \right\rVert\right] + \mathbb{E}_{y \sim S_Y}\left[\left\lVert G_{XY}(G_{YX}(y)) - y \right\rVert\right], \qquad (3)$$
where $\lVert\cdot\rVert$ is a suitable norm. Optimizing this requirement within the GAN formulation (1), (2) automatically demands that the generators $G_{XY}$ and $G_{YX}$ learn unique mappings that are invertible to the original inputs, thereby elegantly avoiding the collapse of the data modes. The cyclic constraints are depicted in Figure 2(b).
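The cycle constraint can be sketched in a few lines; here is an illustrative version using an $\ell_1$ norm (one common choice for the norm above), with the generators passed in as plain callables:

```python
import numpy as np

def cycle_loss(x, y, g_xy, g_yx):
    # L1 reconstruction error after a round trip through both generators:
    # x -> G_XY(x) -> G_YX(G_XY(x)) should land back on x, and symmetrically for y.
    return float(np.mean(np.abs(g_yx(g_xy(x)) - x)) +
                 np.mean(np.abs(g_xy(g_yx(y)) - y)))
```

For a pair of mutually inverse generators the loss is exactly zero; any non-inverse pair is penalized, which is what discourages collapsed mappings.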
While the cyclic constraint has produced several compelling image-to-image translation results on domains that are often uni-modal (such as horse-zebra or day-night), it is theoretically unclear how the scheme performs in a multi-modal setup. When there are multiple modes in either domain, any bijective mapping between the modes satisfies the invertibility constraint in (3), as in the example depicted in Figure 1. Recently, this assignment problem has been studied by Galanti et al., who hypothesize that constraining the space of possible translation functions (by controlling the capacity of the CNNs) may allow learning minimal-complexity mappings. While this result is interesting, it may be practically difficult to achieve. Instead, we seek to constrain the possible mappings by guiding the generators to learn mappings that are semantically consistent with respect to a segmentation loss, viz. Sem-GAN.
3.4 Semantically-Consistent GAN
In this section, we present our Sem-GAN framework. Re-using notation from Section 3.1, we assume to have segmentation functions $P_X$ and $P_Y$ that are trained to segment images from their respective domains. We do not assume that these segmenters are necessarily trained on $S_X$ and $S_Y$; they could be trained on external datasets similar to our domains. Using these segmenters, our semantic consistency constraint is written as:
$$\mathcal{L}_{\mathrm{sem}}(G_{XY}, G_{YX}) = \mathbb{E}_{x \sim S_X}\left[\ell\big(P_Y(G_{XY}(x)), P_X(x)\big)\right] + \mathbb{E}_{y \sim S_Y}\left[\ell\big(P_X(G_{YX}(y)), P_Y(y)\big)\right], \qquad (4)$$
where $\ell$ is a suitable loss comparing two segmentation masks, e.g., the cross-entropy loss used in FCN. Specifically, in (4), we enforce that the segmentation of $x$ by a function trained on images from domain $X$ should be preserved by a segmentation function trained on images from $Y$ when applied to the translated image. When ground-truth semantic labels $M_x$ and $M_y$ are available for either domain, we replace $P_X(x)$ by $M_x$ and $P_Y(y)$ by $M_y$ in (4). In this case, Eq. (4) can be written as:
$$\mathcal{L}_{\mathrm{sem}}(G_{XY}, G_{YX}) = \mathbb{E}_{x \sim S_X}\left[\ell\big(P_Y(G_{XY}(x)), M_x\big)\right] + \mathbb{E}_{y \sim S_Y}\left[\ell\big(P_X(G_{YX}(y)), M_y\big)\right]. \qquad (5)$$
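A minimal sketch of the per-pixel cross-entropy term in this constraint, assuming the target-domain segmenter outputs per-pixel class probabilities and the source-side labels are an integer map (either the segmenter's prediction or the ground-truth mask); the function name is ours:

```python
import numpy as np

def semantic_consistency_loss(probs_translated, source_mask):
    # probs_translated: (H, W, C) softmax output of the target-domain segmenter
    # applied to the translated image, e.g. P_Y(G_XY(x)).
    # source_mask: (H, W) integer labels from P_X(x), or the ground truth M_x.
    num_classes = probs_translated.shape[-1]
    probs = np.clip(probs_translated, 1e-7, 1.0)
    one_hot = np.eye(num_classes)[source_mask]          # (H, W, C)
    return float(-np.mean(np.sum(one_hot * np.log(probs), axis=-1)))
```

If the translated image keeps every class identity, the segmenter's distribution concentrates on the source labels and the loss goes to zero; a uniform (uninformative) prediction over $C$ classes costs $\log C$ per pixel.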
3.5 Overall Architecture
In Figure 4, we present our complete Sem-GAN architecture, combining the three losses. Notably, we include the two segmentation modules, $P_X$ and $P_Y$, as part of the setup; they are trained end-to-end alongside the generators and the discriminators. Using hyperparameters $\lambda_{\mathrm{cyc}}$ and $\lambda_{\mathrm{sem}}$, our full loss is given by:
$$\mathcal{L} = \mathcal{L}_{\mathrm{adv}}(G_{XY}, D_Y) + \mathcal{L}_{\mathrm{adv}}(G_{YX}, D_X) + \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}} + \lambda_{\mathrm{sem}}\,\mathcal{L}_{\mathrm{sem}}.$$
With such an end-to-end architecture that learns the segmenters and the discriminators together, there is a subtle but important risk. Since $P_X$ and $P_Y$ are learned alongside $G_{XY}$ and $G_{YX}$, it is often observed empirically that the segmenters learn unrealistic appearances as valid ground-truth classes while the generators are still in their learning phase. For example, suppose that in early epochs the generator has not yet started translating valid ’car’ images and instead translates the appearance of a ’person’ to the ’car’ class (this is possible as we do not have paired data). Now, when back-propagating the error to update the parameters of the segmenter using the ground-truth mask for the translated image, the segmenter may incorrectly learn to map the ‘person’ appearance to the ’car’ class. This crucial issue may defeat semantic consistency. To circumvent this problem, we optimize the segmenters using their ground-truth labels alongside updating the discriminator parameters; i.e., we do not use the generator outputs to train the segmenters until the segmenters are accurate.
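The key scheduling point above — segmenters never see generator outputs during their own updates — can be sketched as one training iteration; the callback names are hypothetical stubs, not the paper's API:

```python
def sem_gan_step(update_discriminators, update_segmenters, update_generators):
    # One Sem-GAN training iteration (a sketch with hypothetical stub callbacks).
    # Crucially, the segmenters are updated only from real images and their
    # ground-truth masks -- never from generator outputs -- so early, inaccurate
    # translations cannot teach them wrong class appearances.
    update_discriminators()   # D_X, D_Y on real vs. translated images
    update_segmenters()       # P_X, P_Y on (real image, ground-truth mask) pairs
    update_generators()       # G_XY, G_YX against adversarial + cycle + semantic losses
```

The generators still receive gradients through the (frozen-for-this-step) segmenters via the semantic consistency loss, which is how segmentation feedback reaches the translation without corrupting the segmenters.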
3.6 Semantic Dropout
The availability of segmenters $P_X$ and $P_Y$ allows us to modify our inputs so that the translation networks can be trained more effectively. Specifically, we can arbitrarily mask out object classes from both input images; as a result, a generator can learn to map corresponding classes individually (mode-to-mode translation, rather than translating the joint distribution of all labels together). Precisely, let $M^c$ denote a segment mask for class $c$, which is zero for all classes except class $c$ (where it is unity). To make the GAN learn class-to-class translation, we propose to transform input images $x$ and $y$ to $\tilde{x} = x \odot M_x^c$ and $\tilde{y} = y \odot M_y^c$, where $\odot$ is the element-wise product. We then use these transformed images in the above losses.
A problem with this scheme is that, while the network learns to transform individual classes using semantic dropout, it may miss learning the inter-class context within images. To this end, we apply the dropout stochastically: with some probability, we select a label randomly from the classes common to a pair of tuples $(x, M_x)$ and $(y, M_y)$. Then, using the respective ground-truth masks, we create new semantic masks $M_x^c$ and $M_y^c$, which are used to select the respective image pixels to generate $\tilde{x}$ and $\tilde{y}$ as described above. The full dropout pipeline is provided in Algorithm 1.
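The stochastic dropout step can be sketched as follows, assuming (H, W, C) images and (H, W) integer label maps; this is an illustrative reading of Algorithm 1, not its verbatim implementation:

```python
import numpy as np

def semantic_dropout(img_x, mask_x, img_y, mask_y, p, rng):
    # With probability p, keep only the pixels of one class common to both
    # images (zeroing all others) so the generators see class-to-class
    # examples; otherwise pass the images through unchanged, preserving
    # inter-class context. Images: (H, W, C); masks: (H, W) label maps.
    if rng.random() >= p:
        return img_x, img_y
    common = np.intersect1d(np.unique(mask_x), np.unique(mask_y))
    if common.size == 0:
        return img_x, img_y
    c = rng.choice(common)  # same class selected for both domains
    return img_x * (mask_x == c)[..., None], img_y * (mask_y == c)[..., None]
```

Choosing the same class $c$ for both domains is what makes each masked pair a mode-to-mode training example.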
4 Experiments
We use three datasets and six image-translation tasks to demonstrate the improvements afforded by Sem-GAN. Details of these datasets, tasks, network architectures, and our evaluation protocols follow. We also report results on improving semantic segmentation accuracy.
4.1 Datasets
Cityscapes (CS): consists of 5K densely annotated real-world road-scene images collected from 50 European cities and annotated for 30 semantic classes. The dataset has moderate diversity in weather and lighting conditions.
Mitsubishi Precision (MP): consists of about 20K road-scene images generated by the Mitsubishi Precision Co. simulator and densely annotated for 36 semantic classes. The dataset has high-resolution images from varied weather (summer, winter, rain), lighting conditions (dawn, dusk, night), and object appearances.
Viper: (a recent version of the popular GTA5 dataset) consists of 250K frames from driving videos in realistic virtual worlds generated by the Unity gaming engine. The dataset is densely annotated for 31 semantic classes and includes images from varied weather and lighting conditions.
4.2 Data Preparation
As the images in our datasets are of different resolutions, we resize them to a common size of 540 × 860 pixels. Further, since the synthetic images come from video sequences, nearby frames may be very similar; we therefore uniformly sample 5K frames from each synthetic dataset. We map the semantic classes from all the datasets to a common subset, with the Cityscapes annotations as the reference. We find that 19 classes are common, and use only these to enforce semantic consistency. Details are available in the supplementary material. We report experiments on five bi-directional translation tasks, namely (i) CS ↔ MP, (ii) CS ↔ Viper, (iii) CS Summer ↔ MP Winter, (iv) CS Day ↔ MP Night, and (v) MP Summer ↔ MP Winter. We also present experiments on the task of mapping segmentation masks to real images (Seg → CS) to show that conditioning the generators directly on the segment labels is not a replacement for our scheme. In this case, we use unpaired translations, i.e., we have sets of masks and images without correspondences; thus, our setting is different from that of pix2pix. Note that, even though we use limited ground-truth segmentation masks on the Cityscapes dataset, our problem remains unpaired as we do not assume correspondences between such image-label pairs across domains.
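Mapping each dataset's native class ids onto the shared 19-class label space can be done with a simple lookup table; the `id_map` below is a hypothetical example — the actual correspondence table is in the supplementary material:

```python
import numpy as np

def remap_labels(mask, id_map, ignore_id=255):
    # Map dataset-specific class ids onto the shared Cityscapes-style label
    # space via a lookup table; ids without a counterpart become ignore_id.
    lut = np.full(int(mask.max()) + 1, ignore_id, dtype=np.int64)
    for src, dst in id_map.items():
        if src < lut.size:
            lut[src] = dst
    return lut[mask]
```

Vectorized lookup keeps the remapping cheap even for full-resolution masks, and the `ignore_id` convention lets losses and metrics skip classes outside the common subset.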
4.3 Network Architectures
We implemented Sem-GAN starting from the code shared as part of Cycle-GAN (https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) using PyTorch. For the generators and discriminators in the GAN, we use a sequence of 9 residual network blocks. The Adam optimizer is used for training the networks, with an initial learning rate of . The validation accuracy seems to saturate in about 50 epochs for all our tasks, except the Seg → CS task, for which we use 200 epochs. For our segmentation networks, we use the FCN implementation in PyTorch (https://github.com/ZijunDeng/pytorch-semantic-segmentation), which is cheaper and faster to train alongside the other modules in our framework than deeper networks such as Deeplab and PSP-Net. In FCN, we use a VGG-16 backbone and apply a cross-entropy loss on the final output layer to enforce the Sem-GAN criterion.
4.4 Training, Testing, and Evaluation
We define training, validation, and test sets by randomly sampling each dataset into 85:5:10 splits. The images are cropped to pixels; the training inputs are cropped randomly during training (as part of data augmentation), while the validation and test images are center-cropped. The segmentation networks are pre-trained on the respective training sets to recognize 19 semantic classes. Note that we use only images from ideal weather conditions (well-lit and good weather) for this training, while networks for other conditions (day, night, winter, etc.) are learned jointly with the other modules in Sem-GAN. For training the segmenters, we fine-tune a VGG-16 model with batches of 16 images, optimizing the parameters using stochastic gradient descent with a learning rate of and a momentum of 0.9. During testing, we do not use the segmentation pipeline; instead, we directly forward-pass the source images through the generators and gather the translated images for evaluation.
For quantitative evaluation, we use the semantic segmentation accuracy of the translated images, computed by a segmentation model trained on the respective domain. To ensure unbiased evaluation, we report results using two segmentation networks, namely (i) FCN and (ii) PSP-Net. The evaluation networks are trained separately from Sem-GAN on training sets from the respective domains. Using these models, we report (i) the overall accuracy (Over. Acc) – the number of correctly predicted pixels divided by the total number of annotated pixels, (ii) the average class accuracy (Avg. Acc) – the average of per-class accuracies, and (iii) the mean intersection-over-union (mIoU) score over all classes. On the 19 evaluation classes, FCN achieves mIoU of 64.1%, 56.2%, and 51.7% on the test sets of the MP, Viper, and CS datasets, respectively, while PSP-Net gets 73.4%, 71.1%, and 61.1%. We use 1K images randomly sampled from the Cityscapes dataset for training the respective segmentation models on this dataset.
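The overall accuracy and mIoU metrics above can be computed from a single confusion matrix; a minimal sketch (the function name is ours):

```python
import numpy as np

def segmentation_scores(pred, gt, num_classes):
    # Confusion-matrix based metrics: overall pixel accuracy and the mean
    # intersection-over-union over classes that occur in either prediction
    # or ground truth.
    cm = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(float)                  # per-class true positives
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter    # pred + gt - intersection
    overall_acc = inter.sum() / cm.sum()
    valid = union > 0                                  # skip absent classes
    miou = (inter[valid] / union[valid]).mean()
    return float(overall_acc), float(miou)
```

Skipping classes with empty union matches the observation later that not every class appears in a randomly chosen test set.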
4.5 Semantic Dropout
We first analyze the merit of semantic dropout. This scheme has a parameter, the probability of dropping segments; a higher value drops segments too frequently, and as a result, the generators may not be able to learn their spatial contexts. To this end, in Figures 5(a) and 5(b), we plot the mIoU for the MP ↔ CS tasks against varying dropout probabilities. As is clear from the plots, semantic dropout improves the performance of translation significantly; e.g., on the CS → MP task, the gap between different settings of this probability is 20%. We also see that higher values show lower accuracy. Thus, we use different values for CS → MP and for MP → CS and Viper → CS.
4.6 State-of-the-Art Comparisons
In Tables 1 and 2, we compare Sem-GAN against three state-of-the-art image translators: (i) Cycle-GAN, (ii) VAE-GAN, and (iii) style transfer using perceptual losses. We also report performances with and without semantic dropout (SM). As is clear, Sem-GAN (+ SM) outperforms Cycle-GAN in almost all tasks, especially on the challenging mIoU criterion. Specifically, on the MP → CS and Viper → CS tasks, our scheme is nearly 20% better in classification accuracy. Similar results are observed on other tasks as well, except the MP(Summer) ↔ MP(Winter) translation. In this case, the source and target domains are inherently the same except for simulated snow in the latter, which can be undone by the generator, thereby perfectly aligning the domains. In Figures 5(d) and 5(c), we analyze the per-class IoU for the MP ↔ CS tasks. Note that not all classes are present in our (randomly chosen) test set. We see that Sem-GAN shows superior translations on most classes. In Figure 6, qualitative results are presented. On the Mask → CS task, Sem-GAN guides the error from the segmenters to improve the appearance of the generated segments, as demonstrated by the results in Tables 1 and 2, leading to better results than the other models.
4.7 Improvements on Semantic Segmentation
Next, we analyze the merit of Sem-GAN for improving the original task, namely training semantic segmentation models via synthetic data. Our analysis is loosely based on prior work, but uses our datasets and evaluation models. We use 10K images from the two synthetic datasets and 200 images from the Cityscapes (CS) dataset. We translate the synthetic (source) images to the CS domain and use the source ground-truth labels for training two segmentation models. We use a test set of 500 CS images for evaluating our models. All the algorithms are trained using SGD with a learning rate of 0.0001 for 50 epochs. As is clear from Table 5, Cycle-GAN sometimes reduces performance relative to no adaptation (e.g., Cy(VP)), likely due to the correspondence-mismatch problems alluded to earlier. However, Sem-GAN improves image adaptation significantly compared to Cycle-GAN, and leads to more accurate segmentation models than when not using adaptation; e.g., "CS only" with FCN8s achieves 19.9% mIoU, while using Sem-GAN, i.e., CS+Sem(VP), this improves to 34.4%. Similarly, using PSP-Net, "CS only" to CS+Sem(VP) improves from 24.4% to 44.4%, a 20% improvement. Further, note that the improvement from CS+VP to CS+Sem(VP) is nearly 6%; the former uses VP images without any adaptation. More comparisons and results are available in the supplementary material. The code for the paper will be made publicly available.
5 Conclusions
We presented an image-to-image translation framework that enforces semantic consistency, using segment class identities, to achieve realistic translations. Modeling this consistency as a novel loss, we presented an end-to-end learnable GAN architecture. We demonstrated the advantages of our framework on three datasets and six translation tasks. Our results clearly show that semantic consistency, as proposed in this paper, is very important for ensuring the quality of the translation.
References
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
-  S. Benaim and L. Wolf. One-sided unsupervised domain mapping. In NIPS, 2017.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, 2017.
-  J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, pages 3213–3223, 2016.
-  C. R. de Souza, A. Gaidon, Y. Cabon, and A. L. Pena. Procedural generation of videos to train deep action recognition networks. In CVPR, 2017.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
-  W. Deng, L. Zheng, G. Kang, Y. Yang, Q. Ye, and J. Jiao. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. arXiv preprint arXiv:1711.07027, 2017.
-  A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
-  D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015.
-  B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013.
-  Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and L. Carin. Triangle generative adversarial networks. In NIPS, 2017.
-  Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015.
-  J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213, 2017.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
-  P. Li, X. Liang, D. Jia, and E. P. Xing. Semantic-aware grad-gan for virtual-to-real urban scene adaption. arXiv preprint arXiv:1801.01726, 2018.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
-  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
-  S. Liu, Y. Sun, D. Zhu, R. Bao, W. Wang, X. Shu, and S. Yan. Face aging with contextual generative adversarial nets. In ACM on Multimedia Conference, 2017.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
-  Z. Luo, Y. Zou, J. Hoffman, and L. Fei-Fei. Label efficient learning of transferable representations across domains and tasks. In NIPS, pages 164–176, 2017.
-  S. Ma, J. Fu, C. W. Chen, and T. Mei. Da-gan: Instance-level image translation by deep attention generative adversarial networks. arXiv preprint arXiv:1802.06454, 2018.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. In ICCV, 2017.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
-  V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual domain adaptation: A survey of recent advances. IEEE signal processing magazine, 32(3):53–69, 2015.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
-  S. R. Richter, Z. Hayder, and V. Koltun. Playing for benchmarks. In ICCV, 2017.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016.
-  R. Rosales, K. Achan, and B. J. Frey. Unsupervised image translation. In ICCV, 2003.
-  A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, and K. Murphy. XGAN: Unsupervised image-to-image translation for many-to-many mappings. arXiv preprint arXiv:1711.05139, 2017.
-  P. Russo, F. M. Carlucci, T. Tommasi, and B. Caputo. From source to target and back: symmetric bi-directional adaptive gan. arXiv preprint arXiv:1705.08824, 2017.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
-  P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays. Scribbler: Controlling deep image synthesis with sketch and color. In CVPR, 2017.
-  S. Sankaranarayanan, Y. Balaji, A. Jain, S. N. Lim, and R. Chellappa. Unsupervised domain adaptation for semantic segmentation with gans. arXiv preprint arXiv:1711.06969, 2017.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In CVPR, 2017.
-  Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
-  T. Galanti, L. Wolf, and S. Benaim. The role of minimal complexity functions in unsupervised learning of semantic mappings. arXiv preprint arXiv:1709.00074, 2017.
-  M. Wang and W. Deng. Deep visual domain adaptation: A survey. arXiv preprint arXiv:1802.03601, 2018.
-  T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. arXiv preprint arXiv:1711.11585, 2017.
-  X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
-  W. Xu, Y. Li, and C. Lu. Generating instance segmentation annotation by geometry-guided gan. arXiv preprint arXiv:1801.08839, 2018.
-  Z. Yi, H. Zhang, P. Tan, and M. Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In CVPR, 2017.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
-  J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
Appendix A Additional Details and Comparisons
As mentioned in the main paper, we use the 19 semantic segmentation classes of the Cityscapes dataset for training our Sem-GAN framework. These classes are: 1. 'road', 2. 'sidewalk', 3. 'building', 4. 'wall', 5. 'fence', 6. 'pole', 7. 'traffic light', 8. 'traffic sign', 9. 'vegetation', 10. 'terrain', 11. 'sky', 12. 'person', 13. 'rider', 14. 'car', 15. 'truck', 16. 'bus', 17. 'train', 18. 'motorcycle', 19. 'bicycle'. Below, we provide the per-class IoU for the following tasks: Viper→CS (Figure 7), MP→CS (Figure 8), CS summer→MP winter (Figure 9), and Seg→Image (CS) (Figure 10).
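For reference, the same class list in code form (a convenience sketch for readers reproducing the evaluation; the variable name is ours, not from the paper's implementation, and index `i` corresponds to class `i+1` in the enumeration above):

```python
# The 19 Cityscapes training classes used by Sem-GAN, as listed above.
CITYSCAPES_CLASSES = [
    "road", "sidewalk", "building", "wall", "fence", "pole",
    "traffic light", "traffic sign", "vegetation", "terrain", "sky",
    "person", "rider", "car", "truck", "bus", "train",
    "motorcycle", "bicycle",
]
```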
Appendix B Ablative Analysis
In Table 4, we provide an ablative study of the various components of our framework. Interestingly, we find that adding segmentation information to the translation process significantly improves accuracy over Cycle-GAN: 'no cycle + seg' is about 12% better than Cycle-GAN with the cycle constraint. This is perhaps because segmentation information makes the translation 'easier', whereas without it Cycle-GAN has to discover the mapping between the various segments automatically, which may lead to incorrect mappings. Adding cycle consistency improves performance further, and seg + cycle + SM performs best. Note that by 'no cycle' we mean that we use neither the cycle-consistency constraint nor the identity constraint, as in the implementation of Cycle-GAN.
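To illustrate the 'seg' term being ablated above, the segmentation-consistency idea can be sketched as a per-pixel cross-entropy between the segmenter's prediction on the translated image and the source label map. This is a minimal numpy sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def pixel_cross_entropy(probs, labels):
    """Mean per-pixel cross-entropy between softmax probabilities
    of shape (H, W, C) and an integer label map of shape (H, W)."""
    h, w, _ = probs.shape
    # Gather the probability assigned to the true class at each pixel.
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(np.clip(p, 1e-12, None)).mean())

def semantic_consistency(seg_probs_on_translation, source_labels):
    """Segmentation-consistency term (sketch): the target-domain segmenter,
    run on the translated image, should reproduce the source label map."""
    return pixel_cross_entropy(seg_probs_on_translation, source_labels)
```

In the full objective this term would be added to the adversarial (and, where used, cycle-consistency and identity) losses.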
Appendix C When Ground Truth Masks are Unavailable
As alluded to in the main paper, our scheme does not necessarily require ground truth semantic masks – we only need a segmentation model for each domain. To this end, we test this facet of our scheme on the task of translating 'horses' to 'zebras' using the dataset provided with Cycle-GAN. The dataset contains about 1300 images of horses and zebras. For the segmentation models, we use an FCN network trained on the MS-COCO dataset, which has 80 semantic classes including 'horse' and 'zebra'. We do not train these models within our Sem-GAN setup. Qualitative results from this experiment are provided in Figure 11. To ensure the translations are cross-domain – that is, that the source is, say, the 'horse' class and the target the 'zebra' class – when defining the consistency criteria we switch the labels of the source segmenter (which in this case will identify 'horse') to 'zebra', and vice versa for the other translation direction. For this task, we trained both Cycle-GAN and Sem-GAN for 200 epochs, using a 9-block ResNet for the generator.
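The label-switching step can be sketched as a simple relabeling of the segmenter's output. The class ids below are assumptions for illustration; the actual MS-COCO indices depend on the segmenter's label convention:

```python
import numpy as np

# Hypothetical class ids (illustration only; not from the paper).
HORSE, ZEBRA = 17, 22

def swap_horse_zebra(label_map, a=HORSE, b=ZEBRA):
    """Relabel class `a` pixels as `b` and vice versa, so the source
    segmenter's 'horse' regions are checked against 'zebra' predictions
    on the translated image (and conversely for the reverse direction)."""
    out = label_map.copy()
    out[label_map == a] = b
    out[label_map == b] = a
    return out
```

Applying the swap twice recovers the original label map, which is what makes the same trick usable for both translation directions.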
A point to be noted in this task is that, while the results of Cycle-GAN and Sem-GAN are broadly similar, the Sem-GAN translations are slightly better (qualitatively) when multiple classes are present in the images, such as humans (see, for example, the last two rows in Figure 11). This is because the MS-COCO segmentation dataset also includes a 'person' class. While the results look better, much room for improvement remains, especially in capturing the structure of objects within a segment.
Appendix D Additional Results on Semantic Segmentation Task
In addition to the results in Table 3 of the main paper, Table 5 provides further results on the semantic segmentation task when synthetic images (translated using Cycle-GAN or Sem-GAN) are used to train segmentation models. These additional results are for segmentation models trained only on translated synthetic images (without using real images from the target domain or their ground truths), such as Cy(VP) and Sm(VP). Interestingly, we find that using Sm(VP) alone is better than using VP alone (23.4% against 21.1% mIoU), and moving from MP alone to Sm(MP) increases mIoU from 13.0% to 22.8%, clearly demonstrating that our Sem-GAN leads to much better domain adaptation than using the synthetic images directly. We also see that Cy(MP) and Cy(VP) are inferior in performance.
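For completeness, the mIoU metric reported throughout these tables can be computed as the per-class intersection-over-union averaged over classes; a minimal sketch (our own helper, not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over all classes that appear in
    either the prediction or the ground truth label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Evaluation suites differ on whether classes absent from the ground truth are skipped or scored as zero; the sketch above skips them.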
Appendix E Qualitative Results
From Figure LABEL:fig:mms2cs_quals onwards, we provide additional qualitative results on the tasks we described in the main paper.