Sem-GAN: Semantically-Consistent Image-to-Image Translation

by   Anoop Cherian, et al.

Unpaired image-to-image translation is the problem of mapping an image in the source domain to one in the target domain, without requiring corresponding image pairs. To ensure the translated images are realistically plausible, recent works, such as Cycle-GAN, demand this mapping to be invertible. While this requirement demonstrates promising results when the domains are unimodal, its performance is unpredictable in a multi-modal scenario such as in an image segmentation task. This is because invertibility does not necessarily enforce semantic correctness. To this end, we present a semantically-consistent GAN framework, dubbed Sem-GAN, in which the semantics are defined by the class identities of image segments in the source domain as produced by a semantic segmentation algorithm. Our proposed framework includes consistency constraints on the translation task that, together with the GAN loss and the cycle-constraints, enforce that translated images inherit the appearances of the target domain while (approximately) maintaining their identities from the source domain. We present experiments on several image-to-image translation tasks and demonstrate that Sem-GAN improves the quality of the translated images significantly, sometimes by more than 20% in FCN score. Further, we show that semantic segmentation models trained with synthetic images translated via Sem-GAN lead to significantly better segmentation results than other variants.




1 Introduction

The recent advancements in several fundamental computer vision tasks [16, 4, 5] are unequivocally associated with the availability of huge annotated datasets for training deep architectures of increasing sophistication [8, 17, 24]. However, collecting or annotating such datasets is often challenging or expensive. An alternative, which is cheap and manageable, is to resort to computer gaming software [7, 38, 39] to render realistic virtual worlds; such software could supply unlimited amounts of training data and could also simulate real-world scenarios that may be otherwise difficult to observe. Unfortunately, using data from a synthetic domain often introduces biases in the learned model, resulting in a domain shift that might hurt the performance of a downstream task [33, 49].

(a) Real → Synthetic
(b) Cycle GAN
(c) Sem-GAN (Ours)
Figure 1: Translation of an image from the Cityscapes dataset (leftmost) to a synthetic domain. The state-of-the-art model Cycle-GAN [56] incorrectly maps 'trees' to 'sky'. Sem-GAN, which enforces semantic consistency in the translation process, produces more realistic translations.

A standard way to account for domain shift is to adapt the synthetic images so that their statistics match those of the real domain. This is the classical domain adaptation problem [3, 14, 12], commonly called image-to-image translation when done on image pixels [40, 56, 46]. Most such translation algorithms require corresponding image pairs from the two domains [19, 11, 20]. However, thanks to generative adversarial networks (GANs), recent years have seen breakthroughs in unsupervised translations that do not require paired examples, instead needing only sets of examples from the two domains, which are much easier to obtain [46, 26, 25, 56]. However, the lack of correspondences results in a harder problem to solve, as one needs to estimate the image distributions and the adaptation function from the two sets – an ill-posed problem, since an infinite number of marginal distributions could have generated the finite examples in each of these sets.

To ameliorate this intractability, recent methods make assumptions on the problem domains or the mapping function. For example, in Liu et al. [26, 25], the two domains are assumed to share a common latent space. In Cycle GAN [56], the mapping is assumed invertible, i.e., a translated image when mapped back must be the same as the input image. Learning such a mapping could avoid some well-known pitfalls in GAN training such as mode collapse, and could allow learning a bijective mapping between the domains.

When dealing with real-world tasks, bijective mappings might not be sufficient to generate meaningful translations. For example, consider the real-to-synthetic translation task depicted in Figure 1. Here, the Cycle-GAN is trained on real images from the Cityscapes dataset [6] and synthetic road scene images produced by the Mitsubishi Precision Simulator.

As is clear, Cycle-GAN has learned an incorrect mapping between the classes 'trees' and 'sky', resulting in implausible translations. Nevertheless, such a mapping is invertible as per the Cycle-GAN cost function. This problem happens because, in typical translation tasks, the images in the two sets are assumed to be samples from the joint distribution of all their respective sub-classes (object segments), and the translation is a mapping between such joint distributions. Such a mapping (even with cycles) does not ensure that the marginals of the sub-classes (modes) are assigned correctly (e.g., sky → sky). To this end, we look beyond cyclic dependencies and incorporate semantic consistency into the translation process.

In this paper, we propose a novel GAN architecture for pixel-level domain adaptation, coined semantically-consistent GAN (Sem-GAN), that takes as input two unpaired sets, each set consisting of tuples of images and their semantic segment labels, and learns a domain mapping function by optimizing the standard min-max generator-discriminator GAN objective [15]. However, differently from the standard GAN, we pass the generated images through a semantic segmentation network [28] (in addition to the discriminator); this network is trained to segment images in the target domain. It is expected that if the translation is ideal, the objects being translated will inherit their appearance from the target domain while maintaining their identity from the source domain. For example, when a 'car' class in one domain is translated, a segmentation model trained to identify the 'car' class in the target domain should assign the translated object to the same class. We use the discrepancy between the ground-truth semantic classes and their predictions as error cues (via cross-entropy loss) for improving the generator via backpropagation.

Given that semantic segmentation is itself a difficult (and unsolved) computer vision problem, a natural question is how useful it can be to include such an imperfect module in a GAN setup. Our experiments show that using a segmentation scheme that performs reasonably well (such as FCN [28]) is sufficient to ensure semantic consistency, leading to better translations. Further, we also propose to train the segmentation module jointly with the discriminator; as a result, its accuracy improves along with the generator-discriminator pair. A careful reader will have noticed that we are in fact solving a chicken-and-egg problem: on the one hand, we use the GAN to improve semantic segmentation, while on the other hand, we use segmentation to improve image-to-image translation. To clarify, we do not assume an accurate segmentation model; some model that performs reasonably well suffices, and such a model can be obtained by training a semantic segmentation network in a supervised setup using limited data; for example, we use about 1K annotated images in our experiments on the Cityscapes dataset. Our goal is to use this model to improve domain adaptation, so that we can adapt a large number of synthetic images to the target domain and train a better segmentation model on it.

The segmentation models can help us even further. As alluded to above, the main challenge in standard image translation models is the inability of the network to find the correct mode mapping. We explore this facet of our framework by introducing semantic dropout: stochastically blanking out semantic classes from the inputs so that the network learns to map specific classes independently. We present experiments on a variety of image-to-image translation tasks and show that our scheme outperforms those using Cycle-GAN significantly.

Before moving on, we summarize below our main contributions:

  • We introduce a novel feedback to the generator in GANs using predictions from a semantic segmentation model.

  • We propose a GAN architecture that includes a segmentation module, with the whole framework trained in an end-to-end manner.

  • We introduce semantic dropout for improving our consistency loss.

  • We present experiments on several image-to-image translation tasks, demonstrating state-of-the-art results (sometimes by more than 20% in FCN score against Cycle-GAN). Further, we provide experiments using the proposed translation for training semantic segmentation models on large synthetic datasets, and show that our translations lead to significantly better segmentation models than state-of-the-art models (by 4-6% in mean IoU score).

2 Related Work

GANs [15, 43, 35, 1] allow learning complex data distributions automatically by pitting two CNNs (a generator and a discriminator) against each other using an adversarial loss. The optimum of this non-convex min-max game is when the generator produces data that the discriminator cannot distinguish from real data. This key idea has led to major advancements in applications requiring data synthesis, such as representation learning [35], image generation [50, 51, 35], text-to-image synthesis [36], inpainting [34], super-resolution [22], face-synthesis [27], style-transfer [20, 51, 40], and image editing [55].

The basic GAN [15, 35] framework is extended in Liu and Tuzel [26] to model the joint distribution of paired images from two distinct domains by coupling two GANs through weight sharing. This scheme is extended in VAE-GAN [25] for image-to-image translation using auto-encoders to embed images into a shared space, on which the generators are conditioned. SimGAN [46] replaces the noise input in traditional GANs with synthetic images and asks the generator to refine these images to look as real as possible. Similar to ours, SimGAN uses an FCN for translation consistency; however, this FCN is used to preserve the holistic structure of the images and not the class identities. In Cycle-GAN [56] and dual-GAN [53], the translations are required to be bijective. However, as noted earlier, such bijective mappings need not preserve semantic consistency. Other forms of consistency have been explored in recent works. In Sangkloy et al. [44], consistency between sketch boundaries is presented, perceptual consistency is enforced in DeepSim [10], while geometric consistency is explored in Xu et al. [52]. Self-similarity is used in Deng et al. [9], and feature-level consistency is assumed in X-GAN [41]. In Luo et al. [29], domain alignment is used.

There are several works concurrent to ours that explore other routes to semantic consistency. In PixelDA [3], DTN [47], and CyCADA [18], the translation task is required to be invariant under a pre-defined criterion – such as classifier performance. However, their tasks are different from ours and do not assume the availability of target segmenters as we do. Semantic consistency using attention is presented in DA-GAN [30]. In Sankaranarayan et al. [45], a neighborhood-preserving feature embedding is introduced. Similarly, Li et al. [23] use a semantic-aware discriminator to preserve high-level appearances. In triangle GAN [13], a semi-supervised approach is presented for paired examples. In bidirectional-GAN [42], the source class labels are used for semantic consistency. In one-sided GAN [2], the cycle-loss is replaced by neighborhood constraints. In comparison to these works, we tackle a different problem in which we assume access to (approximate) segmentation networks that can extract the source and target class labels.

To summarize, while semantic consistency has been explored from multiple facets in prior works, we are unaware of any work that explores segmentation consistency in the way we propose. We believe ours is the first work that leverages advances in the segmentation arena into the GAN framework for the image-to-image translation problem.

3 Proposed Method

In this section, we present our Sem-GAN framework. To set the stage, we first review important prior work on GANs and Cycle-GAN, on which our scheme is based.

3.1 Problem Setup

Let $A$ and $B$ be two image domains and let $X_A \subset A$ and $X_B \subset B$ be sets of samples (images) from each domain respectively. Further, let $a \in X_A$ and $b \in X_B$ denote data samples. We assume $X_A$ and $X_B$ are unpaired; however, the two domains share semantic segment classes (with plausibly varied appearances). Assuming $\mathcal{M}$ is the space of all segmentation masks with $L$ classes, let $S_A: A \to \mathcal{M}$ and $S_B: B \to \mathcal{M}$ be two functions mapping each pixel in an input image to its respective class label in the segmentation mask. If we have access to the ground-truth masks, we use $\hat{S}_A(a)$ and $\hat{S}_B(b)$ to denote these masks for inputs $a$ and $b$ respectively. Ideally, $S_A(a) = \hat{S}_A(a)$ and $S_B(b) = \hat{S}_B(b)$. In this case, $(a, \hat{S}_A(a))$ and $(b, \hat{S}_B(b))$ form an image-ground-truth pair from each domain (however, note that the pairs remain unpaired across the two domains).

3.2 Generative Adversarial Networks

A standard GAN [15] consists of two convolutional neural networks (CNNs), termed a generator and a discriminator; the former takes random noise as input to produce an image, while the latter identifies whether its input is a true or a generated image. The parameters of the generator and discriminator CNNs are optimized against an adversarial loss in a min-max game [1, 15, 35].

Extending this idea to an image-to-image translation setting, we define two generators $G_{AB}: A \to B$ and $G_{BA}: B \to A$, and two binary discriminators $D_A$ and $D_B$, where $D_A: A \cup \tilde{A} \to \{0,1\}$ and $D_B: B \cup \tilde{B} \to \{0,1\}$. Here, with a slight abuse of notation, we use $\tilde{A} = G_{BA}(X_B)$ (and similarly $\tilde{B} = G_{AB}(X_A)$) to denote the set of fake images produced by a generator for domain $A$. To learn the parameters of the generators and the discriminators, we define the following adversarial losses using binary cross-entropy:

$$\mathcal{L}_{GAN}(G_{AB}, D_B) = \mathbb{E}_{b \sim X_B}\left[\log D_B(b)\right] + \mathbb{E}_{a \sim X_A}\left[\log\left(1 - D_B(G_{AB}(a))\right)\right], \tag{1}$$

$$\mathcal{L}_{GAN}(G_{BA}, D_A) = \mathbb{E}_{a \sim X_A}\left[\log D_A(a)\right] + \mathbb{E}_{b \sim X_B}\left[\log\left(1 - D_A(G_{BA}(b))\right)\right]. \tag{2}$$

The GAN architecture for these objectives is graphically illustrated in Figure 2(a). While (1) and (2) represent a non-convex game whose optimum parameters correspond to saddle points (and are thus typically difficult to optimize), it is often seen that with suitable heuristics [15, 35, 43] and careful choices for the loss [1, 31], the problem converges to practically useful solutions.
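As a concrete numerical sketch of the adversarial objectives above (function names and the NumPy setting are ours, not the paper's implementation), the discriminator and generator losses reduce to binary cross-entropy with opposite targets:

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy; pred holds probabilities in (0, 1), target holds 0/1 labels.
    pred = np.clip(pred, 1e-7, 1.0 - 1e-7)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_on_real, d_on_fake):
    # D_B is trained to output 1 on real images b and 0 on translations G_AB(a).
    return bce(d_on_real, np.ones_like(d_on_real)) + bce(d_on_fake, np.zeros_like(d_on_fake))

def generator_loss(d_on_fake):
    # G_AB is rewarded when D_B mistakes its translations for real images.
    return bce(d_on_fake, np.ones_like(d_on_fake))
```

A confident discriminator (outputting about 0.99 on real and 0.01 on fake inputs) has a near-zero loss, while the corresponding generator loss is large, which is the tension driving the min-max game.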

(a) GAN-Loss
(b) Cycle-Loss
Figure 2: Illustrations of the objectives. The inputs are highlighted in ‘blue’ circles.

3.3 Cycle-Consistent GAN

An important problem that one often encounters when training GANs is mode collapse, which happens when the generators learn to produce a few samples from the true data domain without completely spanning it. Given that the discriminator loss does not enforce diversity in its inputs, the optimization may converge to such local solutions. Among the several workarounds proposed to tackle this problem [1, 35, 43], one that has been very promising in the image-translation setting is that of Cycle-GAN [56], which adds constraints to the GAN objective that enforce diversity implicitly. Specifically, the Cycle-GAN loss asks for the translated data to be re-translated back to their original inputs. Mathematically, this loss can be written as:

$$\mathcal{L}_{cyc}(G_{AB}, G_{BA}) = \mathbb{E}_{a \sim X_A}\left[\left\| G_{BA}(G_{AB}(a)) - a \right\|\right] + \mathbb{E}_{b \sim X_B}\left[\left\| G_{AB}(G_{BA}(b)) - b \right\|\right], \tag{3}$$

where $\|\cdot\|$ is a suitable norm. Optimizing this requirement within the GAN formulation (1)-(2) automatically demands that the generators $G_{AB}$ and $G_{BA}$ learn unique mappings that are invertible to the original inputs, thereby elegantly avoiding the collapse of the data modes. The cyclic constraints are depicted in Figure 2(b).
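The cycle constraint amounts to an L1 (or similar) round-trip error; a minimal sketch with NumPy arrays standing in for images (names and the choice of L1 are our assumptions):

```python
import numpy as np

def cycle_loss(a, b, G_AB, G_BA):
    # L1 reconstruction error after translating each image there and back.
    forward = np.mean(np.abs(G_BA(G_AB(a)) - a))   # a -> B -> A round trip
    backward = np.mean(np.abs(G_AB(G_BA(b)) - b))  # b -> A -> B round trip
    return float(forward + backward)
```

A pair of mutually inverse mappings drives this loss to zero, e.g. toy generators `G_AB(x) = x + 1` and `G_BA(x) = x - 1`; crucially, as the text notes, *any* mutually inverse pair does, including ones that swap semantic classes.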

While the cyclic constraint has resulted in several compelling image-to-image translation results on domains that are often uni-modal (such as horse-zebra, day-night, etc.), it is theoretically unclear how the scheme may perform in a multi-modal setup. This is because, when there are multiple modes in either domain, any bijective mapping between the modes will satisfy the invertibility constraint in (3), as in the example depicted in Figure 1. Recently, this assignment problem has been examined in the work of Galanti et al. [48], hypothesizing that constraining the space of possible translation functions (by controlling the capacity of the CNNs) may allow them to learn minimal-complexity mappings. While this result is interesting, it may be difficult to achieve in practice. Instead, we seek to constrain the possible mappings by guiding the generators to learn mappings that are semantically consistent with respect to a segmentation loss, viz. Sem-GAN.

3.4 Semantically-Consistent GAN

In this section, we present our Sem-GAN framework. Re-using the notation from Section 3.1, we assume to have segmentation functions $S_A$ and $S_B$ that are trained to segment images from their respective domains. We do not assume that these segmenters are necessarily trained on $X_A$ and $X_B$; they could be trained on external datasets that are similar to our domains. Using these segmenters, our semantic consistency constraint is written as:

$$\mathcal{L}_{sem}(G_{AB}, G_{BA}) = \mathbb{E}_{a \sim X_A}\left[\ell\left(S_B(G_{AB}(a)), S_A(a)\right)\right] + \mathbb{E}_{b \sim X_B}\left[\ell\left(S_A(G_{BA}(b)), S_B(b)\right)\right], \tag{4}$$

where $\ell$ is a suitable loss comparing two segmentation masks, e.g., the cross-entropy loss used in FCN [28]. Specifically, in (4), we enforce that the segmentation of $a$ by a function $S_A$ trained using images from domain $A$ should be preserved by a segmentation function $S_B$ trained on images from $B$ when applied to the translated image $G_{AB}(a)$. When ground-truth semantic labels $\hat{S}_A(a)$ and $\hat{S}_B(b)$ are available for either domain, we replace $S_A(a)$ by $\hat{S}_A(a)$ and $S_B(b)$ by $\hat{S}_B(b)$ in (4). In this case, Eq. (4) can be written as:

$$\mathcal{L}_{sem}(G_{AB}, G_{BA}) = \mathbb{E}_{a \sim X_A}\left[\ell\left(S_B(G_{AB}(a)), \hat{S}_A(a)\right)\right] + \mathbb{E}_{b \sim X_B}\left[\ell\left(S_A(G_{BA}(b)), \hat{S}_B(b)\right)\right]. \tag{5}$$
Figures 3(a) and 3(b) illustrate these two variants of our Sem-GAN losses. For convenience of depiction, we introduce additional variables to denote the outputs of the segmentation modules.

(a) Seg-Loss w/o ground truth
(b) Seg-Loss w/ ground truth
Figure 3: Two variants of our segmentation consistency model. Left: when the ground truth annotations are not used. Right: when they are used.
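The loss comparing a predicted soft segmentation with a (pseudo or ground-truth) label mask can be sketched as a per-pixel cross-entropy; the array layout and names below are our assumptions, not the paper's code:

```python
import numpy as np

def seg_cross_entropy(probs, labels):
    """probs: (H, W, L) per-pixel class probabilities, e.g. the segmenter's
    output on a translated image; labels: (H, W) integer class ids, e.g. the
    source-domain prediction or the ground-truth mask."""
    h, w, num_classes = probs.shape
    flat = np.clip(probs.reshape(-1, num_classes), 1e-7, 1.0)
    # For each pixel, pick the probability assigned to its reference class.
    picked = flat[np.arange(h * w), labels.reshape(-1)]
    return float(-np.mean(np.log(picked)))
```

A perfect one-hot prediction gives a loss near zero, while a uniform prediction over L classes gives log(L); backpropagating this error through the generator is what pushes translated 'car' pixels to still look like 'car'.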

3.5 Overall Architecture

In Figure 4, we present our complete Sem-GAN architecture, combining the three losses. Notably, we include the two segmentation modules,

, as part of the setup; which are trained end-to-end alongside the generators and the discriminators. Using hyperparameters

and , our full loss is given by:


With such an end-to-end architecture that learns the segmenters and the discriminators together, there is a subtle but important risk. Since $S_A$ and $S_B$ are learned alongside $G_{AB}$ and $G_{BA}$, it is often observed empirically that the segmenters learn unrealistic appearances as valid ground-truth classes while the generators are still in the learning phase. For example, suppose that in early epochs the generator has not yet started translating valid 'car' images and instead translates the appearance of a 'person' to the 'car' class (this is possible as we do not have paired data). When back-propagating the error to update the parameters of the segmenter using the ground-truth mask for the translated image, the segmenter may incorrectly learn to map the 'person' appearance to the 'car' class. This crucial issue may break the semantic consistency. To circumvent this problem, we optimize the segmenters using their ground-truth labels alongside updating the discriminator parameters; i.e., we do not use the generator outputs to train the segmenters until the generators are accurate.

Figure 4: Overall architecture. Note that some modules are repeated on the left and the right parts of our illustration to avoid cluttered cross-connections. Thus, blocks with the same color represent the same module.

3.6 Semantic Dropout

The availability of segmenters $S_A$ and $S_B$ allows for applying some modifications to our inputs so that the translation networks can be trained more effectively. Specifically, we could arbitrarily mask out object classes from both input images. As a result, a generator could learn to map corresponding classes individually (mode-to-mode translation, rather than translating the joint distribution of all labels together). Precisely, let $M_l$ denote a segment mask for class $l$, which is zero for all classes except class $l$ (where it is unity). To make the GAN learn class-to-class translation, we propose to transform the input images $a$ and $b$ to $a \odot M_l$ and $b \odot M_l$, where $\odot$ is the element-wise product. Next, we use these masked images in the above losses.

A problem with this scheme is that, while the network learns to transform individual classes using semantic dropout, it may miss learning the inter-class context within images. To this end, we apply the dropout stochastically: with a probability $p$, we select a label $l$ randomly from the classes common to a pair of tuples $(a, \hat{S}_A(a))$ and $(b, \hat{S}_B(b))$. Next, using the respective ground-truth masks, we create new semantic masks $M_l^a$ and $M_l^b$, which are then used to select the respective image pixels to generate $a \odot M_l^a$ and $b \odot M_l^b$ as described above. The full dropout pipeline is provided in Algorithm 1.

Algorithm 1: Semantic dropout.
Input: tuples $(a, \hat{S}_A(a))$ and $(b, \hat{S}_B(b))$; dropout probability $p$
1. $C_a \leftarrow$ labels present in $\hat{S}_A(a)$; $C_b \leftarrow$ labels present in $\hat{S}_B(b)$
2. $C \leftarrow C_a \cap C_b$  // find common labels
3. if rand() $< p$ and $|C| > 0$ then  // $|\cdot|$ is the set cardinality
4.     sample a class $l$ uniformly from $C$
5.     build masks $M_l^a$ and $M_l^b$ from the respective ground-truth masks
6.     return $a \odot M_l^a$, $b \odot M_l^b$
7. else
8.     return $a$, $b$
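A minimal NumPy sketch of the semantic dropout step (array shapes and the RNG handling are our assumptions):

```python
import numpy as np

def semantic_dropout(a, mask_a, b, mask_b, p, rng):
    """With probability p, keep only one class common to both images.
    a, b: (H, W, C) images; mask_a, mask_b: (H, W) ground-truth label maps."""
    common = np.intersect1d(np.unique(mask_a), np.unique(mask_b))
    if common.size > 0 and rng.random() < p:
        l = rng.choice(common)                   # pick a shared class l
        keep_a = (mask_a == l).astype(a.dtype)   # binary mask M_l for image a
        keep_b = (mask_b == l).astype(b.dtype)   # binary mask M_l for image b
        return a * keep_a[..., None], b * keep_b[..., None]
    return a, b                                  # no dropout this step
```

With `p = 0` the inputs pass through untouched, so the generators still see full scenes often enough to learn inter-class context.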

4 Experiments

We use three datasets and six image translation tasks to demonstrate the improvements afforded by Sem-GAN. Details of these datasets, tasks, network architectures, and our evaluation protocols follow. We also report results on improving the semantic segmentation accuracy.

4.1 Datasets

Cityscapes (CS) [6]:

consists of 5K densely annotated real-world road scene images collected from 50 European cities and annotated for 30 semantic classes. The dataset has moderate diversity in weather and lighting conditions.

Mitsubishi Precision (MP):

consists of about 20K road scene images generated by the Mitsubishi Precision Co. simulator and densely annotated for 36 semantic classes. The dataset has high-resolution images from varied weather (summer, winter, rain), lighting conditions (dawn, dusk, night), and object appearances.

Viper [37]:

(which is a recent version of the popular GTA5 dataset [38]) consists of 250K frames from driving videos in realistic virtual worlds generated by the Unity gaming engine. The dataset is densely annotated for 31 semantic classes and includes images from varied weather and lighting conditions.

4.2 Data Preparation

As the images in our datasets are of different resolutions, we resize them to a common size of 540 × 860 pixels. Further, since the synthetic images come from video sequences, nearby frames may be very similar. To this end, we uniformly sample 5K frames from each of the synthetic datasets. We map the semantic classes from all the datasets to a common subset, with the Cityscapes annotations as the reference. We find that 19 classes are common, and use only these to enforce semantic consistency. Details are available in the supplementary material. We report experiments on five bi-directional translation tasks, namely (i) CS ↔ MP, (ii) CS ↔ Viper, (iii) CS Summer ↔ MP Winter, (iv) CS Day ↔ MP Night, and (v) MP Summer ↔ MP Winter. We also present experiments on the task of mapping segmentation masks to real images (Seg → CS) to show that conditioning the generators directly on the segment labels is not a replacement for our scheme. In this case, we use unpaired translations; that is, we have sets of masks and images without correspondences. Thus, our setting is different from that of pix2pix [19]. Note that, even though we use limited ground-truth segmentation masks on the Cityscapes dataset, our problem remains unpaired as we do not assume correspondences between such image-label pairs across domains.
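The class-harmonization step (mapping each dataset's label ids onto the shared 19-class set) can be sketched as a simple lookup; the specific id pairs below are hypothetical, not the paper's actual mapping:

```python
import numpy as np

def remap_labels(mask, id_map, ignore_id=255):
    """Map dataset-specific class ids onto the common label set.
    Pixels whose class has no counterpart receive ignore_id."""
    out = np.full_like(mask, ignore_id)
    for src_id, common_id in id_map.items():
        out[mask == src_id] = common_id
    return out
```

For example, with a hypothetical mapping `{7: 0, 11: 2}`, every pixel labeled 7 in the source annotation becomes common class 0, pixels labeled 11 become class 2, and everything else is ignored during the consistency loss.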

4.3 Network Architectures

We implemented Sem-GAN on top of the code shared as part of Cycle-GAN, using PyTorch [32]. For the generators and discriminators, we use a sequence of 9 residual network blocks. The Adam optimizer [21] is used for training the networks with a small initial learning rate. The validation accuracy seems to saturate in about 50 epochs for all our tasks, except for the Seg → CS task, for which we use 200 epochs. For our segmentation networks, we use the FCN [28] implementation in PyTorch, which is cheaper and faster to train alongside the other modules in our framework than deeper networks such as Deeplab [5] and PSP-Net [54]. In FCN, we use a VGG-16 backbone and a cross-entropy loss on the final output layer for enforcing the Sem-GAN criteria.

4.4 Training, Testing, and Evaluation

We define training, validation, and test sets by randomly sampling each dataset into 85:5:10 splits. The images are cropped to a fixed size; the training inputs are cropped randomly during training (as part of data augmentation), while the validation and test images are center-cropped. The segmentation networks are pre-trained on the respective training sets to recognize 19 semantic classes. Note that we use only images from ideal conditions (well-lit and good weather) for this pre-training, while networks for other conditions (day, night, winter, etc.) are learned jointly with the other modules in Sem-GAN. For training the segmenters, we fine-tune a VGG-16 model with batches of 16 images, optimizing the parameters using stochastic gradient descent with a fixed learning rate and a momentum of 0.9. During testing, we do not use the segmentation pipeline; instead, we directly forward-pass the source images through the generators and gather the translated images for evaluation.

For quantitative evaluations, we use the semantic segmentation accuracy of the translated images, as measured by a segmentation model trained on the respective domain. To ensure unbiased evaluation, we report results using two segmentation networks, namely (i) FCN and (ii) PSP-Net [54]. The evaluation networks are trained separately from Sem-GAN on training sets from the respective domains. Using these models, we report (i) the overall accuracy (Over. Acc) – the fraction of annotated pixels predicted correctly, (ii) the average class accuracy (Avg. Acc) – the average of per-class accuracies, and (iii) the mean intersection-over-union (mIoU) score [28] over all classes. On the 19 evaluation classes, FCN achieves mIoU of 64.1%, 56.2%, and 51.7% on the test sets of the MP, Viper, and CS datasets respectively, while PSP-Net gets 73.4%, 71.1%, and 61.1%. We use 1K images randomly sampled from the Cityscapes dataset for training the respective segmentation models on this dataset.
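The three reported metrics can all be computed from a single confusion matrix; the following is a generic sketch (function name ours), not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Overall accuracy, average per-class accuracy, and mean IoU."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(gt.reshape(-1), pred.reshape(-1)):
        cm[t, p] += 1                      # rows: ground truth, cols: prediction
    tp = np.diag(cm).astype(np.float64)
    overall_acc = tp.sum() / cm.sum()
    per_class_acc = tp / np.maximum(cm.sum(axis=1), 1)
    union = cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm)
    iou = tp / np.maximum(union, 1)        # IoU = TP / (TP + FP + FN)
    return overall_acc, per_class_acc.mean(), iou.mean()
```

Note that mIoU is the strictest of the three: a class-swapping translation (e.g. trees rendered as sky) costs both a false negative and a false positive, so it shrinks the union-normalized score faster than plain pixel accuracy.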

4.5 Semantic Dropout

We first analyze the merit of semantic dropout. This scheme has a parameter $p$, the probability of dropping segments; a higher value of $p$ drops segments too frequently, and as a result, the generators may not be able to learn their spatial contexts. To this end, in Figures 5(a) and 5(b), we plot the mIoU for the MP ↔ CS tasks against varying $p$. As is clear from the plots, semantic dropout improves translation performance significantly; e.g., on the CS → MP task, the gap between the best and worst settings of $p$ is 20%. We also see that higher values of $p$ show lower accuracy. We therefore select $p$ per task by validation: one value for CS → MP, and another for MP → CS and Viper → CS.

Task          Scheme        | A→B: Avg. Acc  Over. Acc  mIoU | B→A: Avg. Acc  Over. Acc  mIoU
MP↔CS         VAE-GAN       |  42.6   59.4   13.2  |  13.2   30.9    6.1
              Style-Trans.  |  44.5   72.1   26.3  |  15.6   27.8    6.3
              Cycle-GAN     |  36.7   56.2   16.2  |  19.5   36.9    7.2
              Sem-GAN       |  51.5   71.9   34.1  |  29.1   58.4   18.3
              Sem-GAN+SM    |  60.7   80.2   40.2  |  29.4   67.8   19.3
Viper↔CS      VAE-GAN       |  31.3   54.5   13.4  |  18.9   36.4    8.6
              Style-Trans.  |  29.3   64.7   13.7  |  17.9   61.6   11.0
              Cycle-GAN     |  23.6   54.1    9.2  |  21.0   63.9   13.4
              Sem-GAN       |  38.8   82.0   24.0  |  27.4   80.3   20.4
              Sem-GAN+SM    |  42.5   84.2   28.4  |  27.7   81.6   21.5
MP(N)↔CS(D)   VAE-GAN       |  22.6   39.6    7.4  |  10.7   20.0    5.5
              Cycle-GAN     |  32.8   47.5   10.7  |  14.8   48.2    7.1
              Sem-GAN       |  54.1   79.7   32.7  |  27.6   78.3   20.5
              Sem-GAN+SM    |  56.2   80.0   36.7  |  28.6   78.1   20.2
MP(W)↔CS(S)   VAE-GAN       |  26.2   48.5    8.0  |  9.08   41.1    4.7
              Cycle-GAN     |  27.1   65.9   13.2  |  12.8   51.9    7.1
              Sem-GAN       |  51.3   85.9   32.3  |  22.4   76.4   16.2
              Sem-GAN+SM    |  50.1   84.2   34.2  |  22.5   72.1   16.9
MP(S)↔MP(W)   VAE-GAN       |  53.0   87.1   41.8  |  57.6   75.9   45.2
              Cycle-GAN     |  60.0   74.6   47.4  |  61.8   91.0   51.0
              Sem-GAN       |  53.1   75.9   43.4  |  62.9   92.2   52.3
              Sem-GAN+SM    |  54.7   75.1   45.5  |  63.2   92.3   53.3
Seg→CS        VAE-GAN       |  7.36   48.0    4.0  |  NA     NA     NA
              Cycle-GAN     |  12.6   37.7    7.8  |  NA     NA     NA
              Sem-GAN       |  35.6   75.0   26.6  |  NA     NA     NA

Table 1: Results using our Sem-GAN and semantic dropout (SM) against the state-of-the-art Cycle-GAN [56], VAE-GAN [25], and style transfer model [20]. We use the FCN [28] for evaluation. All numbers are in %. W=Winter, S=Summer, D=Day, and N=Night.

Task          Scheme     | A→B: Avg. Acc  Over. Acc  mIoU | B→A: Avg. Acc  Over. Acc  mIoU
MP↔CS         Cycle-GAN  |  45.6   62.6   21.5  |  23.7   55.9   11.6
              Sem-GAN    |  50.0   77.4   34.2  |  30.1   71.2   18.6
Viper↔CS      Cycle-GAN  |  29.8   57.3   15.4  |  24.6   70.7   17.3
              Sem-GAN    |  38.3   74.0   23.2  |  30.9   80.8   23.5
MP(N)↔CS(D)   Cycle-GAN  |  34.1   48.2   15.5  |  19.0   57.2   8.97
              Sem-GAN    |  49.8   74.0   27.9  |  29.3   77.5   20.2
MP(W)↔CS(S)   Cycle-GAN  |  33.9   63.3   14.8  |  13.4   52.1   7.39
              Sem-GAN    |  48.3   76.8   26.2  |  23.5   75.8   16.9
MP(S)↔MP(W)   Cycle-GAN  |  64.6   76.7   51.1  |  65.6   91.6   54.8
              Sem-GAN    |  57.9   76.7   57.2  |  63.2   90.7   51.9
Seg→CS        Cycle-GAN  |  16.2   46.0   9.92  |  NA     NA     NA
              Sem-GAN    |  19.7   55.0   13.4  |  NA     NA     NA

Table 2: Comparisons between Sem-GAN and Cycle-GAN [56] using PSP-net [54] segmentation model for evaluation. W=Winter, S=Summer, D=Day, and N=Night.

Method      n/w  Road  s.walk  Bldg  wall  fence  pole  t.light  t.sign  veg.  terrain  sky  person  rider  car  truck  bus  train  m.cycle  bicycle | mIoU  Ov. Acc
CS only     FCN  85.1   38.9   60.6   0.8    1.1   0.0    0.0     0.0   65.1    7.4   36.3   19.8    0.0  62.7   0.0   0.0   0.0    0.0     0.4  | 19.9   77.4
CS+MP       FCN  87.0   41.8   64.6  14.5    0.2   0.6    0.1     0.4   68.6   11.7   67.3   11.2    0.0  63.6   0.0   0.0   0.0    0.0     0.1  | 22.7   79.3
CS+Cy(MP)   FCN  85.5   40.3   63.6   6.9    0.0   3.2    0.4     3.5   69.0    7.8   52.2   11.7    0.0  62.5   0.0   0.0   0.0    0.0     2.6  | 21.5   77.8
CS+Sm(MP)   FCN  88.1   47.0   67.8  12.8    0.5   7.1    0.0     2.0   71.1   10.0   69.0   15.4    0.0  67.6   0.0   0.0   0.0    0.0     4.0  | 24.3   80.8
CS+VP       FCN  90.2   54.1   70.3  22.6    5.9   3.4    0.0     3.4   73.4   27.2   67.1   31.4    0.3  73.2   9.1  18.7   5.6    0.2    21.2  | 30.4   83.2
CS+Cy(VP)   FCN  90.3   53.2   67.4   9.4   15.3   2.6    0.1     4.8   70.6   26.4   60.9   36.4    0.9  74.2  21.1  24.9   3.4    5.0    30.4  | 31.4   82.8
CS+Sm(VP)   FCN  92.1   59.1   71.3  21.6   19.1   4.4    0.2     5.6   74.1   30.2   70.1   36.4    1.3  76.8  24.2  20.5  11.7    4.3    30.7  | 34.4   84.6
CS only     PSP  85.2   35.2   62.9   4.1   15.4   0.5    0.0     2.6   68.6   24.3   49.0   27.2    0.0  63.6   0.0   4.8   0.0    0.0    19.7  | 24.4   78.7
CS+MP       PSP  88.8   49.5   70.2   7.0    4.7   9.0    0.0    13.4   72.8   20.2   74.9   38.2    0.0  73.5   0.0   3.2   0.0    0.0    31.5  | 29.3   82.4
CS+Sm(MP)   PSP  90.3   54.2   72.4  17.4    8.0  16.6    0.1    17.9   75.8   23.6   74.2   42.8    8.5  74.3   0.0  17.3   0.0    0.0    36.1  | 33.1   84.3
CS+VP       PSP  91.5   54.2   74.8  23.8    7.4  18.3    3.0    13.7   76.9   24.8   66.5   48.6   22.2  82.1  35.7  19.7  28.2    6.4    42.7  | 39.0   85.6
CS+Sm(VP)   PSP  93.4   63.4   76.4  27.3   11.7  23.6   15.8    23.6   78.2   32.0   78.4   52.2   26.2  84.0  33.4  33.4  30.1   18.1    42.4  | 44.4   87.4

Table 3: Training segmentation models using adapted images. We adapt the synthetic MP and VIPER (VP) datasets to the Cityscapes (CS) domain. Sm(MP) and Cy(MP) denote adaptation of all images in MP to the CS domain with our Sem-GAN and Cycle-GAN, respectively. "only" refers to using images directly from that domain (without adaptation). We use two segmentation CNNs for evaluation: "FCN" is VGG-FCN8s and "PSP" is PSPNet. We also show per-class accuracy on the 19 common classes across the three datasets.
(a) CS→MP
(b) MP→CS
(c) MP→CS
(d) VP→CS
Figure 5: Effect of semantic dropout on image translation accuracy. Right: Detailed analysis of accuracies when translating each class.

4.6 State-of-the-Art Comparisons

In Tables 1 and 2, we compare Sem-GAN against three state-of-the-art image translators: (i) Cycle-GAN [56], (ii) VAE-GAN [25], and (iii) style transfer using perceptual losses [20]. We also report performance with and without semantic dropout (SM). As is clear, Sem-GAN (+ SM) outperforms Cycle-GAN on almost all tasks, especially on the challenging mIoU criterion. Specifically, we find that on the MP→CS and Viper→CS tasks, our scheme is nearly 20% better in classification accuracy. Similar results are observed on the other tasks as well, except for the MP(Summer)→MP(Winter) translation. In this case, the source and target domains are inherently the same, except for simulated snow in the latter, which can be undone by the generator, thereby perfectly aligning the domains. In Figures 5(b) and 5(c), we analyze the per-class IoU for the MP→CS task. Note that not all classes are present in our (randomly chosen) test set. We see that Sem-GAN shows superior translations on most classes. In Figure 6, we present qualitative results. On the Mask→CS task, Sem-GAN uses the error signal from the segmenters to improve the appearance of the generated segments, as demonstrated by the results in Tables 1 and 2, leading to better results than the other models.
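The three numbers reported per direction (average per-class accuracy, overall pixel accuracy, and mIoU) follow the standard definitions computed from a confusion matrix over the segmentation classes. A minimal sketch (the function name and the toy 3-class matrix are ours, not the paper's):

```python
import numpy as np

def segmentation_scores(conf):
    """Compute average class accuracy, overall accuracy, and mIoU from a
    KxK confusion matrix where conf[i, j] counts pixels of ground-truth
    class i predicted as class j."""
    conf = conf.astype(np.float64)
    tp = np.diag(conf)            # correctly labelled pixels per class
    gt = conf.sum(axis=1)         # ground-truth pixels per class
    pred = conf.sum(axis=0)       # predicted pixels per class
    valid = gt > 0                # skip classes absent from the test set
    overall_acc = tp.sum() / conf.sum()
    avg_acc = np.mean(tp[valid] / gt[valid])
    iou = tp[valid] / (gt[valid] + pred[valid] - tp[valid])
    return avg_acc, overall_acc, iou.mean()

# Toy 3-class confusion matrix, purely for illustration.
conf = np.array([[8, 1, 1],
                 [2, 6, 2],
                 [0, 0, 10]])
avg, ov, miou = segmentation_scores(conf)
```

Note that skipping absent classes matters here, since (as discussed above) not all 19 classes appear in the randomly chosen test sets.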

(a) CS (orig)
(b) CS→MP Cy-GAN
(c) CS→MP Sem-GAN
(d) VP (orig)
(f) VP→CS Sem-GAN
(g) Segment (M)ask
(h) M→CS Cy-GAN
(i) M→CS Sem-GAN
(j) MP (orig)
(k) MP→CS
(l) CS (orig)
(m) CS→MP
(n) MP (orig)
(o) MP→CS
(p) CS (orig)
(q) CS→MP
Figure 6: Qualitative results; (a–c) CS→MP, (d–f) Viper→CS, (g–i) segment mask→CS, (j–m) Night→Day, and (n–q) Winter→Summer. More results in the supplementary material.

4.7 Improvements on Semantic Segmentation

Next, we analyze the merit of Sem-GAN for the original task, namely training semantic segmentation models via synthetic data. Our analysis is loosely based on [18], but uses our datasets and evaluation models. We use 10K images from the two synthetic datasets and 200 images from the Cityscapes (CS) dataset. We translate the synthetic images (source) to the CS domain (as in [18]) and use the source ground truth labels for training two segmentation models, which we evaluate on a test set of 500 CS images. All models are trained using SGD with a learning rate of 0.0001 for 50 epochs. As is clear from Table 5, Cycle-GAN is sometimes seen to reduce performance (e.g., Cy(VP)) relative to no adaptation, likely due to the correspondence-mismatch problems alluded to earlier. Sem-GAN, however, improves image adaptation significantly compared to Cycle-GAN, and leads to more accurate segmentation models than no adaptation; e.g., "CS only" with FCN8s achieves 19.9% mIoU, while using Sem-GAN, i.e., CS+Sm(VP), this improves to 34.4%. Similarly, with PSPNet, "CS only" improves from 24.4% to 44.4% with CS+Sm(VP), a 20% gain. Further, note that the improvement from CS+VP to CS+Sm(VP) is nearly 6%; the former uses the VP images without any adaptation. More comparisons and results are available in the supplementary material. The code for the paper will be made publicly available.
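The training recipe above (plain SGD, learning rate 1e-4, 50 epochs, over the union of real CS images and translated synthetic images with their source ground truths) can be sketched framework-agnostically. The `model.backward`/`model.params` interface is a hypothetical stand-in for whichever segmentation network is used:

```python
import random

def train_segmenter(model, real_data, translated_data, lr=1e-4, epochs=50):
    """Fine-tune a segmentation model on the union of real target-domain
    pairs and source images translated into the target domain (the labels
    for the latter come from the source ground truth). Plain SGD."""
    data = list(real_data) + list(translated_data)
    for _ in range(epochs):
        random.shuffle(data)
        for image, label in data:
            grads = model.backward(image, label)   # d(loss)/d(params)
            for name, g in grads.items():
                model.params[name] -= lr * g       # vanilla SGD update
    return model
```

In practice the update would be done by an optimizer in a deep-learning framework; the point of the sketch is only the data mixing and the fixed-rate SGD schedule used in the experiments.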

5 Conclusions

We presented an image-to-image translation framework that enforces semantic consistency, via segment class identities, to achieve realistic translations. Modeling this consistency as a novel loss, we presented an end-to-end learnable GAN architecture. We demonstrated the advantages of our framework on three datasets and six translation tasks. Our results clearly demonstrate that semantic consistency, as proposed in this paper, is important for ensuring the quality of the translation.



Appendix A Additional Details and Comparisons

As mentioned in the main paper, we use 19 semantic segment classes, defined with respect to the Cityscapes dataset, for training our Sem-GAN framework. These classes are: 1. 'road', 2. 'sidewalk', 3. 'building', 4. 'wall', 5. 'fence', 6. 'pole', 7. 'traffic light', 8. 'traffic sign', 9. 'vegetation', 10. 'terrain', 11. 'sky', 12. 'person', 13. 'rider', 14. 'car', 15. 'truck', 16. 'bus', 17. 'train', 18. 'motorcycle', 19. 'bicycle'. Below, we provide the per-class IoU for the following tasks: Viper↔CS (Figure 7), MP↔CS (Figure 8), CS summer↔MP winter (Figure 9), and Seg→Image (CS) (Figure 10).
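For reference, the class list above can be kept as a simple lookup table; a minimal listing (the 0-based indexing is our implementation choice here, not something the paper prescribes):

```python
# The 19 Cityscapes evaluation classes used for training Sem-GAN,
# in the order listed above (0-based indexing is an illustration choice).
CITYSCAPES_CLASSES = [
    'road', 'sidewalk', 'building', 'wall', 'fence', 'pole',
    'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky',
    'person', 'rider', 'car', 'truck', 'bus', 'train',
    'motorcycle', 'bicycle',
]

# Map class name -> integer label id used in segmentation masks.
CLASS_TO_ID = {name: i for i, name in enumerate(CITYSCAPES_CLASSES)}
```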

(a) Viper→CS
(b) CS→Viper
Figure 7: Per-class IoU scores on the Viper↔CS task.
(a) MP→CS
(b) CS→MP
Figure 8: Per-class IoU scores on the MP↔CS task.
(a) Summer (CS)→Winter (MP)
(b) Winter (MP)→Summer (CS)
Figure 9: Per-class IoU scores on the CS summer↔MP winter task.
(a) Seg→CS
Figure 10: Per-class IoU scores on the Segmentation mask→Image (CS) task.

Appendix B Ablative Analysis

In Table 4, we provide an ablative study of the various elements in our framework. Interestingly, we find that adding the segmentation information into the translation process significantly improves the accuracy over Cycle-GAN: 'no cycle + seg' is about 12% better (in mean accuracy) than Cycle-GAN. This is perhaps because having segmentation information makes the translation process 'easier', whereas without it Cycle-GAN has to figure out the mapping between the various segments automatically, which may lead to incorrect mappings. Adding cycle consistency improves the performance further, and seg + cycle + SM performs best. (Note that when we say 'no cycle', we mean that we use neither the cycle-consistency nor the identity constraint, as in the implementation of Cycle-GAN.)
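The ablation rows correspond to toggling terms of the total objective. A schematic sketch of how those components combine (the weight values are hypothetical placeholders, not the paper's settings; only the on/off structure mirrors the ablation):

```python
def sem_gan_objective(losses, use_cycle=True, use_seg=True, weights=None):
    """Assemble the total training objective from its components.
    `losses` maps component name -> scalar value. The GAN term is always
    on; as in the ablation, 'no cycle' drops both the cycle-consistency
    and the identity terms; `use_seg` toggles the semantic-consistency
    loss computed from the segmenters."""
    w = {'gan': 1.0, 'cycle': 10.0, 'identity': 5.0, 'seg': 1.0}
    if weights:
        w.update(weights)
    total = w['gan'] * losses['gan']
    if use_cycle:  # cycle-consistency and identity constraints together
        total += w['cycle'] * losses['cycle'] + w['identity'] * losses['identity']
    if use_seg:    # semantic-consistency term from segment class identities
        total += w['seg'] * losses['seg']
    return total
```

Under this view, the Table 4 rows are `use_cycle=False, use_seg=False` plus cycle (Cycle-GAN), `use_cycle=False, use_seg=True` ('no cycle + seg'), and `use_cycle=True, use_seg=True` (cycle + seg), with semantic dropout applied on top of the last configuration.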

| Component | Viper→CS: mean Acc. | Viper→CS: mIoU | CS→Viper: mean Acc. | CS→Viper: mIoU |
|---|---|---|---|---|
| Cycle-GAN | 23.6 | 9.2 | 21.0 | 13.4 |
| No cycle + seg | 35.6 | 22.5 | 25.9 | 19.3 |
| Cycle + seg | 38.8 | 24.0 | 27.4 | 20.4 |
| Cycle + seg + SM | 42.5 | 28.4 | 27.7 | 21.5 |

Table 4: Ablative study of the influence of the various components in our model on the Viper↔CS task. The left part of the table is the Viper→CS translation and the right part is CS→Viper. All numbers are in %.

| Method | n/w | road | s.walk | bldg | wall | fence | pole | t. light | t. sign | veg. | terrain | sky | person | rider | car | truck | bus | train | m.cycle | bicycle | mIoU | Ov. Acc | Fq.W. Acc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CS only | F | 85.1 | 38.9 | 60.6 | 0.8 | 1.1 | 0.0 | 0.0 | 0.0 | 65.1 | 7.4 | 36.3 | 19.8 | 0.0 | 62.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4 | 19.9 | 77.4 | 64.1 |
| CS+MP | F | 87.0 | 41.8 | 64.6 | 14.5 | 0.2 | 0.6 | 0.1 | 0.4 | 68.6 | 11.7 | 67.3 | 11.2 | 0.0 | 63.6 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 22.7 | 79.3 | 66.8 |
| CS+Cy(MP) | F | 85.5 | 40.3 | 63.6 | 6.9 | 0.0 | 3.2 | 0.4 | 3.5 | 69.0 | 7.8 | 52.2 | 11.7 | 0.0 | 62.5 | 0.0 | 0.0 | 0.0 | 0.0 | 2.6 | 21.5 | 77.8 | 65.6 |
| CS+Sm(MP) | F | 88.1 | 47.0 | 67.8 | 12.8 | 0.5 | 7.1 | 0.0 | 2.0 | 71.1 | 10.0 | 69.0 | 15.4 | 0.0 | 67.6 | 0.0 | 0.0 | 0.0 | 0.0 | 4.0 | 24.3 | 80.8 | 69.1 |
| CS+VP | F | 90.2 | 54.1 | 70.3 | 22.6 | 5.9 | 3.4 | 0.0 | 3.4 | 73.4 | 27.2 | 67.1 | 31.4 | 0.3 | 73.2 | 9.1 | 18.7 | 5.6 | 0.2 | 21.2 | 30.4 | 83.2 | 72.6 |
| CS+Cy(VP) | F | 90.3 | 53.2 | 67.4 | 9.4 | 15.3 | 2.6 | 0.1 | 4.8 | 70.6 | 26.4 | 60.9 | 36.4 | 0.9 | 74.2 | 21.1 | 24.9 | 3.4 | 5.0 | 30.4 | 31.4 | 82.8 | 71.9 |
| CS+Sm(VP) | F | 92.1 | 59.1 | 71.3 | 21.6 | 19.1 | 4.4 | 0.2 | 5.6 | 74.1 | 30.2 | 70.1 | 36.4 | 1.3 | 76.8 | 24.2 | 20.5 | 11.7 | 4.3 | 30.7 | 34.4 | 84.6 | 74.9 |
| CS only | P | 85.2 | 35.2 | 62.9 | 4.1 | 15.4 | 0.5 | 0.0 | 2.6 | 68.6 | 24.3 | 49.0 | 27.2 | 0.0 | 63.6 | 0.0 | 4.8 | 0.0 | 0.0 | 19.7 | 24.4 | 78.7 | 65.7 |
| MP only | P | 51.2 | 12.9 | 40.7 | 4.6 | 0.0 | 4.8 | 0.2 | 9.3 | 50.5 | 2.6 | 10.3 | 0.0 | 0.0 | 59.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 13.0 | 56.0 | 42.0 |
| Cy(MP) | P | 83.2 | 32.2 | 49.3 | 7.3 | 0.0 | 5.1 | 0.8 | 14.0 | 43.9 | 10.0 | 28.1 | 0.0 | 0.0 | 50.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 17.1 | 69.1 | 56.4 |
| Sm(MP) | P | 85.4 | 40.9 | 63.8 | 11.9 | 0.0 | 8.5 | 1.2 | 10.0 | 69.9 | 13.9 | 64.0 | 0.0 | 0.0 | 63.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 22.8 | 78.2 | 66.0 |
| CS+MP | P | 88.8 | 49.5 | 70.2 | 7.0 | 4.7 | 9.0 | 0.0 | 13.4 | 72.8 | 20.2 | 74.9 | 38.2 | 0.0 | 73.5 | 0.0 | 3.2 | 0.0 | 0.0 | 31.5 | 29.3 | 82.4 | 71.7 |
| CS+Cy(MP) | P | 89.8 | 51.3 | 71.3 | 14.1 | 4.2 | 11.2 | 0.7 | 17.9 | 73.3 | 23.7 | 63.5 | 39.2 | 0.7 | 73.2 | 0.0 | 7.6 | 0.0 | 0.0 | 34.7 | 30.3 | 83.3 | 72.6 |
| CS+Sm(MP) | P | 90.3 | 54.2 | 72.4 | 17.4 | 8.0 | 16.6 | 0.1 | 17.9 | 75.8 | 23.6 | 74.2 | 42.8 | 8.5 | 74.3 | 0.0 | 17.3 | 0.0 | 0.0 | 36.1 | 33.1 | 84.3 | 74.1 |
| VP only | P | 75.9 | 29.4 | 49.4 | 0.0 | 0.0 | 0.0 | 0.6 | 10.8 | 65.2 | 13.2 | 62.5 | 15.7 | 0.0 | 58.3 | 12.7 | 6.2 | 0.0 | 0.1 | 0.0 | 21.1 | 71.8 | 57.8 |
| Cy(VP) | P | 85.2 | 35.2 | 53.6 | 0.0 | 4.7 | 0.0 | 4.3 | 9.6 | 22.3 | 15.3 | 20.3 | 11.3 | 0.0 | 66.4 | 11.8 | 3.0 | 0.0 | 4.1 | 0.0 | 18.3 | 70.9 | 56.8 |
| Sm(VP) | P | 87.9 | 37.0 | 56.5 | 0.0 | 4.5 | 0.0 | 2.2 | 15.6 | 42.6 | 24.8 | 40.4 | 20.0 | 0.0 | 74.1 | 19.9 | 14.7 | 0.0 | 4.6 | 0.0 | 23.4 | 76.2 | 62.8 |
| CS+VP | P | 91.5 | 54.2 | 74.8 | 23.8 | 7.4 | 18.3 | 3.0 | 13.7 | 76.9 | 24.8 | 66.5 | 48.6 | 22.2 | 82.1 | 35.7 | 19.7 | 28.2 | 6.4 | 42.7 | 39.0 | 85.6 | 76.3 |
| CS+Sm(VP) | P | 93.4 | 63.4 | 76.4 | 27.3 | 11.7 | 23.6 | 15.8 | 23.6 | 78.2 | 32.0 | 78.4 | 52.2 | 26.2 | 84.0 | 33.4 | 33.4 | 30.1 | 18.1 | 42.4 | 44.4 | 87.4 | 79.1 |

Table 5: Training segmentation models using adapted images. We adapt the synthetic MP and VIPER (VP) datasets to the Cityscapes (CS) domain. Sm(MP) and Cy(MP) denote adaptation of all images in MP to the CS domain with Sem-GAN and Cycle-GAN, respectively. "only" refers to using images directly from that domain (without adaptation). We use two segmentation CNNs: "F" is VGG-FCN8s and "P" is PSPNet.

Appendix C When Ground Truth Masks are Unavailable

As alluded to in the main paper, we do not necessarily require ground truth semantic masks for our scheme to work; we only need a segmentation model for each of the respective domains. To this end, we test this facet of our scheme on the task of translating 'horses' to 'zebras' using the dataset provided with Cycle-GAN [56], which contains about 1300 images of horses and zebras. For the segmentation models, we use an FCN network trained on the MS-COCO dataset, which has 80 semantic classes including 'horse' and 'zebra'. We do not train these models within our Sem-GAN setup. Qualitative results from this experiment are provided in Figure 11. To ensure the translations are cross-domain (that is, the source is, say, the 'horse' class and the target is the 'zebra' class), when defining the consistency criteria we switch the labels of the source segmenter (which in this case identifies 'horse') to 'zebra', and vice versa for the other translation direction. For this task, we trained both Cycle-GAN and Sem-GAN for 200 epochs, using a 9-block ResNet for the generator.
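The label-switching step described above amounts to relabelling the source segmenter's output before comparing it against the target segmenter's prediction. A minimal sketch, using a plain nested-list mask and generic label ids (the ids 1 and 2 below are illustrative stand-ins, not the actual MS-COCO ids for 'horse' and 'zebra'):

```python
def swap_labels(seg_mask, pairs):
    """Return a copy of a predicted segmentation mask with each class id
    in `pairs` replaced by its cross-domain counterpart, e.g. so that a
    'horse' prediction on the source is compared against the 'zebra'
    class expected in the translated image. `pairs` is a list of
    (id_a, id_b) tuples; ids not listed pass through unchanged."""
    lut = {a: b for a, b in pairs}
    lut.update({b: a for a, b in pairs})   # swap works in both directions
    return [[lut.get(px, px) for px in row] for row in seg_mask]
```

Unpaired classes such as 'person' pass through unchanged, which is consistent with the observation below that Sem-GAN benefits when humans appear in the scene.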

A point to note in this task is that, while the results of Cycle-GAN and Sem-GAN are more or less similar, the translation with Sem-GAN is slightly better (qualitatively) when multiple classes are present in the images, such as humans (see, for example, the last two rows in Figure 11). This is because the MS-COCO segmentation dataset includes a 'person' class as well. While the results seem better, there still remains a lot to improve, especially in capturing the structure of the objects within a segment.

Appendix D Additional Results on Semantic Segmentation Task

In addition to the results in Table 3 of the main paper, Table 5 provides further results on the semantic segmentation task using synthetic images (translated with Cycle-GAN or Sem-GAN) for training segmentation models. The additional rows are for segmentation models trained only on translated synthetic images (without real images from the target domain or their ground truths), such as Cy(VP) and Sm(VP). Interestingly, we find that using only Sm(VP) is better than using VP alone (23.4% against 21.1% mIoU), and going from MP only to Sm(MP) increases mIoU from 13.0% to 22.8%, clearly demonstrating that Sem-GAN leads to much better domain adaptation than using the synthetic images directly. We also see that Cy(MP) and Cy(VP) are inferior in performance.

Appendix E Qualitative Results

In Figures 12–16, we provide additional qualitative results on the tasks described in the main paper.

(a) Horse→Zebra
(b) Cycle GAN
(c) Sem GAN
(d) Zebra→Horse
(e) Cycle GAN
(f) Sem GAN
(g) Horse→Zebra
(h) Cycle GAN
(i) Sem GAN
(j) Horse→Zebra
(k) Cycle GAN
(l) Sem GAN
Figure 11: Translation from 'horse' to 'zebra'. Here we use a segmentation model trained on the MS-COCO dataset, as the images in this task do not come with semantic labels.
(a) Syn (Viper)→CS
(b) Cycle GAN
(c) Sem GAN
(d) Syn (Viper)→CS
(e) Cycle GAN
(f) Sem GAN
(g) Syn (Viper)→CS
(h) Cycle GAN
(i) Sem GAN
(j) Syn (Viper)→CS
(k) Cycle GAN
(l) Sem GAN
Figure 12: Translation from Synthetic (Viper) to Real (CS). We show the viper image (left), the translation by Cycle GAN (middle) and that by Sem GAN (right).
(a) Seg→image (CS)
(b) Cycle GAN
(c) Sem GAN
(d) Seg→image (CS)
(e) Cycle GAN
(f) Sem GAN
(g) Seg→image (CS)
(h) Cycle GAN
(i) Sem GAN
(j) Seg→image (CS)
(k) Cycle GAN
(l) Sem GAN
Figure 13: Translation from segmentation mask to real (Cityscapes). We show the seg mask (left), the translation by Cycle GAN (middle) and that by Sem GAN (right).
(a) CS (day)→night
(b) Sem GAN
(c) CS (day)→night
(d) Sem GAN
(e) CS (day)→night
(f) Sem GAN
(g) CS (day)→night
(h) Sem GAN
Figure 14: Translation from Cityscapes real image (left) to MP synthetic night image (right).
(a) Syn (night)→CS (day)
(b) Sem GAN
(c) Syn (night)→CS (day)
(d) Sem GAN
Figure 15: Translation from MP synthetic night image (left) to Cityscapes real image (right).
(a) Real (CS)→Winter
(b) Synthetic Winter
(c) Real (CS)→Winter
(d) Synthetic Winter
(e) Winter (MP)→CS
(f) Sem GAN
(g) Winter (MP)→CS
(h) Sem GAN
Figure 16: Rows 1–2: Translation from Cityscapes real image (left) to MP synthetic winter image (right). Rows 3–4: Translation from MP synthetic winter image (left) to the real (CS) domain.