Most existing methods for conditional image synthesis are only able to generate a single plausible image for any given input, or at best a fixed number of plausible images. In this paper, we focus on the problem of generating images from semantic segmentation maps and present a simple new method that can generate an arbitrary number of images with diverse appearance for the same semantic layout. Unlike most existing approaches which adopt the GAN framework, our method is based on the recently introduced Implicit Maximum Likelihood Estimation framework. Compared to the leading approach, our method is able to generate more diverse images while producing fewer artifacts despite using the same architecture. The learned latent space also has sensible structure despite the lack of supervision that encourages such behaviour.
Conditional image synthesis is a problem of great importance in computer vision. In recent years, the community has made great progress towards generating images of high visual fidelity on a variety of tasks. However, most proposed methods are only able to generate a single image given each input, even though most image synthesis problems are ill-posed, i.e., there are multiple equally plausible images that are consistent with the same input. Ideally, we should aim to predict a distribution of all plausible images rather than just a single plausible image, a problem known as multimodal image synthesis. This problem is hard for two reasons:
Model: Most state-of-the-art approaches for image synthesis use generative adversarial nets (GANs), which suffer from the well-documented issue of mode collapse. In the context of conditional image synthesis, this leads to a model that generates only a single plausible image for each given input regardless of the latent noise, and that fails to learn the distribution of plausible images.
Data: Multiple different ground truth images for the same input are not available in most datasets. Instead, only one ground truth image is given, and the model has to learn to generate other plausible images in an unsupervised fashion.
In this paper, we focus on the problem of multimodal image synthesis from semantic layouts, where the goal is to generate multiple diverse images for the same semantic layout. Existing methods are either only able to generate a fixed number of images or are difficult to train due to the need to balance the training of several different neural nets that serve opposing roles.
To sidestep these issues, unlike most image synthesis approaches, we step outside of the GAN framework and propose a method based on the recently introduced method of Implicit Maximum Likelihood Estimation (IMLE) . Unlike GANs, IMLE by design avoids mode collapse and is able to train the same types of neural net architectures as generators in GANs, namely neural nets with random noise drawn from an analytic distribution as input.
This approach offers two advantages:
First, because every ground truth image is matched to a nearby sample during training, IMLE avoids mode collapse by design. Second, unlike , which requires the simultaneous training of three neural nets that serve opposing roles, our model is much simpler: it consists of a single neural net. Consequently, training is much more stable.
Most modern image synthesis methods are based on generative adversarial nets (GANs). Most of these methods are capable of producing only a single image for each given input, due to the problem of mode collapse. Various work has explored conditioning on different types of information. Some methods condition on a scalar that carries little information, such as object category and attributes [23, 9, 5]. Other methods condition on richer labels, such as text descriptions, surface normal maps, previous frames in a video [22, 31] and images [34, 13, 37]. Some methods only condition on input images in the generator, but not in the discriminator [25, 19, 36, 20]. [15, 26, 28] explore conditioning on attributes that can be modified manually by the user at test time; these methods are not true multimodal methods because they require manual changes to the input (rather than just sampling from a fixed distribution) to generate a different image.
Another common approach to image synthesis is to treat it as a simple regression problem. To ensure high perceptual quality, the loss is usually defined on some transformation of the raw pixels. This paradigm has been applied to super-resolution [1, 14], style transfer and video frame prediction [30, 24, 8]. These methods are by design unimodal methods because neural nets are functions, and so can only produce point estimates.
Various methods have been developed for the problem of image synthesis from semantic layouts. For example, Karacan et al.  developed a conditional GAN-based model for generating images from semantic layouts and labelled image attributes. It is important to note that the method requires supervision on the image attributes and is therefore a unimodal method. Isola et al.  developed a conditional GAN that can generate images solely from semantic layout. However, it is only able to generate a single plausible image for each semantic layout, due to the problem of mode collapse in GANs. Wang et al.  further refined the approach of , focusing on the high-resolution setting. While these methods are able to generate images of high visual fidelity, they are all unimodal methods.
A simple approach to generating a fixed number of different outputs for the same input is to use different branches or models for each desired output. For example,  proposed a model that outputs a fixed number of different predictions simultaneously, an approach adopted by Chen and Koltun  to generate different images for the same semantic layout. Unlike most approaches,  did not use the GAN framework; instead it used a simple feedforward convolutional network. On the other hand, Ghosh et al.  use a GAN framework in which multiple generators are introduced, each of which generates a different mode. The above methods all have two limitations: (1) they are only able to generate a fixed number of images for the same input, and (2) they cannot generate continuous changes.
A number of GAN-based approaches propose adding learned regularizers that discourage mode collapse. BiGAN/ALI [6, 7] trains a model to reconstruct the latent code from the image; however, when applied to the conditional setting, significant mode collapse still occurs because the encoder is not trained to optimality and so cannot perfectly invert the generator. VAE-GAN  combines a GAN with a VAE, which does not suffer from mode collapse. However, image quality suffers because the generator is trained on latent code sampled from the encoder/approximate posterior and is never trained on latent code sampled from the prior. At test time, only the prior is available, resulting in a mismatch between training and test conditions. Zhu et al.  proposed BicycleGAN, which combines both of the above approaches. While this alleviates the above issues, it is difficult to train, because it requires training three different neural nets simultaneously, namely the generator, the discriminator and the encoder. Because they serve opposing roles and effectively regularize one another, it is important to strike just the right balance, which makes the method hard to train successfully in practice.
Some methods predict a discretized marginal distribution over the colours of each individual pixel. While this approach is able to capture multimodality in the marginal distributions, ensuring global consistency between different parts of the image is not easy, since there are correlations between the colours of different pixels. This approach cannot learn such correlations because it does not learn the joint distribution over the colours of all pixels.
Given a semantic segmentation map, our goal is to generate arbitrarily many plausible images that are all consistent with the segmentation. More formally, given a segmentation map $S \in \{0,1\}^{H \times W \times C}$, where $H \times W$ is the size of the image and $C$ is the number of semantic classes, the goal is to generate a plausible colour image $I \in \mathbb{R}^{H \times W \times 3}$ that is consistent with $S$. Each pixel in the segmentation map is represented as a one-hot encoding of the semantic category it belongs to, that is, $\sum_{c=1}^{C} S_{i,j,c} = 1$ for every pixel $(i,j)$.
We consider the conditional probability distribution $p(I \mid S)$. A plausible image that is consistent with $S$ is a mode of this distribution; because there could be many plausible images that are consistent with the same segmentation, $p(I \mid S)$ is usually multimodal. A method that performs unimodal prediction can be seen as producing a point estimate of this distribution. The ability to generate a high-quality image essentially corresponds to the ability to return an image that is close to some mode.
Because our goal is to generate multiple plausible images, producing a point estimate of $p(I \mid S)$ is not enough. Instead, we need to model the full distribution.
We model $p(I \mid S)$ using a probabilistic model with parameters $\theta$ (we will describe what this model looks like later). We will hereafter denote the distribution represented by the model as $p_\theta(I \mid S)$. Training the model is equivalent to estimating $\theta$; a standard method for parameter estimation is maximum likelihood estimation (MLE). That is, we want to maximize the log-likelihood of the ground truth image $I_i$ that corresponds to each semantic layout $S_i$. Let $\{(S_i, I_i)\}_{i=1}^{n}$ denote the training set; we'd like to train the model by optimizing the following objective:
$$\hat{\theta} = \arg\max_{\theta} \sum_{i=1}^{n} \log p_\theta(I_i \mid S_i)$$
The probabilistic model that we use is an implicit probabilistic model, which is defined directly in terms of a sampling procedure. This contrasts with classical probabilistic models (sometimes known as prescribed probabilistic models), which are defined in terms of probability density functions (PDFs). Our implicit model is defined in terms of the following sampling procedure:
1. Sample $z \sim \mathcal{N}(0, \mathbf{I})$.
2. Return $\tilde{I} = T_\theta(S, z)$ as a sample.
Here $T_\theta$ represents a deep neural network, which takes the label map $S$ and random vector $z$ as input and outputs the synthesized image $\tilde{I}$. In other words, the model is the same as the generator in conditional GANs (but it does not have a discriminator and will not be trained using the GAN objective).
It is not possible to train an implicit model using MLE because log-likelihood cannot be expressed in closed form or evaluated numerically. Fortunately, recently Li and Malik  introduced Implicit Maximum Likelihood Estimation (IMLE), a method for training probabilistic models that does not need to compute the actual log-likelihood, but is equivalent to maximum likelihood under appropriate conditions.
More formally, given a set of training examples $\{y_i\}_{i=1}^{n}$ and an (unconditional) implicit probabilistic model $T_\theta$, IMLE draws $m$ i.i.d. samples $\tilde{y}_1, \ldots, \tilde{y}_m$ from the model and optimizes the parameters such that each data example is close to its nearest sample in expectation. It can be written as the optimization problem:
$$\hat{\theta} = \arg\min_{\theta} \; \mathbb{E}_{\tilde{y}_1, \ldots, \tilde{y}_m} \left[ \sum_{i=1}^{n} \min_{j \in \{1, \ldots, m\}} \left\| y_i - \tilde{y}_j \right\|_2^2 \right]$$
To apply IMLE to conditional image synthesis, we need to model all the different distributions $p(I \mid S)$ for different semantic layouts $S$. Therefore, in the conditional setting, the samples corresponding to different $S$'s should be segregated, and the nearest-neighbour search should be over only the samples that correspond to the segmentation associated with the ground truth. We also use a different distance metric $d(\cdot, \cdot)$, which is defined in Section 3.3. The modified algorithm is stated in Algorithm 1; the size of the random batch, the number of random vectors per example, the number of inner iterations, the size of the minibatch and the learning rate are its hyperparameters.
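To make the conditional IMLE procedure concrete, the following is a minimal NumPy sketch under toy assumptions: a linear map stands in for the generator, plain Euclidean distance stands in for the perceptual metric of Section 3.3, and all names and dimensions are illustrative rather than taken from our implementation.

```python
import numpy as np

# Toy sketch of conditional IMLE: the "generator" G(s, z) = W [s; z] is
# linear, and Euclidean distance stands in for the perceptual metric.
rng = np.random.default_rng(0)
d_s, d_z, d_y, n, m = 4, 2, 3, 16, 10           # toy dimensions

S = rng.normal(size=(n, d_s))                    # conditioning inputs
W_true = rng.normal(size=(d_y, d_s + d_z))
Y = np.stack([W_true @ np.concatenate([s, rng.normal(size=d_z)]) for s in S])

W = np.zeros((d_y, d_s + d_z))                   # generator parameters
lr = 0.01

def nearest_sample_loss(W, seed=1):
    # Average distance from each y_i to the nearest of m fresh samples
    # conditioned on the *same* s_i (samples are segregated per input).
    r = np.random.default_rng(seed)
    loss = 0.0
    for s, y in zip(S, Y):
        xs = np.concatenate([np.tile(s, (m, 1)), r.normal(size=(m, d_z))], axis=1)
        loss += np.min(np.sum((xs @ W.T - y) ** 2, axis=1))
    return loss / n

before = nearest_sample_loss(W)
for epoch in range(200):
    for s, y in zip(S, Y):
        xs = np.concatenate([np.tile(s, (m, 1)), rng.normal(size=(m, d_z))], axis=1)
        outs = xs @ W.T
        j = np.argmin(np.sum((outs - y) ** 2, axis=1))   # nearest sample to y
        W += lr * np.outer(y - outs[j], xs[j])           # pull it toward y
after = nearest_sample_loss(W)
```

Algorithm 1 additionally operates on random batches and performs multiple inner gradient iterations per batch; the sketch collapses these to a single update per example for clarity.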
To allow for direct comparability to Cascaded Refinement Networks (CRN), which is the leading method for multimodal image synthesis from semantic layouts, we use the same architecture as CRN, with minor modifications to convert CRN into an implicit probabilistic model. We first review CRN in Section 3.3.1 and then discuss our improvements in Section 3.3.2.
The Cascaded Refinement Network is a coarse-to-fine architecture that consists of multiple modules $M_1, \ldots, M_L$. Each module operates at one resolution, and the resolution is doubled from one module to the next. The first module operates at $4 \times 8$, and thus the resolution for module $M_i$ is $2^{i+1} \times 2^{i+2}$. All layers in the same module operate at the same resolution.
$M_1$ takes the semantic segmentation map $S$ (downsampled to $4 \times 8$) as input and produces a feature output $F_1$. Every other module $M_i$ takes the concatenation of the semantic map (downsampled to the resolution of $M_i$) and the feature output $F_{i-1}$ (upsampled to the resolution of $M_i$) as input and produces a feature output $F_i$. Note that bilinear interpolation is used for upsampling/downsampling. The final module is followed by a $1 \times 1$ convolutional layer that outputs the synthesized image with 3 channels.
Inside each module $M_i$, there are two $3 \times 3$ convolutional layers with layer normalization and leaky ReLU activation. The number of channels for $F_i$ is 1024 for $M_1$ to $M_5$, 512 for $M_6$ and $M_7$, 128 for $M_8$ and 32 for $M_9$.
CRN uses a perceptual loss function based on VGG-19 features. Given the ground truth image $I$ and synthesized image $\tilde{I}$, the loss function is:
$$\mathcal{L}(I, \tilde{I}) = \sum_{l=1}^{L} \lambda_l \left\| \Phi_l(I) - \Phi_l(\tilde{I}) \right\|_1 \tag{1}$$
Here $\Phi_l$ represents the feature output of the $l$-th of the following layers in VGG-19: 'conv1_2', 'conv2_2', 'conv3_2', 'conv4_2' and 'conv5_2'. The hyperparameters $\lambda_l$ are set such that the loss of each layer makes the same contribution to the total loss. We use this loss function as the distance metric $d(\cdot, \cdot)$ in the IMLE objective.
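As an illustration of this kind of distance metric, the sketch below computes a weighted sum of L1 feature distances; the pooling functions are hypothetical stand-ins for the fixed VGG-19 feature extractors, and the weights are chosen for illustration only.

```python
import numpy as np

# Sketch of a perceptual distance: a weighted sum of L1 distances between
# feature activations. The "features" here are toy multi-scale poolings,
# standing in for the fixed VGG-19 layers named in the text.
def perceptual_loss(img_a, img_b, feature_fns, lambdas):
    """Weighted sum of L1 distances between feature maps."""
    return sum(lam * np.abs(f(img_a) - f(img_b)).sum()
               for f, lam in zip(feature_fns, lambdas))

def pool(k):
    # Average-pool an H x W x 3 image over k x k blocks.
    return lambda x: x.reshape(x.shape[0] // k, k, x.shape[1] // k, k, 3).mean(axis=(1, 3))

feature_fns = [pool(1), pool(2), pool(4)]   # toy stand-ins for VGG layers
lambdas = [1.0, 1.0, 1.0]                   # illustrative layer weights
```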
The original CRN synthesizes only one image for the same semantic layout input. To generate multiple images for the same input, Chen and Koltun  adopted the approach of  and increased the number of output images from 1 to $k$. This allows the generation of $k$ different samples for the same input, but the number is fixed. As a result, if the number of modes is greater than $k$, some modes will be missing from the prediction.
We adopt a different approach for modelling the multimodality. Instead of increasing the number of output channels, we add additional input channels to the architecture and feed random noise via these channels. This new model can be then interpreted as an implicit probabilistic model, which we can train using conditional IMLE.
We incorporate random noise by concatenating the semantic label map $S$ with a random vector $z$ reshaped to the appropriate size. $S$ is of size $H \times W \times C$ and hence the reshaped noise should have size $H \times W \times C_z$, where $C_z$ is the number of noise channels. Let $\hat{S}$ denote the concatenation of $S$ and the reshaped noise. $M_1$ now takes $\hat{S}$ (downsampled to its resolution) as input, and the other modules take the concatenation of $\hat{S}$ (downsampled to their resolution) and the feature output $F_{i-1}$ (upsampled to their resolution) as input. Consequently, the only architectural change is to the first layer of each module, where the number of input channels increases by $C_z$.
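The input augmentation described above can be sketched as follows; the shapes, including the number of classes and of noise channels, are illustrative.

```python
import numpy as np

# Sketch: append noise channels to a one-hot label map before feeding it
# to the generator. All shapes are illustrative.
H, W, C, C_z = 32, 64, 19, 10
rng = np.random.default_rng(0)

labels = rng.integers(0, C, size=(H, W))   # per-pixel semantic categories
S = np.eye(C)[labels]                      # one-hot label map, H x W x C
z = rng.normal(size=(H, W, C_z))           # reshaped noise channels
S_hat = np.concatenate([S, z], axis=-1)    # generator input, H x W x (C + C_z)
```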
Because the input segmentation maps are provided at a high resolution, a noise input of size $H \times W \times C_z$ could be very high-dimensional. This can require generating many samples during training, which can be slow. To solve this issue, we propose forcing the noise to lie on a low-dimensional manifold, which improves the sample efficiency. To this end, we add a noise encoder module, which is a 3-layer convolutional network that takes $S$ and a low-dimensional noise vector sampled from a Gaussian as input and outputs an encoded noise tensor of size $H \times W \times C_z$. We replace the original noise with the encoded noise and leave the rest of the architecture unchanged.
In practice, we found datasets can be strongly biased towards objects with relatively common appearance. As a result, naïve training can result in limited diversity among the images generated by the trained model. To address this, we propose two strategies to rebalance the dataset and loss.
We first rebalance the dataset to increase the chance of rare images being sampled when populating the batch (as shown in Algorithm 1). To this end, for each image in the training set, we calculate the average pixel vector of each semantic class in that image. More concretely, for each image $I_i$ and category $k$ we compute
$$\bar{c}_{i,k} = \frac{1}{|P_{i,k}|} \sum_{p \in P_{i,k}} I_i(p),$$
where $P_{i,k}$ denotes the set of pixels of category $k$ in image $I_i$. For each category $k$, we consider the set of average pixel vectors for that category in all training images, i.e., $\{\bar{c}_{i,k} : \text{category } k \text{ appears in } I_i\}$. We then fit a Gaussian kernel density estimate to this set and obtain an estimate of the distribution of average pixels of category $k$. Let $\hat{p}_k$ denote the estimated probability density function (PDF) for category $k$. Given the $i$-th training example, we define the rarity score of category $k$ as
$$R_{i,k} = \frac{1}{\hat{p}_k(\bar{c}_{i,k})},$$
so that images whose average colour for category $k$ lies in a low-density region receive a high score.
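The rarity computation can be sketched as follows; the Gaussian KDE is implemented directly to keep the sketch self-contained, and both the bandwidth and the reciprocal-density form of the score are illustrative assumptions.

```python
import numpy as np

# Sketch of the rarity score: fit a Gaussian KDE to the average colours of
# one category across the dataset, then score each image by the reciprocal
# of the estimated density. Bandwidth h is a free (illustrative) choice.
def kde_pdf(points, x, h=0.1):
    """Gaussian kernel density estimate at x; points has shape (n, d)."""
    d = points.shape[1]
    sq = np.sum((points - x) ** 2, axis=1)
    return np.mean(np.exp(-sq / (2 * h * h))) / ((2 * np.pi * h * h) ** (d / 2))

rng = np.random.default_rng(0)
avg_colours = np.concatenate([
    rng.normal(0.5, 0.02, size=(50, 3)),   # images with common appearance
    rng.normal(0.9, 0.02, size=(2, 3)),    # images with rare appearance
])
rarity = np.array([1.0 / kde_pdf(avg_colours, c) for c in avg_colours])
```

Images in the sparse cluster receive much higher rarity scores than those in the dense cluster, which is the property the sampling scheme relies on.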
We allocate a portion of the batch in Algorithm 1 to each of the top five categories that have the largest overall area across the dataset. For each such category, we sample training images based on their rarity scores, effectively upweighting images containing objects with rare appearance. We select the categories with the largest areas because they tend to appear more frequently and be visually more prominent. If we were instead to allocate a fixed portion of the batch to rare categories, we would risk overfitting to images containing those categories.
The same training image can contain both common and rare objects. Therefore, we modify the loss function so that objects with rare appearance are upweighted. For each training example $(S_i, I_i)$, we define a rarity score mask $M_i$ that assigns to each pixel the rarity score of the category it belongs to, i.e., $M_i(p) = R_{i,k}$ for every pixel $p$ of category $k$. We then normalize $M_i$ so that every entry lies in $[0, 1]$:
$$\bar{M}_i(p) = \frac{M_i(p)}{\max_{p'} M_i(p')}$$
The mask is then applied to the loss function (1) and the new loss becomes:
$$\mathcal{L}_{\text{rebal}}(I_i, \tilde{I}_i) = \sum_{l=1}^{L} \lambda_l \left\| \bar{M}_i^{(l)} \odot \left( \Phi_l(I_i) - \Phi_l(\tilde{I}_i) \right) \right\|_1$$
Here $\bar{M}_i^{(l)}$ is the rarity score mask downsampled to match the size of $\Phi_l(\cdot)$.
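Applying such a mask to an L1 feature loss can be sketched as follows; the mask values and feature shapes are illustrative, and the mask is simply broadcast over the channel dimension.

```python
import numpy as np

# Sketch of a rarity-masked L1 feature loss (illustrative shapes).
def masked_l1(feat_a, feat_b, mask):
    """L1 distance weighted per-pixel by a mask broadcast over channels."""
    return np.sum(mask[..., None] * np.abs(feat_a - feat_b))

rng = np.random.default_rng(0)
fa = rng.normal(size=(8, 8, 4))   # toy feature maps
fb = rng.normal(size=(8, 8, 4))
mask = np.ones((8, 8))
mask[:4] = 2.0                    # upweight a region with rare appearance
loss = masked_l1(fa, fb, mask)
```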
The choice of dataset is very important for multimodal conditional image synthesis. The most common dataset in the unimodal setting is the Cityscapes dataset. However, it is not suitable for the multimodal setting because most images in the dataset are taken under similar weather conditions and times of day, and the amount of variation in object colours is limited. This lack of diversity limits what any multimodal method can do. On the other hand, the GTA-5 dataset has much greater variation in terms of weather conditions and object appearance. To demonstrate this, we compare the colour distribution of both datasets and present the distribution of hues of both datasets in Figure 1. As shown, Cityscapes is concentrated around a single mode in terms of hue, whereas GTA-5 has much greater variation in hue. Additionally, the GTA-5 dataset includes more than 20,000 images and so is much larger than Cityscapes.
We train our model on 12,403 training images and evaluate on the validation set (6,383 images). Due to computational resource limitations, we conduct experiments at a reduced resolution. We add 10 noise channels and keep the hyperparameters shown in Algorithm 1 fixed across all experiments.
The leading method for image synthesis from semantic layouts in the multimodal setting is CRN  with the diversity loss, which generates nine different images for each semantic segmentation map; this is the baseline that we compare to.
Our quantitative comparison aims to measure both the diversity and the quality of the images generated by our model and by CRN.
Our method is able to generate an arbitrary number of different images for the same input by simply feeding in different random noise vectors, whereas the baseline can only generate nine images per input. To allow for direct comparison, we use our model to generate 100 images for each semantic layout in the test set and then use $k$-means to divide the generated images into nine clusters. We then randomly pick one image from each cluster and compare the nine selected images with the nine images generated by CRN.
Then, for each method, we compute the variance over the nine images of all pixels that belong to a particular category and take the average over all spatial locations and colour channels. This yields an average variance for each category in each image. Next, we average these values over the entire test set and obtain the mean variance for each category. Since the nine images produced by our model are randomly generated, we repeat this procedure 10 times to reduce stochasticity. Results are shown in Table 1.
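The diversity metric just described can be sketched as follows; the shapes are toy-sized, with nine samples per layout as in the evaluation.

```python
import numpy as np

# Sketch of the diversity metric: per-category pixel variance across the
# nine samples, averaged over spatial locations and colour channels.
def category_variance(samples, seg, category):
    """samples: (9, H, W, 3) generated images; seg: (H, W) integer labels."""
    pix = samples[:, seg == category, :]   # (9, n_pixels, 3)
    return pix.var(axis=0).mean()          # variance across samples, averaged

rng = np.random.default_rng(0)
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1                             # two toy categories
samples = np.repeat(rng.normal(size=(1, 4, 4, 3)), 9, axis=0)
samples[:, seg == 1, :] += rng.normal(size=(9, 8, 3))   # class 1 varies
```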
We now evaluate the quality of the generated images by human evaluation. Since it is difficult for humans to compare images with different styles, we selected, for each method, the generated image that is closest to the ground truth image in distance. We then asked 62 human subjects to evaluate the images generated for 20 semantic layouts. For each semantic layout, they were asked to compare the image generated by CRN to the image generated by our method and judge which image exhibited more obvious synthetic patterns. The results are shown in Table 2.
| Semantic Class | Road | Sidewalk | Building | Wall | Fence | Pole | Traffic light | Traffic sign | Vegetation | Terrain |
|---|---|---|---|---|---|---|---|---|---|---|
| Our model () | 8.804 | 10.41 | 6.362 | 5.500 | 1.901 | 2.534 | 1.168 | 1.703 | 1.716 | 5.018 |
| Our model () | 1.645 | 2.103 | 0.2772 | 1.390 | 1.872 | 0.3267 | 0.01217 | 0.1878 | 0.01628 | 3.415 |
Table 2: % of images containing more artifacts.
A qualitative comparison is shown in Fig. 2, where the results generated by our model are clearly more diverse. Our method also generates fewer artifacts than CRN, which is especially interesting because the architecture and the distance metric are the same. As shown in Fig. 6, the images generated by CRN have grid-like artifacts which are not present in the images generated by our method. More examples generated by our model are shown in Fig. 3.
We also perform linear interpolation of noise vectors to evaluate the quality of the learned latent space. As shown in Fig. 4(a), by interpolating between the noise vectors corresponding to images generated during daytime and nighttime respectively, we obtain a smooth transition from daytime to nighttime. We also show a transition in car colour in Fig. 4(b). This suggests that the learned latent space is sensible and captures variation along both the daytime-nighttime axis and the colour axis. More examples and animations are available in the supplementary material.
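Latent-space interpolation of this kind can be sketched as follows; `generator` in the usage comment is a hypothetical stand-in for a trained model.

```python
import numpy as np

# Sketch of latent interpolation: linearly blend two noise vectors and
# feed each intermediate code to the generator with a fixed layout.
def interpolate(z_a, z_b, steps=8):
    """Return `steps` codes linearly blending z_a into z_b."""
    return [(1 - a) * z_a + a * z_b for a in np.linspace(0.0, 1.0, steps)]

z_day, z_night = np.zeros(10), np.ones(10)    # illustrative noise vectors
codes = interpolate(z_day, z_night)
# frames = [generator(seg_map, z) for z in codes]  # hypothetical generator
```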
A successful method for image synthesis from semantic layouts enables users to manually edit the semantic map to synthesize desired imagery. One can do this simply by adding/deleting objects or changing the class label of a certain object. In Figure 7 we show several such changes. Note that all four inputs use the same random vector; as shown, the images are highly consistent in terms of style, which is quite useful because the style should remain the same after editing the layout. We further demonstrate this in Fig. 5 where we apply the random vector used in (a) to vastly different segmentation maps in (b),(c),(d),(e) and the sunset style is preserved across the different segmentation maps.
We presented a new method based on IMLE for multimodal image synthesis from semantic layouts. Unlike prior approaches, our method can generate arbitrarily many images for the same semantic layout and is easy to train. We demonstrated that our method can generate more diverse images with fewer artifacts compared to the leading approach, despite using the same architecture. In addition, our model is able to learn a sensible latent space of noise vectors without supervision. We showed that by interpolating between noise vectors, our model can generate continuous changes. At the same time, using the same noise vector across different semantic layouts results in images of consistent style.
Automatic image colorization via multimodal predictions. In European Conference on Computer Vision, pages 126–139. Springer, 2008.
The Cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
We generated a video that shows smooth transitions between different renderings of the same scene. Frames of the generated video are shown in Figure 8.
We generated videos of a car moving farther away from the camera and then back towards the camera by generating individual frames independently using our model with different semantic segmentation maps as input. For the video to have consistent appearance, we must be able to consistently select the same mode across all frames. In Figure 9, we show that our model has this capability: we are able to select a mode consistently by using the same latent noise vector across all frames.
Here we demonstrate one potential benefit of modelling multiple modes instead of a single mode. We tried generating a video from the same sequence of scene layouts using pix2pix , which only models a single mode. (For pix2pix, we used a pretrained model trained on Cityscapes, which is easier for the purposes of generating consistent frames because Cityscapes is less diverse than GTA-5.) In Figure 10, we show the difference between adjacent frames in the videos generated by our model and pix2pix. As shown, our model is able to generate consistent appearance across frames (as evidenced by the small difference between adjacent frames). On the other hand, pix2pix is not able to generate consistent appearance across frames, because it arbitrarily picks a mode to generate and does not permit control over which mode it generates.