Image-to-image translation has made much progress thanks to the success of image generation and synthesis. As a sub-topic of image generation research, image-to-image translation requires the model not only to understand the image contents but also to convert the source images to the target images under certain rules. Many image processing problems, such as semantic segmentation and color restoration, can be considered image-to-image translation tasks. Since the Generative Adversarial Network (GAN) was presented by Goodfellow in 2014, GAN has proved to be very efficient in image generation and synthesis tasks across many research fields. Researchers soon showed that extensions of GAN can not only generate images from random noise, but also achieve image-to-image translation between different domains. It would be highly desirable to build a model that can create realistic images based on simple graphics (e.g. semantic layouts). Indeed, many works prove that GAN is a good candidate for handling image-to-image translation. But GAN-based models are still weak on precise generation tasks, and how to organize the translated image contents precisely and enhance the generated image quality remains challenging. Take Fig. 1 as an example: it would be a challenging task to create realistic overlapped cars based only on an adhesive semantic layout. To achieve this goal, the image-to-image translation model must be smart enough to understand the relationship and position of each car in the spatial domain.
In this paper, we propose a novel idea for precisely generating realistic images from semantic layouts. We combine ideas from pix2pix, the Cascaded Refinement Network (CRN) and Inception-ResNet to design our model. The cascade layer loss from CRN can efficiently control the content organization, while the adversarial structure can generate realistic, sharp samples. It is believed that residual blocks can be used to build very deep neural networks and can help alleviate gradient vanishing and gradient explosion. We also adopt skip connections for the encoder-decoder network, derived from the U-net. Specifically, we add skip connections between the convolution layers before the residual blocks and the deconvolution layers after them.
In order to generate complex objects precisely, we compute the instance map of complex objects (such as the cars in the Cityscapes dataset and the furniture in the NYU dataset) from the semantic layouts, and we concatenate the instance map with the raw semantic layout to provide additional information to our model. Inspired by InfoGAN, we develop a denoising framework by adding auxiliary random noise to the input images. Our model tries to remove the noise during the generation process and synthesize a clear image. This operation enhances the model's stochasticity and increases its robustness in the image generation stage.
To further improve the generation, we follow the perturbed loss defined by Gulrajani and Denton et al. The perturbed loss can improve the ability of the discriminator and push the generator to create more realistic images. The details of our framework are displayed in Fig. 2. In this paper we make the following contributions:
We develop a new framework including residual blocks, a cascade layer loss and a perturbed loss to synthesize realistic RGB images from semantic layouts.
We develop a denoising process to enhance the model's stochasticity and increase its robustness in the image-to-image translation stage.
In order to generate complex objects precisely, we include an instance map to force the generator to focus on the confusing parts and improve the final generation performance.
II Related work
II-A Residual networks
Residual networks, developed by He et al., have proved to be efficient on image classification tasks. Some computer vision tasks such as image translation and image editing also use residual blocks as a strong feature representation architecture. Residual networks are neural networks in which each layer consists of a residual block and a skip connection bypassing it. Layers in residual networks include multiple convolution layers, and the residual blocks build connections between different layers.
Let $x_l$ and $x_{l+1}$ denote the input and output of the $l$-th residual block, respectively, where $x_{l+1} = x_l + F(x_l)$ and $F$ is some sequence of convolutions, batch normalization and ReLU activations.
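As a minimal sketch of this identity mapping, the residual computation can be written as follows; here `F` is a toy affine-plus-ReLU stand-in (a hypothetical simplification) for the real convolution / batch-normalization sequence:

```python
import numpy as np

def residual_block(x, weight, bias):
    """One residual block: x_{l+1} = x_l + F(x_l).
    F here is a simplified affine + ReLU stand-in for the paper's
    convolution / batch-norm / ReLU sequence."""
    f = np.maximum(0.0, x @ weight + bias)  # F(x): affine map + ReLU
    return x + f                            # identity skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)
y = residual_block(x, w, b)
# With all-zero weights, F vanishes and the block reduces to identity:
assert np.allclose(residual_block(x, np.zeros((8, 8)), b), x)
```

The identity path is what lets gradients flow unchanged through many stacked blocks, which is the basis of the gradient-stability claim above.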
II-B Image-to-image translation
There exists a large body of work on supervised representation learning. Early methods were based on stacked auto-encoders or restricted Boltzmann machines, and researchers used pairwise supervision to achieve image feature representation with auto-encoders. Image-to-image translation problems are often regarded as feature representation and feature matching work. Researchers have already taken significant steps in computer vision tasks, as deep convolutional neural networks (CNNs) become the common workhorse behind a wide variety of image processing problems. For style translation, Gatys et al. applied a VGG-16 network to extract features. Other CNN-based methods use deep neural networks to capture feature representations of images and apply them to other images. They capture the feature spaces of scattering networks and propose the use of features extracted from a pre-trained VGG network instead of low-level pixel-wise error measures.
Generative adversarial networks (GANs) aim to model the real image distribution by forcing the synthesized samples to be indistinguishable from natural images. The discriminator tries to differentiate synthesized images from real images, while the generator aims to generate images plausible enough to fool the discriminator. The adversarial loss has been a popular choice for many image-to-image tasks.
Recent research related to GANs is mostly based on the work of DCGAN (deep convolutional generative adversarial network). DCGAN has been proved in much research to learn good feature representations from image pixels, and its deep architecture has shown great effectiveness in synthesizing plausible images in adversarial networks. Conditional generative adversarial networks (cGANs) offer a solution to generate images with auxiliary information. Previous works have conditioned GANs on discrete labels, text and images. Applications of conditional GANs in image processing include inpainting, image manipulation guided by user constraints, future frame prediction, and so on. Researchers have applied adversarial training to image-to-image translation, in which images from one domain are used as inputs to obtain translated images in another domain. A recent well-known GAN method for image-to-image translation is pix2pix, which consists of a generator and a discriminator. For the image translation task, the objective of the generator is to translate input images into plausible images in another domain, while the discriminator aims to distinguish real images from the translated ones. This supervised framework can build the mapping function between different domains. Toward stable training of GANs, the Wasserstein generative adversarial network (WGAN) replaces the Jensen-Shannon divergence with the Wasserstein distance as the optimization metric; recently, other researchers improved the training using a gradient penalty method.
III Proposed methods
In this paper, we use residual blocks and convolutional neural networks in the generator. Inspired by the U-net, we use skip connections between feature maps to build a deep generator network. The residual blocks help alleviate the gradient vanishing and gradient explosion problems. In order to generate more photorealistic images and balance the training, we add some noise to the input images and find that this noise can help improve the ability of the generator.
III-A Architecture of our models
As described in Fig. 2, we use three convolution-BatchNorm-ReLU blocks to obtain smaller feature maps, then nine residual blocks to capture feature representations. Following these residual blocks, we use two deconvolution layers to recover high-resolution images. We apply BatchNorm and Leaky ReLU after both the convolution and deconvolution operations. The final outputs are normalized to the range $-1$ to $1$ using the Tanh activation. We build skip connections between the convolution layers and deconvolution layers to allow low-level information to be transmitted directly; without them, the network would require all information flow to pass through all the layers, including the bottleneck.
All the filters in the convolution layers have kernel size 4 and stride 2, while the residual blocks use kernel size 3 and stride 1. To learn the difference between fake and real components, we add the perturbed loss as a regularization scheme. Fig. 2 shows the architecture of our model; for example, $C_{64}$ denotes a convolution layer with 64 filters of kernel size 4 and stride 2.
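The spatial bookkeeping behind these layer choices can be checked with the standard convolution size formulas; a padding of 1 is an assumption here, since the text does not state it:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a transposed convolution (deconvolution)."""
    return (size - 1) * stride - 2 * pad + kernel

# A stride-2, kernel-4, pad-1 convolution halves the spatial size,
# and the matching deconvolution doubles it back:
assert conv_out(256) == 128
assert deconv_out(128) == 256
# Residual blocks (kernel 3, stride 1, pad 1) keep the size unchanged:
assert conv_out(128, kernel=3, stride=1, pad=1) == 128
```

This is why the residual blocks can be stacked freely between the downsampling and upsampling stages without disturbing the skip-connection shapes.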
We use the Adam optimizer with a learning rate of 0.0002 for both the discriminator and the generator. All experiments run for 200 epochs. Following the pix2pix method, we apply random jittering by enlarging the input images and randomly cropping them back to the original size. The number of parameters of our model is about one fourth that of pix2pix and one eighth that of the Cascaded Refinement Network (CRN); we use fewer parameters to get better results.
III-B Perturbed loss
The original conditional GAN loss is expressed as
$$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))],$$
where $x$ denotes the semantic input and $z$ denotes the noise. Since the input image and the generated image are concatenated as in Fig. 2, $D(x, y)$ is the predicted output of the discriminator. Ideally $D(x, y) = 1$ and $D(x, G(x, z)) = 0$, where $G(x, z)$ denotes the generated sample. To make the discriminator more competitive, we use perturbed training to improve the generations. Following Gulrajani and Denton et al.'s work, we define the perturbed loss $\mathcal{L}_{perturbed}$ on a perturbed (mixed) sample $\hat{y}$,
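As an illustrative sketch (not the authors' code), the adversarial objective can be evaluated directly on discriminator outputs, which are assumed here to be sigmoid probabilities in $(0, 1)$:

```python
import numpy as np

def gan_loss(d_real, d_fake, eps=1e-8):
    """Value of the minimax GAN objective
    E[log D(x, y)] + E[log(1 - D(x, G(x, z)))]
    for batches of discriminator outputs d_real / d_fake in (0, 1)."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# A confident, correct discriminator (D near 1 on real, near 0 on fake)
# pushes the objective toward its maximum of 0:
confident = gan_loss(np.array([0.999]), np.array([0.001]))
unsure = gan_loss(np.array([0.5]), np.array([0.5]))
assert confident > unsure
```

The discriminator is trained to maximize this quantity while the generator is trained to minimize it, which is the minimax game referred to above.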
where $\hat{y} = \alpha\, G(x, z) + (1 - \alpha)\, y$ represents the mixture of the synthesized fake image $G(x, z)$ and the target image $y$. Only a portion of $\hat{y}$ comes from the target image, and $\alpha$ is a random number drawn from a uniform distribution on $[0, 1]$. Here we can regard $\hat{y}$ as a perturbed sample: one in which the output of the generated sample is changed in a desired way. We add this perturbed loss to improve the training of GANs and the quality of the synthesized samples. The perturbed loss $\mathcal{L}_{perturbed}$ computes the distance between fake images and target images and encourages the discriminator to distinguish the different components of the mixed images, which helps improve the discriminator's ability to identify fake images.
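A minimal sketch of the mixing step, assuming simple per-pixel interpolation as described:

```python
import numpy as np

def perturbed_sample(fake, real, rng):
    """Mix the synthesized image and the target image with a random
    ratio alpha ~ U(0, 1): y_hat = alpha * fake + (1 - alpha) * real."""
    alpha = rng.uniform(0.0, 1.0)
    return alpha * fake + (1.0 - alpha) * real, alpha

rng = np.random.default_rng(42)
fake = np.zeros((4, 4))   # toy stand-ins for the generated / target images
real = np.ones((4, 4))
mixed, alpha = perturbed_sample(fake, real, rng)
# Every pixel of the mixture lies between the fake and real values:
assert np.all((mixed >= 0.0) & (mixed <= 1.0))
assert np.allclose(mixed, 1.0 - alpha)
```

Feeding such interpolated samples to the discriminator is the same device used by gradient-penalty training of WGANs, which is the lineage the perturbed loss cites.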
III-C Noise adding and one-hot instance map
As the pix2pix authors note, how to enhance the stochasticity of their method is an important question. We increase the stochasticity of our model by adding Gaussian noise to the input images, and we find that this additional noise can actually help the generator synthesize images stably. We have conducted experiments with various standard deviations to find out the influence of the noise.
As mentioned in Fig. 1, it is challenging to create a realistic image based only on a semantic label map in which the cars overlap. We compute the instance map of cars from the semantic input and concatenate it with the input. We hope this operation forces our generator to focus on the boundary and structure information of overlapped objects. First, we extract the car segmentation from the scene and make the background black. Fig. 3 shows how we obtain the instance map from the semantic labeled images and how we add the noise. Please note that we also concatenate the instance map with the generated images for the discriminator.
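A sketch of this preprocessing under stated assumptions — the car label id and the single instance channel are hypothetical, and a real one-hot map may use more channels:

```python
import numpy as np

CAR_CLASS = 13  # hypothetical label id for "car" in the semantic layout

def build_input(semantic, noise_std, rng):
    """Concatenate the raw semantic layout with a binary car instance
    map (cars white, background black) and add Gaussian noise,
    mirroring the preprocessing sketched in Fig. 3."""
    instance = (semantic == CAR_CLASS).astype(np.float32)
    stacked = np.stack([semantic.astype(np.float32), instance], axis=0)
    return stacked + rng.normal(0.0, noise_std, stacked.shape)

rng = np.random.default_rng(0)
semantic = np.array([[13, 13, 0],
                     [0, 7, 13]])
noisy = build_input(semantic, noise_std=0.1, rng=rng)
assert noisy.shape == (2, 2, 3)  # (channels, H, W): layout + instance map
```

The generator is then trained to remove the injected noise, which is the denoising behaviour described above.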
III-D Cascade layer loss
In order to make the synthesized images clearer and improve performance, we follow Chen et al.'s work and include an extra pre-trained VGG-19 network, which we refer to as the CRN, to provide the cascade loss in our hybrid discriminator. Fig. 2 shows the details of the cascade loss. The feature maps of convolutional networks can express important information about images, and the CRN provides extra criteria for our model to distinguish real images from fake samples. The cascade loss is considered a measurement of similarity between the target and the output images. We initialize the network with the weights and biases of the VGG network pre-trained on the ImageNet dataset. Each layer of the network provides a cascade loss between the real target images and the synthesized images; we use the first five convolutional layers as a component of our discriminator.
Following the definitions above, let $y$ denote the target image and $G(x, z; \theta)$ the image produced by the generator $G$, where $\theta$ is the parameters of the generator and $\Phi_l$ is the cascade response at the $l$-th level of the CRN. We choose the first 5 convolutional layers in VGG-19 to calculate the cascade loss, so $l \in \{1, \dots, 5\}$ and
$$\mathcal{L}_{cascade} = \sum_{l=1}^{5} \lambda_l \, \big\| \Phi_l(y) - \Phi_l(G(x, z; \theta)) \big\|_1 .$$
Please note that this loss is only used to train the parameters of the generator $G$: the CRN is a pre-trained network whose weights are not changed while we train the generator. The parameter $\lambda_l$ controls the influence of the cascade loss at the $l$-th layer of the CRN.
The cascade loss provides the ability to measure the similarity between the output and target images at different scales and enhances the ability of the discriminator. It is believed that the cascade loss from the higher layers controls the global structure, while the loss from the lower layers controls the local detail during generation. Thus, the generator must produce better synthesized images to fool the hybrid discriminator, which finally improves the synthesized image quality.
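The per-layer feature matching can be sketched as follows, with random arrays standing in for the VGG-19 feature responses (the L1 distance is an assumption, consistent with CRN-style feature losses):

```python
import numpy as np

def cascade_loss(features_real, features_fake, lambdas):
    """Weighted sum of per-layer L1 distances between the pre-trained
    network's responses on the target and synthesized images:
    L_cascade = sum_l lambda_l * ||Phi_l(y) - Phi_l(G(x))||_1 (mean-reduced)."""
    return sum(lam * np.abs(fr - ff).mean()
               for lam, fr, ff in zip(lambdas, features_real, features_fake))

# Toy feature maps standing in for the first five VGG-19 conv layers:
rng = np.random.default_rng(1)
feats_real = [rng.standard_normal((4, 4)) for _ in range(5)]
feats_fake = [f + 0.1 for f in feats_real]
loss = cascade_loss(feats_real, feats_fake, lambdas=[1.0] * 5)
assert np.isclose(loss, 0.5)  # five layers, each with mean |0.1| = 0.1
```

Weighting the deeper layers differently via `lambdas` is what lets the loss trade off global structure against local detail, as argued above.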
The goal is to learn a generator distribution over data that matches the real data distribution by transforming an observed input semantic image $x$ and random Gaussian noise $z$ into a sample $G(x, z)$. The generator is trained by playing against an adversarial discriminator $D$ that aims to distinguish samples from the true data distribution (the real image pair $(x, y)$) from the generator's distribution (the synthesized image pair $(x, G(x, z))$). As mentioned in Isola et al.'s work, location information is very important when synthesizing images, so we also add an L1 loss to the objective to force the generator to attend to the details of small parts.
The L1 loss, $\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}[\, \| y - G(x, z) \|_1 \,]$, is the fundamental loss of our generator and guarantees the image-to-image generation ability.
Our final objective is
$$G^{*} = \arg\min_G \max_D \; \mathcal{L}_{GAN}(G, D) + \lambda_1 \mathcal{L}_{L1}(G) + \lambda_2 \mathcal{L}_{perturbed} + \lambda_3 \mathcal{L}_{cascade},$$
where $\mathcal{L}_{L1}$ is the real-fake L1 loss, and $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyper-parameters that balance our training. We use greedy search to optimize the hyper-parameters in our experiments.
IV Experiments

To validate our methods, we conduct extensive quantitative and qualitative evaluations on the Cityscapes, Facades and NYU datasets, which contain sufficient object variations and are widely used for image-to-image translation analysis. We choose the contemporaneous approach pix2pix as the baseline, and we also compare our model with CRN, a state-of-the-art method for image-to-image generation. Results of the two compared methods are generated using the code and models released by their authors. Following their method, we use 400 images of the Facades dataset for training and 100 for testing.
For the image-to-image tasks, we use several image quality scores (RMSE, SSIM, MSE and PSNR) to evaluate the generations of the methods under comparison. For these supervised translation methods, output images with lower RMSE and MSE are better, while images with higher PSNR and SSIM are better.
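For reference, MSE and PSNR can be computed as below; the other two scores (RMSE, SSIM) follow similarly from standard definitions:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

target = np.full((8, 8), 100.0)
output = target + 10.0            # constant error of 10 gray levels
assert mse(target, output) == 100.0
assert np.isclose(psnr(target, output), 10 * np.log10(255.0 ** 2 / 100.0))
```

RMSE is simply the square root of MSE, and SSIM can be taken from a standard implementation such as scikit-image rather than re-derived.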
IV-A NYU dataset and Facades dataset
In order to test the generative ability of our model, we use the RGB images from the NYU dataset and obtain the semantic layouts from Ivanecký's work. One challenge of this dataset is that the semantic labels are quite limited: the layouts contain only five different classes (furniture, structure, prop, floor and boardlines). Note that different types of objects can be labeled as the same class (e.g. beds and bookshelves are both labeled as "furniture"). In this way, we obtain 795 five-category semantic images from the training split of the raw NYU dataset, of which 700 are used for training and 95 for testing. In this case, it is difficult for the generator to organize the contents and generate plausible results, so we regard this one-to-many translation as a challenging task. We therefore compute the instance map of these objects and concatenate it with the semantic layouts, which provides our generator with more information and forces the model to pay attention to overlapped objects. Fig. 4 shows one generation example on the NYU dataset: our result is much better than those of the other two methods, and the relationship between books and shelves is well modeled. We also evaluate the effectiveness of the perturbed loss; Fig. 5 shows the comparison results, which illustrate that the perturbed loss helps generate better results.
Fig. 4 compares the results of the three methods on the NYU dataset. The picture generated by pix2pix performs well on simple contents such as the floor and the wooden furniture, but pix2pix fails to generate the books on the bookshelf. The CRN method fails to generate images with enough detail. Although image-to-image tasks can be considered one-to-many problems, we still measure four image quality scores for the three methods for reference. Table I shows the evaluation results. CRN is better at generating images without structural errors, which is why it has a higher SSIM score. Compared with the pix2pix method, we achieve a higher SSIM and a lower MSE error; overall, our method gives better results than the other two.
We evaluate the effectiveness of the perturbed loss on both the NYU and Facades datasets. As Fig. 6 shows, the perturbed loss can reduce blurry parts and help the generator create more plausible images with more detail. We also evaluate the performance of the denoising process and compare our method with pix2pix in Fig. 9: the pix2pix results contain meaningless color blocks, shown in the red boxes, and the results in Fig. 9 show that proper noise helps alleviate this phenomenon. We further evaluate the effectiveness of noise on the NYU dataset in Fig. 8 and observe performance similar to that in Fig. 9. Proper noisy inputs make the generation process more robust.
We also evaluate the cascade layer loss for our model. Fig. 10 shows the comparison on the NYU dataset: the generations without the cascade layer loss are blurry, with dirty and incorrect color blocks. We also evaluate this loss on the Facades dataset, with results shown in Fig. 11. Note that we do not compute the instance map for the Facades dataset, because this dataset is simple and does not contain complex objects.
IV-B Cityscapes dataset
The Cityscapes dataset contains 20 different kinds of objects and various scenes; it consists of 2975 images, of which we use 2675 for training and the rest for testing. Overlapped objects, such as the cars in the semantic layouts, are difficult for our generator to synthesize. As described in Fig. 12, although models like pix2pix can synthesize a clear image, the generated images often contain mistakes, especially near the boundary between two different objects (e.g. two neighboring cars). The results generated by CRN are basically correct but lack detail on small objects, and the generated objects are blurry. With the help of the cascade layer loss, the perturbed loss and the instance map, images generated by our model contain correct and clear objects. We also evaluate the image quality scores on the Cityscapes dataset; the results in Table II show that our method performs significantly better than the two other methods.
The perturbed loss can improve the image generation in several ways. Fig. 13 shows the results of exploring its effectiveness: using the perturbed loss generates images with more details, and the generations are sharper than in the cases without it. Table III also shows the evaluation results with the four evaluation metrics. The perturbed loss allows the adversarial network to train longer efficiently, which is why the model with the perturbed loss achieves a lower MSE error and better generations. The perturbed loss helps improve the ability of the discriminator to distinguish fake components from the real sample distribution, so the model generates more plausible results.
We also examine the extra cascade layer loss with contrast experiments designed to show its effectiveness. Fig. 14 shows the experimental results: the results using the cascade layer loss are clearer. This loss provides hierarchical constraints for the generator, and the higher-layer feature maps carry interpretation information for this image-to-image translation. As Table III shows, the cascade loss helps the model generate images with higher SSIM and lower MSE error. It forces the generator to consider small parts as well as to match the representations of fake and real images in a high-level feature space.
The effect of the instance map is also evaluated in Table III. To further validate that our car instance map can improve the generation results, we show some examples in Fig. 15. The additional car instance map provides our generator with more information and forces the discriminator to concentrate on the car objects. It is easy to see that inputs with the car instance map lead to more plausible results, and the numbers in Table III also demonstrate this point.
In order to explore the effectiveness of the noise input, we evaluate how the denoising process improves the generation results. Fig. 16 shows some experimental results: models trained with appropriate Gaussian noise generate more plausible images, but if the noise is too large, the outputs get worse. We conclude that an appropriate amount of Gaussian noise can improve the image generation process, while too much noise leads to results worse than those without noise. Table III shows the image quality scores under different conditions on the Cityscapes dataset. With a proper noisy input, the denoising process improves the robustness of our model as well as generating better images.
Considering that we use multiple improvements, we evaluate several combinations of these methods on the Cityscapes dataset; Table III shows the evaluation results. We find that our framework performs well when we mix in Gaussian random noise with standard deviation 0.1 rather than 0.04 or 0.4. The image generation results can be further improved when we include the perturbed loss, the cascade loss and the instance map, and we obtain the best results by combining all of these methods. Our final method on the Cityscapes dataset, with the perturbed loss, the cascade loss, the car instance map and Gaussian noise with a standard deviation of 0.1, achieves the highest PSNR and SSIM scores as well as the lowest MSE and RMSE scores.
V Conclusion

In this paper, we propose a new GAN-based architecture to achieve image translation between two different domains. We introduce a denoising image-to-image framework to improve the image generation process and increase the robustness of our model. We use the instance map to force the generator to concentrate on overlapped objects and achieve better generation performance. The perturbed loss and the cascade loss improve the content organizing ability of the generator. Experimental results show that our method outperforms other state-of-the-art image-to-image translation methods.
Although our model achieves plausible image-to-image translations among contemporaneous methods, a gap still exists between the real images and the synthesized ones. Generally, the key to image-to-image translation is how to teach the model to comprehend the relationships among various objects. On this point, we use the cascade layer loss to control the image structure information at different scales, and the denoising process together with the perturbed loss to improve robustness and model the invariance in the sample distribution. But these are still not enough to obtain a realistic synthesized image, and achieving pixel-level translation remains a challenge. We need the discriminator to be smarter in guiding the generation process: the model must achieve not only feature matching but also global information extraction. We leave these to our future work.
This work was supported by the National Natural Science Foundation of China under Grant no. 61701463, the China Postdoctoral Science Foundation under Grant no. 2017M622277, Natural Science Foundation of Shandong Province of China under Grant no. ZR2017BF011, the Fundamental Research Funds for the Central Universities under Grant nos. 201713019 and the Qingdao Postdoctoral Science Foundation of China. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” arXiv preprint arXiv:1703.10593, 2017.
-  J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based generative adversarial network,” arXiv preprint arXiv:1609.03126, 2016.
-  M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.
-  O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf, “Language generation with recurrent generative adversarial networks without pre-training,” arXiv preprint arXiv:1706.01399, 2017.
-  A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune, “Plug & play generative networks: Conditional iterative generation of images in latent space,” arXiv preprint arXiv:1612.00005, 2016.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv preprint arXiv:1611.07004, 2016.
-  Q. Chen and V. Koltun, “Photographic image synthesis with cascaded refinement networks,” arXiv preprint arXiv:1707.09405, 2017.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in AAAI, 2017, pp. 4278–4284.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CoRR, vol. abs/1512.03385, 2015. [Online]. Available: http://arxiv.org/abs/1512.03385
-  O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from rgbd images,” Computer Vision–ECCV 2012, pp. 746–760, 2012.
-  X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” in Advances in Neural Information Processing Systems, 2016, pp. 2172–2180.
-  E. Denton, S. Gross, and R. Fergus, “Semi-supervised learning with context-conditional generative adversarial networks,” arXiv preprint arXiv:1611.06430, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision. Springer, 2016, pp. 630–645.
-  ——, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, 2015, pp. 448–456.
-  G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural computation, vol. 18, no. 7, pp. 1527–1554, 2006.
-  Z. Zhu, P. Luo, X. Wang, and X. Tang, “Multi-view perceptron: a deep model for learning face identity and view representations,” in Advances in Neural Information Processing Systems, 2014, pp. 217–225.
-  M. Kan, S. Shan, H. Chang, and X. Chen, “Stacked progressive auto-encoders (spae) for face recognition across poses,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1883–1890.
-  L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” arXiv preprint arXiv:1508.06576, 2015.
-  J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision. Springer, 2016, pp. 694–711.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  J. Bruna, P. Sprechmann, and Y. LeCun, “Super-resolution with deep convolutional sufficient statistics,” arXiv preprint arXiv:1511.05666, 2015.
-  Z. Zhang, Y. Song, and H. Qi, “Age progression/regression by conditional adversarial autoencoder,” arXiv preprint arXiv:1702.08423, 2017.
-  X. Wang and A. Gupta, “Generative image modeling using style and structure adversarial networks,” in European Conference on Computer Vision. Springer, 2016, pp. 318–335.
-  P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays, “Scribbler: Controlling deep image synthesis with sketch and color,” arXiv preprint arXiv:1612.00835, 2016.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2536–2544.
-  A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” arXiv preprint arXiv:1605.05396, 2016.
-  R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, “Semantic image inpainting with deep generative models,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5485–5493.
-  J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros, “Generative visual manipulation on the natural image manifold,” in European Conference on Computer Vision. Springer, 2016, pp. 597–613.
-  M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440, 2015.
-  L. Karacan, Z. Akata, A. Erdem, and E. Erdem, “Learning to generate images of outdoor scenes from attributes and semantic layouts,” arXiv preprint arXiv:1612.00215, 2016.
-  T. Kaneko, K. Hiramatsu, and K. Kashino, “Generative attribute controller with conditional filtered generative adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6089–6098.
-  H. Dong, S. Yu, C. Wu, and Y. Guo, “Semantic image synthesis via adversarial learning,” arXiv preprint arXiv:1707.06873, 2017.
-  M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” arXiv preprint arXiv:1701.07875, 2017.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved training of wasserstein gans,” arXiv preprint arXiv:1704.00028, 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  R. Tyleček and R. Šára, “Spatial pattern templates for recognition of objects with regular structure,” in German Conference on Pattern Recognition. Springer, 2013, pp. 364–374.
-  B. J. Ivanecký, “Depth estimation by convolutional neural networks.”