Region and Object based Panoptic Image Synthesis through Conditional GANs

12/14/2019 ∙ by Heng Wang, et al. ∙ The University of Sydney, UNSW

Image-to-image translation is significant to many computer vision and machine learning tasks such as image synthesis and video synthesis, with primary applications in the graphics editing and animation industries. With the development of generative adversarial networks, image-to-image translation tasks have drawn considerable attention. In this paper, we propose and investigate a novel task named panoptic-level image-to-image translation, together with a naive baseline for solving it. Panoptic-level image translation extends the current image translation task to two separate objectives: semantic style translation (adjusting the style of objects to that of a different domain) and instance transfiguration (swapping between different types of objects). The proposed task generates an image from a complete and detailed panoptic perspective, which can enrich the context of real-world vision synthesis. Our contribution consists of the proposal of a significant task worth investigating and a naive baseline for solving it. The proposed baseline combines multiple-instance sequential translation and semantic-level translation with a domain-invariant content code.


1 Introduction

Image-to-image translation (IIT) is significant to many computer vision and machine learning tasks such as image and video synthesis [1]. It plays an important role in applications for graphics editing [2], such as Photoshop, and for animation [3]. In addition, many computer vision problems, including image colorization [4], outdoor scene editing [5], semantic inpainting [6], and style transfer [7, 8], are essentially topic-specific image-to-image translation tasks.

IIT can be divided into four different levels: (a) image style-level, (b) semantic-level, (c) instance-level, and (d) panoptic-level, as shown in Fig. 1. The difficulty of IIT increases from (a) to (d). Image style-level translation transfers the overall style of an image to another domain. The simplest example is adjusting the contrast of an image. As shown in Fig. 1(a), the whole image, including the semantic attributes (sky & road) and the instance attribute (car), has been lightened. The semantic-level translation shown in Fig. 1(b) focuses more on semantic regions such as the sky and the road. Semantic-level IIT treats objects from the same category as having the same label. Unlike image style-level IIT, semantic-level IIT only converts the specified semantic attributes of an image from the source domain to another domain. In other words, the generated image preserves the style of most regions except those with the specified semantic attributes. For the semantic attributes to be translated, only the style is changed after translation, while the semantic meaning is preserved. As illustrated in Fig. 1(b), only the style of the sky has been changed, while the style of the road and cars remains intact. Additionally, instance-level IIT replaces an instance in the original image domain with another instance in the target domain. The translation can happen between two instances with different semantic meanings. We provide an example in Fig. 1(c), where instances of cars are expected to be transfigured into instances of sheep. In this paper, we formulate an interesting and challenging task: panoptic-level IIT. The panoptic image set consists of stuff-related semantic attributes such as sky and thing-related instance attributes such as person and car. Current image translation tasks only consider image-level translation, i.e., overall image style transfer, and instance-level translation, i.e., object transfiguration. As shown in Fig. 1(d), our proposed task enables both of these translations. We also take the task a step further to ensure the translation of each panoptic-level attribute is independent, thus generating more creative images.

(a) Image style-level image-to-image translation.
(b) Semantic-level image-to-image translation.
(c) Instance-level image-to-image translation.
(d) Panoptic-level image-to-image translation.
Figure 1: A schematic diagram of the different levels of image-to-image translation; the shade of each arrow indicates the difficulty of the corresponding task.

There are many applications where image-to-image translation (IIT) techniques can play an important role. Artist-level paintings generated by artificial intelligence instead of humans are impressive. Even though the translation results might be imperfect, some parts of the translated images are still useful, given that the translated image style is another kind of art. Photo inpainting can also benefit from IIT techniques: for valuable old photos or paintings, IIT can be used to restore their original appearance. Beyond standard IIT techniques, panoptic-level IIT is important in the following scenarios. Real-time street scenes can be rendered into synthetic street scenes effortlessly when applying panoptic-level IIT; video games such as Grand Theft Auto would be the kind of application that could benefit from this technique. Apart from electronic games, film productions can also benefit from these kinds of techniques. With the assistance of IIT, image editing tasks such as cartoonization can be made more efficient. Furthermore, the animation industry could apply IIT techniques to raw materials directly to produce the synthesized effect rather than relying on time-consuming conservative post-production rendering. In the 2019 film Pokémon Detective Pikachu, there are many interactions between the animated character Pikachu and real-world objects and environments. Assuming panoptic-level IIT techniques were available, films like Pokémon Detective Pikachu could be recorded using real-world footage in which a person plays the role of Pikachu; during post-production, the person would then be replaced directly by the animated character using panoptic IIT techniques. Similarly, documentary footage of animals could be converted into other types of films, such as Pokémon fantasy films. In general, since any attributes can be translated independently, IIT techniques can be applied to the production of any urban fantasy applications.

To achieve the above applications, it is important to investigate panoptic-level IIT techniques. Even though image translation techniques have developed rapidly in the past few years, panoptic-level IIT has not been investigated yet. One intuitive solution could be to implement instance-level IIT as a first step. However, translating instance-level objects requires datasets with at least instance-level annotations. From a different perspective, instance-level IIT can be decomposed into the successful detection of each individual object, proper translation from the current object to a different object, and fine restoration of missing parts using algorithms such as semantic inpainting [6]. The major issue is that translating the overall style, such as the color distribution, might be easy, but transfiguring both the position and shape of objects is challenging. Objects such as sheep and horses might be less difficult because they have similar structures; however, this is not the case for objects such as sheep and cars, which are totally different. In terms of semantic-level IIT, the main goal and challenge is to extract the difference between styles while maintaining the same semantic meaning, and to make the style transfer look natural. For example, the appearance of a road on a sunny day differs from the appearance of the same road on a foggy day. As an integration of these two tasks, panoptic-level IIT is no easier than either instance-level or semantic-level IIT. Completing panoptic-level IIT requires completing both the instance-level and the semantic-level IIT first. After that, ensuring the fusion of the instance-level and semantic-level results is consistent is another interesting and challenging topic worth investigating.

To solve the proposed panoptic-level IIT task, we design a systematic framework as a baseline, with modules that tackle translation of things and stuff respectively. To be more specific, the proposed framework includes a thing-related and a stuff-related attribute augmentation module. The transfiguration of cars into sheep is an example handled by the thing-related attribute augmentation module. After passing through this first module, the instance-translated images are then converted by the stuff-related attribute augmentation module. During this process, the style of the specified stuff is replaced with the style of a region that has the same semantic meaning in another domain. We perform the evaluation on the COCO [9], Cityscapes [10], and SYNTHIA [11] datasets. The COCO dataset was originally annotated for object detection, segmentation, and captioning, and contains many categories of thing objects. The Cityscapes and SYNTHIA datasets provide real and synthetic images respectively with ground-truth semantic segmentation. The evaluation results indicate that our proposed baseline achieves the desired synthesized results for this novel task. In a nutshell, the major contributions of this paper can be summarized as:

  • We define and formulate a new task named panoptic-level image-to-image translation. We hope the formulation of this panoptic-level task can inspire other researchers to investigate the problem in different ways and outperform our baseline.

  • We propose a simple but intuitive baseline for solving this task, consisting of thing-related attribute augmentation and stuff-related attribute augmentation.

  • We demonstrate the performance of the proposed baseline using the COCO, SYNTHIA, and Cityscapes datasets.

2 Related Work

2.1 Image Style-Level Image-to-Image Translation

Pix2Pix [12] was the first network to tackle IIT with a conditional GAN and an L1 loss. However, Pix2Pix is only capable of generating low-resolution translated images. Pix2PixHD [13] improves Pix2Pix by introducing a multiscale discriminator, a more robust loss function, and a more powerful generator. Nevertheless, these works only focus on paired IIT. CycleGAN [14] solves unpaired IIT by introducing the cycle-consistency loss. Similarly, the cycle-consistency loss is also applied in DualGAN [15]. The network architectures of its two generators are identical: each generator is U-shaped, with skip connections between the upsampling and downsampling parts. To better learn unsupervised IIT, UNIT [16] makes a shared latent space assumption: two different image domains share a common latent space, and given the shared latent code, images belonging to either domain can be recovered from it. The shared latent space assumption also implies the cycle-consistency constraint proposed in previous methods. Unlike the assumption in UNIT, MUNIT [17] assumes that the latent space between different domains is only partially shared. Before MUNIT, IIT was one-to-one; that is, the translation results were not diverse. MUNIT solves this multimodal problem with within-domain reconstruction and cross-domain translation. In addition, IIT can also be tackled with disentangled representations [18].

Figure 2: An overall view of the proposed panoptic-level image-to-image translation. A refers to the thing-related attributes from the source domain X. Y refers to the target domain containing the corresponding thing attributes that A is translated to. TRA and SRA stand for thing-related augmentation as described in Section 3.3 and stuff-related augmentation as described in Section 3.4. X_N refers to the intermediate translation domain after TRA. D refers to the target domain used to conduct semantic-attribute augmentation on the source attributes from the source domain X.
Figure 3: Training for thing-related attribute augmentation. x refers to the input image whose thing-related attributes a will be translated. G_x and G_a represent the generators for the image and the thing-related attributes respectively. y and b refer to the generated image and the translated thing-related attributes respectively. D_Y represents the discriminator for image domain Y.
Figure 4: Training for stuff-related attribute augmentation. x refers to the input image whose stuff-related attributes will be translated. E^c_X and E^s_X represent the content encoder and style encoder for domain X respectively. E^c_Y and E^s_Y represent the content encoder and style encoder for domain Y respectively. G_X and G_Y represent the generators for image domains X and Y respectively. D_Y represents the discriminator for image domain Y. c_x and c_y refer to the content codes produced by E^c_X and E^c_Y. s_x refers to the style code produced by E^s_X. s_y represents a random style code sampled from domain Y, while the style code reconstructed by E^s_Y is compared against it.

2.2 Instance-Level Image-to-Image Translation

Instance-level IIT is more challenging than image style-level IIT. Image style-level IIT is mostly about preserving the image content while adjusting the style to a new image domain; a major part of style translation is adjusting the image contrast. However, instance-level IIT sometimes requires the generation of parts in the target domain that do not exist in the source image. Instance-level image translation is a new research topic proposed by InstaGAN [19]. InstaGAN applies a sequential mini-batch training mechanism, which reduces the GPU memory constraints when there are multiple instances. In terms of the training loss, InstaGAN combines multiple losses to improve training, including the GAN loss for the domain, the cycle-consistency loss, the identity mapping loss, and the context preserving loss. Furthermore, another work [20] defines the style code as consisting of three parts: object style, background style, and global style. This work is based on the MUNIT framework [17], but improves MUNIT with an instance-level GAN loss and an instance-level reconstruction loss.

2.3 Panoptic-Level Image-to-Image Translation

To the best of our knowledge, we are the first to formulate the definition of the panoptic-level IIT task and propose a baseline, aiming to inspire further research on this specific task. The panoptic-level IIT is inspired by recent panoptic segmentation research [21]. To help readers gain a better understanding of panoptic-level IIT, methods for solving panoptic segmentation are reviewed here. Cell R-CNN [22] attempts to solve this problem by unifying a semantic segmentation framework and an instance segmentation framework [23]. The semantic branch of Cell R-CNN applies a global convolution network (GCN), the instance branch modifies the existing Mask R-CNN framework, and the two branches share the same backbone. Recently, many research works have focused on the fusion of semantic and instance segmentation using object-level and instance-level attentions [24]. UPSNet [25] integrates semantic segmentation with instance segmentation through a panoptic head, which is designed to merge the information from both the semantic logits and the mask logits. Instead of using ordinary convolutions, the semantic head of UPSNet applies deformable convolutions.

3 Methods

3.1 Task Format

Given an image domain X, we translate X to another image domain Y by only augmenting its panoptic attributes S to a set of different target attributes T originating from different image domains {D_1, …, D_M}. Formally, the translated image domain can be formulated as:

Y = (X \ S) ∪ T, (1)

where T refers to the set of target attributes from the different domains {D_i}, and Y represents the generated image domain with the original attributes S replaced by T.

In detail, the set of panoptic attributes S represents the source elements to be translated in X: S = {s_1, s_2, …, s_M}. These source elements are categorized based on the panoptic segmentation of the given input image x. That is, a selected source element s_i can either be a thing (instance) or stuff (semantic only). On the other side, T corresponds to M target panoptic elements specified from different image domains: T = {t_1, t_2, …, t_M}. D_i refers to an arbitrary image domain, so t_i does not necessarily originate from the same image domain as t_j for any i ≠ j. Then our proposed panoptic-level IIT task can be simply defined as:

F : (X, S) → (Y, T), with s_i ↦ t_i for i = 1, …, M. (2)

As mentioned before, the panoptic attributes can be divided into two groups: 1) the stuff-related semantic attribute set and 2) the thing-related instance attribute set. Different translation rules should be applied to the different panoptic attribute sets based on their properties. The translation rules for the two attribute sets are defined as follows:

  • If the source element s_i is considered as a semantic set, i.e., instances are ignored, the semantic content of s_i should be preserved while only translating its style to that of the target domain D_i. That is, we require the target element t_i to have the same semantic content as s_i.

  • If instances are considered when translating s_i, i.e., s_i is countable, t_i can be an instance attribute of different semantic meaning. Intuitively, the target element t_i can have a different shape from the source element s_i under this circumstance. That is, in addition to transferring style, we also aim to transfigure the shapes of instance objects.
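The two rules above can be captured in a small, hypothetical data structure; the field names and example categories below are illustrative and not part of the formal task definition.

```python
from dataclasses import dataclass

@dataclass
class PanopticElement:
    """One source element s_i paired with its target t_i (names are illustrative)."""
    category: str          # e.g. "sky" or "car"
    target_category: str   # e.g. "sky" (style only) or "sheep" (transfiguration)
    target_domain: str     # image domain the target attribute comes from
    is_thing: bool         # True -> countable instance, False -> stuff

    def rule(self) -> str:
        # Stuff keeps its semantic content and shape; only the style is translated.
        # Things may change both shape and semantic meaning (transfiguration).
        return "transfigure" if self.is_thing else "style-transfer"

elements = [
    PanopticElement("sky", "sky", "SYNTHIA", is_thing=False),
    PanopticElement("car", "sheep", "COCO-sheep", is_thing=True),
]
```

Grouping a request for panoptic-level IIT this way makes it explicit which translation rule applies to each element.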

3.2 Proposed Baseline

When M > 1 and S combines both thing-related and stuff-related attributes, a simple system for the panoptic-level IIT task is to make a reasonable heuristic combination of the outputs of the translation s_i → t_i for each i.

In detail, we group the source elements into stuff and things. We train N thing-related attribute augmentation (TRA) models if there are N thing-related source elements. We will discuss the TRA model in Section 3.3. The first TRA model is trained on the original data X. Before training the next TRA model, we translate the current image using the learnt n-th model and then pass the generated image x_n as the input data to the next TRA model. The final translated image after the N translations is x_N.

For the stuff-related attributes, if the corresponding targets are from A different image domains, we train A different stuff-related attribute augmentation (SRA) models to learn all A image style-level mappings X → D_a. The SRA model will be elaborated in Section 3.4. Unlike the sequential order used when training the TRA models, the training of the SRA models can be conducted simultaneously with the same input x_N.

To produce the final translated image, we apply the translated results of the SRA models to x_N, which can be reformulated as:

x̃(p) ← m_a(p) ŷ_a(p) + (1 − m_a(p)) x̃(p), for a = 1, …, A, (3)

where p represents the pixel location, m_a refers to the a-th binary segmentation mask for the stuff-related attribute, and ŷ_a represents the a-th translated image from domain X to domain D_a produced by the corresponding SRA model; the update is repeated A times, starting from x̃ = x_N. The overall workflow is presented in Fig. 2. We first translate all the thing-related attributes using a set of TRA models. Then the partially translated image is passed to the SRA models to obtain the new semantic styles for the stuff-related attributes. This reasonable heuristic combination of the outputs of these two sets of attribute augmentation modules is simple but efficient as a baseline.
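The two-stage procedure above can be sketched as follows. This is a minimal illustration with stub callables standing in for the trained TRA and SRA networks; all function names are ours, not from the paper.

```python
import numpy as np

def run_baseline(x, tra_models, sra_models, stuff_masks):
    """Sketch of the proposed baseline pipeline (illustrative only).

    tra_models:  list of N thing-translation callables, applied sequentially
                 so model n+1 consumes the output of model n.
    sra_models:  list of A stuff-translation callables, each run independently
                 on the same thing-translated image x_N.
    stuff_masks: list of A binary masks m_a selecting each stuff attribute.
    """
    # Thing-related augmentation: sequential chain x_0 -> x_1 -> ... -> x_N.
    for tra in tra_models:
        x = tra(x)
    # Stuff-related augmentation: Eq. (3), repeated A times.
    for m, sra in zip(stuff_masks, sra_models):
        y = sra(x)                  # a-th style-translated image
        x = m * y + (1.0 - m) * x   # keep everything outside the mask intact
    return x
```

Because each SRA result is only pasted in where its mask is active, the stuff translations can be computed in any order (or in parallel) without changing the output.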

3.3 Thing-related Attribute Augmentation

If the source scene element s is countable, we can either translate s to a target t of the same semantic meaning following the semantic style transfer described in Section 3.4, or transfigure s into another object t which has different semantic content and comes from another image domain Y. It should be noted that Y could be the same as X.

Assume that the target object appears in domain Y and N is 1. We deploy InstaGAN [19] as our TRA model to learn the mapping (X, A) → (Y, B), where A and B denote the sets of thing-related attributes. The InstaGAN model is trained from unpaired data. The thing-related set of attributes A is fed into the model sequentially along with the training data to incorporate the transfiguration information, using the sequential mini-batch translation mechanism proposed in InstaGAN. Consistent with InstaGAN, the thing-related attributes are defined by the instance annotation masks; hence, A is a set of instance segmentations of the thing-related attributes. As shown in Fig. 3, the source image x and its attributes a are encoded respectively into feature vectors. We aim to translate each instance object to another instance object correspondingly. Thus, the encoded source image and the summation of the encoded instance attributes are incorporated together with each individual instance attribute to generate the new instance attribute b in domain Y. Taking the feature vectors of the instance attributes into account, the generated image pays more attention to the instance objects. A discriminator D_Y is then deployed to tell whether the generated result is realistic enough to be considered an image from domain Y. Again, the instance attributes are incorporated as additional information in the input of D_Y. The process for the reverse mapping is defined similarly, as in Fig. 3. Finally, the generated image is expected to preserve the information of the source image except for the target scene element specified by the thing-related attributes.
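The order-invariant way the instance masks enter the generator can be illustrated with a toy feature aggregation. The encoders here are random linear maps purely for illustration; only the sum-then-concatenate structure reflects the mechanism described above.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(8, 4))   # toy image-encoder weights (illustrative)
W_a = rng.normal(size=(8, 4))   # toy attribute (mask) encoder weights

def encode_image(x):
    return W_x @ x

def encode_mask(a):
    return W_a @ a

def context_feature(x, masks):
    """Image feature plus a permutation-invariant sum of instance-mask features.

    Summing over the set of masks makes the representation independent of the
    order in which instances are fed in, which is what allows instances to be
    processed sequentially in mini-batches.
    """
    h_x = encode_image(x)
    h_a = sum(encode_mask(a) for a in masks)
    return np.concatenate([h_x, h_a])
```

Translating one instance then only requires combining this shared context with that instance's own encoded mask.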

3.4 Stuff-related Attribute Augmentation

Many representations can be deployed to express the attributes. Intuitively, we use a binary segmentation mask m to identify the stuff-related semantic attribute. Suppose A is 1. Given that the selected source element s is uncountable, we require that the target element t should be of the same semantic meaning as s but from another image domain Y. We first learn the image style-level mapping between X and Y by training the unsupervised translation model MUNIT [17].

We also make the partially shared latent space assumption proposed in MUNIT to encourage a non-deterministic mapping between X and Y. Sharing the same content space, the distributions of the two domains can be estimated using their respective style codes. Consistent with MUNIT, we also assume the style code is sampled from the prior distribution N(0, I). Then, given a content code c_x of x and a random style code s_y, the generator G_Y for domain Y is trained to generate a synthesized image from domain Y. A discriminator D_Y is required to guide the generative adversarial process.

Following the partially shared latent space assumption proposed in MUNIT, the conditional probabilities p(y|x) and p(x|y) of the unpaired data x and y are learnt by the adversarial losses, and the joint loss is defined as:

L = L_GAN^X + L_GAN^Y + λ_x (L_recon^x + L_recon^y) + λ_c (L_recon^{c_x} + L_recon^{c_y}) + λ_s (L_recon^{s_x} + L_recon^{s_y}), (4)

where λ_x, λ_c, and λ_s are hyperparameters that control the weights of each part of the loss. The two adversarial losses L_GAN^X and L_GAN^Y encourage the generated images to be indistinguishable from the respective target image domains. To learn each pair of encoder and decoder/generator, reconstruction losses are proposed in MUNIT, including the reconstruction of the image x, the content code c_x, and the style code s_y. The reconstruction of y, c_y, and s_x, as well as the reverse translation process, is defined similarly with the encoder E_X, generator G_X, and discriminator D_X. The training process of the X → Y direction is illustrated in Fig. 4. The encoders project the input image x to the style and content latent spaces respectively. Then, given the content code c_x and a random style code s_y, the generator G_Y generates a translated image, which is further examined by the discriminator D_Y.
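Assuming the individual loss terms have already been computed, the weighted combination in Eq. (4) reduces to a simple sum. The function below is an illustrative sketch (names are ours), with defaults matching the 10/1/1 weighting used in the experiments.

```python
def munit_total_loss(gan_x, gan_y, rec_img, rec_content, rec_style,
                     lambda_x=10.0, lambda_c=1.0, lambda_s=1.0):
    """Weighted joint objective of Eq. (4); inputs are pre-computed scalars.

    gan_x / gan_y: adversarial losses for the two image domains.
    rec_img / rec_content / rec_style: summed image, content-code, and
    style-code reconstruction terms (each already covering both domains).
    """
    return (gan_x + gan_y
            + lambda_x * rec_img
            + lambda_c * rec_content
            + lambda_s * rec_style)
```

In practice each term would come from the discriminator outputs and the encoder/decoder reconstructions; only the weighting is shown here.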

After training, we feed the source image x and a random style code s_y into the trained generator G_Y to generate the image ŷ from the target domain Y. We then incorporate the stuff-related attribute mask m into ŷ to obtain the panoptic translation for x. Note that the shape of the translated stuff attribute is the same as in x, while its intensity is augmented, indicating the style transfer from X to Y. Finally, the panoptic-level translated image y is defined as:

y(p) = m(p) ŷ(p) + (1 − m(p)) x(p), (5)

where p represents the pixel location.
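Eq. (5) is a per-pixel alpha blend under a binary mask. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def panoptic_blend(x, y_hat, mask):
    """Eq. (5): copy the translated stuff region, keep the rest of the image.

    x:     source image, shape (H, W) or (H, W, C)
    y_hat: image generated from x and a random style code
    mask:  binary stuff segmentation m with the same spatial size
    """
    mask = mask.astype(x.dtype)
    return mask * y_hat + (1.0 - mask) * x
```

Because the mask is binary, every pixel comes from exactly one of the two images, so the non-stuff regions are preserved bit-for-bit.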

Figure 5: Panoptic-level image-to-image translation: (a) segmentation ground truth of the stuff-related attributes for images from the input domain; (b) images from the input domain; (c) translation from car to sheep using the learnt mapping; (d-f) panoptic-level image-to-image translation results with different styles of the sky region generated by the proposed baseline.

4 Experiment

4.1 Datasets and Implementation Details

In this experiment, we translate two panoptic attributes from one domain to two other domains respectively to demonstrate our proposed baseline. The two panoptic attributes consist of one stuff-related attribute, sky, and one thing-related attribute, car. Our baseline therefore consists of one TRA model and one SRA model. The training of these two models is specified separately as follows.

The semantic style transfer translation is from a real sky scene to a synthetic one. We train this translation using the real-world street scene dataset Cityscapes [10] and the synthetic image dataset SYNTHIA [11]. In particular, we use the SYNTHIA-RAND-CITYSCAPES subset, which corresponds to the Cityscapes street scenes. We use all 2975 images from the Cityscapes training set as training images for one domain and 6196 random images from SYNTHIA-RAND-CITYSCAPES as training images for the other domain. The size of the Cityscapes images is 1024 × 2048 (height × width), while that of the SYNTHIA images is 760 × 1280. We resize the images from both datasets to a common resolution during the training of the SRA model for the sky translation. This training process lasts 10800 iterations. We set the hyperparameters λ_x, λ_c, and λ_s to 10, 1, and 1 respectively.

To demonstrate the thing-related attribute augmentation, we use images with cars and images with sheep from the MS COCO dataset [9], since the two objects, car and sheep, vary in both shape and style. To train the TRA model, we use all the images containing the specified objects from the 118287 COCO training images: 10775 images form the car-domain training set and 1516 images form the sheep-domain training set. All images are resized to a smaller fixed resolution for efficient training. The training process lasts 45 epochs.

Both models were trained from scratch using the Adam optimizer [26] with batch size 1 on a single GPU. We use 468 images with cars and 64 images with sheep from the MS COCO validation set as testing images for our overall system. The stuff-related attribute sky is translated using 10 random styles learnt from the synthetic dataset, while the thing-related attribute car is translated to sheep by applying the learnt generator and encoder.

4.2 Results and Discussion

We present some of our panoptic-level translation results in Fig. 5, which shows 6 of the 468 testing images with cars. The attribute sky is indicated by the corresponding segmentation masks shown in column (a). Column (b) presents the original images from the MS COCO validation images containing cars. The results after thing-related attribute augmentation are displayed in column (c). We present 3 of the 10 random style translation results for the stuff-related augmentation in the last three columns, (d), (e), and (f).

As shown in columns (b) and (c), the cars in the original images have been translated into sheep successfully. It is interesting to see that during the translation, the feet of a sheep grow where the tires of a car were. The car-to-sheep translation should happen for every car instance in the source image. In the illustration, the number of cars per image varies from 1 (the third row) to 9 (the last row). As clearly shown in the first image, each car has been translated to a sheep accordingly. However, under some circumstances, the translation might only be partially successful. Compared to the first image, in the fourth image only the car in the front has been successfully translated into a sheep; the translation of the cars near the back is blurred. The reason might be that those cars are far away, so the objects of interest are small-scale compared to the cars in the first image. It is challenging to preserve the outline of small-scale objects during IIT.

Apart from the successful translation between two objects, the surrounding environment should be kept intact. As shown in the first image, the overall image has been preserved except for the car; for example, the white road lines still exist. However, there are minor artifacts in the surrounding environment. Looking carefully at the images in columns (b) and (c), even though the cars have been successfully translated, the rest of the image is also affected. In particular, the areas near the objects of interest suffer the most; for example, the road tends to turn green, which is especially prominent in the last two images. This is due to the fact that sheep typically appear with grass. Hence, more constraints should be applied to keep the neighbouring content intact.

Since we use a random subset of the SYNTHIA dataset, the training images are not consistent in season, weather, or illumination conditions. Therefore, there is no specific pattern in the translated animation-like styles in columns (d) to (f). Nevertheless, our current focus is on the ability to translate the style of the stuff-related attributes at all, and the visual effect of the stuff-related attribute augmentation is encouraging. This is the first basic step.

5 Conclusion

In this paper, we propose a novel task named panoptic-level image-to-image translation, which translates any combination of a set of specific attributes in an image to other image domains. Instead of changing the entire style to another domain as most current approaches do, our proposed task expects panoptic-level translation, meaning that the way each attribute is translated can be different and independent. Corresponding rules for translating the different types of attributes are defined to make the task feasible and meaningful. For uncountable stuff attributes, only the style is translated while the semantic meaning is preserved, given that the shape of stuff attributes is consistent among different image domains. For countable thing attributes, we aim to transfigure the current object into another object, with both style and shape being changed. Simple and efficient, our proposed system makes a heuristic combination of the outputs of two well-performing networks. Evaluated on several common datasets, our proposed framework achieves panoptic-level image-to-image translation in a consistent manner.

References

  • [1] Aayush Bansal, Shugao Ma, Deva Ramanan, and Yaser Sheikh. Recycle-GAN: Unsupervised Video Retargeting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 119–135, 2018.
  • [2] David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, and Antonio Torralba. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
  • [3] Yang Chen, Yu-Kun Lai, and Yong-Jin Liu. CartoonGAN: Generative Adversarial Networks for Photo Cartoonization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9465–9474, 2018.
  • [4] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful Image Colorization. In European Conference on Computer Vision (ECCV), pages 649–666. Springer, 2016.
  • [5] Pierre-Yves Laffont, Zhile Ren, Xiaofeng Tao, Chao Qian, and James Hays. Transient Attributes for High-Level Understanding and Editing of Outdoor Scenes. ACM Transactions on Graphics (TOG), 33(4):149, 2014.
  • [6] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context Encoders: Feature Learning by Inpainting. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pages 2536–2544, 2016.
  • [7] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image Style Transfer Using Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2414–2423, 2016.
  • [8] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In European Conference on Computer Vision (ECCV), pages 694–711. Springer, 2016.
  • [9] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision (ECCV), pages 740–755. Springer, 2014.
  • [10] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3213–3223, 2016.
  • [11] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3234–3243, 2016.
  • [12] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1125–1134, 2017.
  • [13] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8798–8807, 2018.
  • [14] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2223–2232, 2017.
  • [15] Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised Dual Learning for Image-to-Image Translation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2849–2857, 2017.
  • [16] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised Image-to-Image Translation Networks. In Advances in Neural Information Processing Systems (NIPS), pages 700–708, 2017.
  • [17] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal Unsupervised Image-to-Image Translation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 172–189, 2018.
  • [18] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse Image-to-Image Translation via Disentangled Representations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 35–51, 2018.
  • [19] Sangwoo Mo, Minsu Cho, and Jinwoo Shin. InstaGAN: Instance-aware Image-to-Image Translation. In International Conference on Learning Representations (ICLR), 2019.
  • [20] Zhiqiang Shen, Mingyang Huang, Jianping Shi, Xiangyang Xue, and Thomas S Huang. Towards Instance-Level Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3683–3692, 2019.
  • [21] Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, and Piotr Dollár. Panoptic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9404–9413, 2019.
  • [22] Donghao Zhang, Yang Song, Dongnan Liu, Haozhe Jia, Siqi Liu, Yong Xia, Heng Huang, and Weidong Cai. Panoptic Segmentation with an End-to-End Cell R-CNN for Pathology Image Analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 237–244. Springer, 2018.
  • [23] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
  • [24] Yanwei Li, Xinze Chen, Zheng Zhu, Lingxi Xie, Guan Huang, Dalong Du, and Xingang Wang. Attention-Guided Unified Network for Panoptic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7026–7035, 2019.
  • [25] Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, and Raquel Urtasun. UPSNet: A Unified Panoptic Segmentation Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8818–8826, 2019.
  • [26] Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. Proceedings of the International Conference on Learning Representations (ICLR), 2014.