Generating Instance Segmentation Annotation by Geometry-guided GAN

01/26/2018 ∙ by Wenqiang Xu, et al. ∙ Shanghai Jiao Tong University

Instance segmentation is a problem of significance in computer vision. However, preparing annotated data for this task is extremely time-consuming and costly. By combining the advantages of 3D scanning, physical reasoning, and GAN techniques, we introduce a novel pipeline named Geometry-guided GAN (GeoGAN) to obtain large quantities of training samples with only minor annotation effort. Our pipeline is well-suited to most indoor and some outdoor scenarios. To evaluate its performance, we build a new Instance-60K dataset covering various common object categories. Extensive experiments show that our pipeline achieves decent instance segmentation performance at a very low human annotation cost.

1 Introduction

Instance segmentation [1, 2] is one of the fundamental problems in computer vision; it provides many more details than object detection [3] or semantic segmentation [4]. With the development of deep learning, significant progress has been made in instance segmentation, and several large-scale annotated datasets have been proposed [5, 6]. However, in practice, when facing a new environment with many new objects, large-scale training data collection and annotation is inevitable, which is cost-prohibitive and time-consuming.

Researchers have longed for a means of generating numerous training samples with minor effort. Computer graphics simulation is a promising direction, since a 3D scene can be a source of unlimited photorealistic images paired with ground truths, and modern simulation techniques can synthesize most indoor and outdoor scenes with perceptual plausibility. Nevertheless, these advantages are double-edged: making a simulated scene visually realistic enough for the rendered images to be useful is painstaking [7, 8, 9]. Moreover, in a new environment, it is very likely that some real objects are missing from the 3D model database.

Figure 1: Compared with human labeling (red), our pipeline (blue) reduces human labor by nearly 2000-fold while achieving reasonable instance segmentation accuracy. 77.02 (ours) and 86.02 (human labeling) are the average mAP@0.5 over the three scenes.

We present a new pipeline that attempts to address these challenges. Our pipeline comprises three stages: scanning, physics reasoning, and domain adaptation (SRDA), as shown in Fig. 1. At the first stage, new objects and the environmental background of a given scene are scanned into 3D models. Unlike other CG-based methods that simulate with existing model databases, the images synthesized by our pipeline are realistic and describe the target environment well, since we use real-world scanned data. At the reasoning stage, we propose a reasoning system that generates proper layouts for each scene by considering both physical and commonsense plausibility: a physics engine ensures physical plausibility, and commonsense plausibility is scored by a commonsense likelihood (CL) function. For example, "a mouse on a mouse pad, both on a table" yields a high CL value. At the last stage, we propose a novel Geometry-guided GAN (GeoGAN) framework. It integrates geometry information (segmentation as an edge cue, surface normal, and depth), which helps to generate more plausible images. In addition, it includes a new component, the Predictor, which serves both as a useful auxiliary supervision and as a criterion to score the visual quality of images.

The major advantage of our pipeline is that it saves time. Compared with conventional exhaustive annotation, we reduce labor cost by nearly 2000-fold while achieving decent accuracy, preserving about 90% of the fully supervised performance (see Fig. 1). The most time-consuming stage is scanning, which is easy to accomplish in most indoor and some outdoor scenarios.

Our pipeline is applicable to a wide range of scenarios. We choose three representative scenes: a shelf from a supermarket (for self-service supermarkets), a desk from an office (for home robots), and a tote similar to the one used in the Amazon Robotic Challenge (https://www.amazonrobotics.com/#/roboticschallenge).

To the best of our knowledge, no current dataset provides compact 3D object/scene models together with real scene images carrying instance segmentation annotations. Hence, we build a dataset to prove the efficacy of our pipeline. The dataset has two parts: one for scanned object models (the SOM dataset) and one for real scene images with instance-level annotations (Instance-60K).

Our contributions are two-fold:

  • The main contribution is the novel three-stage SRDA pipeline. We add a reasoning system for feasible layout building and propose a new domain adaptation framework named GeoGAN. The pipeline is time-saving, and its output images are close to real ones according to our evaluation experiments.

  • To demonstrate its effectiveness, we build a database containing 3D models of common objects and their corresponding scenes (the SOM dataset) together with scene images carrying instance-level annotations (Instance-60K).

We first review related concepts and works in Sec. 2 and then describe the whole pipeline from Sec. 3 onward: the scanning process in Sec. 3, the reasoning system in Sec. 4, and the GAN-based domain adaptation in Sec. 5. In Sec. 6, we illustrate how the Instance-60K dataset is built. Extensive evaluation experiments are reported in Sec. 7. Finally, we discuss the limitations of our pipeline in Sec. 8.

2 Related Works

Instance Segmentation Instance segmentation has become a hot topic in recent years. Dai et al. [1] proposed a multi-stage cascaded network that performs detection, segmentation, and classification in sequence. Li et al. [2] combined a segment proposal system and an object detection system, simultaneously producing object classes, bounding boxes, and masks. Mask R-CNN [10] supports multiple tasks including instance segmentation, object detection, and human pose estimation. However, exhaustive labeling is still required to guarantee satisfactory performance when these methods are applied to a new environment.

Generative Adversarial Networks Since their introduction by Goodfellow et al. [11], GAN-based methods have produced fruitful results in various fields, such as image generation [12], image-to-image translation [13], and 3D model generation [14]. The work on image-to-image translation inspired ours: it indicates that GANs have the potential to bridge the gap between the simulation domain and the real domain.

Image-to-Image Translation A general image-to-image translation framework was first introduced by Pix2Pix [15], but it requires a large amount of paired data. Chen and Koltun [16] proposed a cascaded refinement network free of adversarial training that produces high-resolution results, but it still demands paired data. Taigman et al. [17] proposed an unsupervised approach to learn cross-domain conversion, but it needs a pre-trained function that maps samples from the two domains into an intermediate representation. Dual learning [13, 18, 19] was soon adopted for unpaired image translation, but current dual-learning methods encounter setbacks when the camera viewpoint or object position varies. In contrast to CycleGAN, Benaim et al. [20] learn a one-sided mapping. Refining rendered images with GANs has also been explored [21, 22, 23]. Our work is complementary to these approaches, as we deal with more complex data and tasks. We compare [22, 23] with our GeoGAN in Sec. 7.

Synthetic Data for Training Some researchers generate synthetic data for vision tasks such as viewpoint estimation [24], object detection [25], and semantic segmentation [26]. In [27], Alhaija et al. address the generation of instance segmentation training data for street scenes, with technical effort devoted to realistically rendering and positioning cars. However, they focus on street scenes and do not use an adversarial formulation.

Scene Generation by Computer Graphics Scene generation with CG techniques is a well-studied area in the computer graphics community [28, 29, 30, 31, 32]. These methods are capable of generating plausible layouts of indoor or outdoor scenes, but they do not attempt to transfer the rendered images to the real domain.

3 Scanning Process

In this section, we describe the scanning process. Objects and scene backgrounds are scanned in two different ways due to their difference in scale.

We choose the Multi-View Environment (MVE) [33] to perform dense reconstruction of objects, since it is image-based and thus requires only an RGB sensor. Objects are first videotaped, which can easily be done with most RGB sensors; in our experiments we use an iPhone 5s. The videos are sliced into images from multiple viewpoints and fed into MVE to generate 3D models. We can videotape several objects (at least 4) at a time and generate the corresponding models together, which alleviates the scalability issue when there are too many new objects to scan one by one. MVE generates dense meshes with fine texture. For texture-less objects, we scan the object while it is held in the hand; the hand-object interaction is a useful cue for reconstruction, as indicated in [34].
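For concreteness, below is a minimal sketch of the video-to-frames step that precedes MVE, assuming OpenCV; the sampling stride, file names, and the function name are illustrative and not part of the paper's toolchain.

```python
# A minimal sketch of the video-to-frames step that feeds MVE, assuming OpenCV.
# The sampling stride and output naming are illustrative, not the paper's settings.
import os
import cv2

def slice_video(video_path: str, out_dir: str, every_n_frames: int = 10) -> int:
    """Save every n-th frame of a videotaped object as an image for MVE."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of the video
            break
        if idx % every_n_frames == 0:    # keep a sparse set of viewpoints
            cv2.imwrite(os.path.join(out_dir, f"view_{saved:04d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example (hypothetical file names): slice_video("cup.mov", "cup_views/")
```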

For the environmental background, scenes without the target objects were scanned with an Intel RealSense R200 and reconstructed with ReconstructMe (http://reconstructme.net/), following the official instructions.

The capture resolution is 1920×1080 for the iPhone 5s and 640×480 at 60 FPS for the R200. The remaining settings are left at their defaults.

Figure 2: Representative environmental backgrounds, object models, and corresponding label information.

4 Layout Building With Reasoning

Figure 3: The scanned objects (a) and background (b) are fed into a rule-based reasoning system (c) to generate physically plausible layouts. The upper part of (c) shows the random scheme and the lower part the rule-based scheme. Finally, the system outputs rough RGB images and the corresponding annotations (d).

4.1 Scene Layout Building With Knowledge

With the 3D models of objects and the environmental background at hand, we are ready to generate scenes with our reasoning system. A proper scene layout must obey physical laws and human conventions. To make scenes physically plausible, we use an off-the-shelf physics engine, Project Chrono [35]. Making the object layout convincing is less straightforward, however, and some commonsense knowledge must be incorporated. To produce a feasible layout, object poses and locations have to be reasonable. For example, a cup usually "stands up" rather than "lies down", and it is usually on a table rather than on the ground. This prior is common daily knowledge that cannot be captured by physics reasoning alone. We therefore describe below how the pose and location priors are annotated.

Pose Prior: For each object, we show annotators its 3D model in a 3D graphics environment and ask them to draw all the poses they can imagine the object taking. For each possible pose, the annotator suggests a probability that this pose would occur. We record the probability of object $o$ being in pose $p$ as $P_{\mathrm{pose}}(o, p)$, and use interpolation so that most poses have a corresponding probability value.

Location Prior: Similarly to the pose prior, we show annotators the environmental background in a 3D graphics environment and ask them to label all the locations where an object may be placed. For each possible location, the annotator suggests a probability that the object would be placed there. We denote the probability of object $o$ being at location $l$ as $P_{\mathrm{loc}}(o, l)$, and use interpolation so that most locations have a corresponding probability value.

Relationship Prior: Some objects have a strong co-occurrence prior; for example, a mouse is usually close to a laptop. Given an object name list, we use a language prior to select the set of object pairs with high co-occurrence probability, which we call occurrence object pairs (OOPs). For each OOP, an annotator suggests a probability of co-occurrence of the pair. For objects $o_i$ and $o_j$, their probability of co-occurrence is denoted as $P_{\mathrm{occ}}(o_i, o_j)$, and a suggested distance (given by annotators) as $d(o_i, o_j)$.

Note that such annotation may be subjective, but we only need a prior to guide layout generation. Extensive experiments show that rough subjective labeling is sufficient for producing satisfactory results; the experimental details are reported in the supplementary file.

4.2 Layout Generation by Knowledge

We generate layouts by considering both physical laws and human conventions. First, we randomly generate a layout and check its physical plausibility with Chrono; if it is not physically reasonable, we reject it. Second, we check its commonsense plausibility using the three priors above. In detail, we denote by $c_i$, $p_i$ and $l_i$ the category, pose and 3D location of the $i$-th object in the scene layout. The likelihood of poses is expressed as

$$L_{\mathrm{pose}} = \prod_{i} P_{\mathrm{pose}}(c_i, p_i). \qquad (1)$$

The likelihood of locations is written as

$$L_{\mathrm{loc}} = \prod_{i} P_{\mathrm{loc}}(c_i, l_i). \qquad (2)$$

The likelihood of occurrence for object pairs is

$$L_{\mathrm{occ}} = \prod_{(i,j)} P_{\mathrm{occ}}(c_i, c_j)\,\mathcal{N}\!\left(\lVert l_i - l_j \rVert - d(c_i, c_j);\ \sigma\right), \qquad (3)$$

where $\mathcal{N}(\cdot;\sigma)$ is a Gaussian function with parameter $\sigma$. The occurrence prior is only computed for pairs whose co-occurrence probability exceeds a threshold.

We define the commonsense likelihood (CL) function of a scene layout as

$$\mathrm{CL} = L_{\mathrm{pose}} \cdot L_{\mathrm{loc}} \cdot L_{\mathrm{occ}}. \qquad (4)$$

We judge commonsense plausibility by $\mathrm{CL}$: if $\mathrm{CL}$ is smaller than a threshold, the corresponding layout is rejected. In this way, we can generate large quantities of layouts that are both physically and commonsense plausible.
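To make the rejection procedure concrete, the following sketch implements the physics-then-commonsense filtering loop. The prior tables, the Gaussian parameter, the thresholds, and the helper callables sample_layout and physically_plausible (standing in for the Chrono check) are all illustrative placeholders, not values or interfaces from the paper.

```python
# A sketch of the layout-rejection loop in Sec. 4.2. All priors and thresholds
# below are hypothetical placeholders.
import math

# Hypothetical annotated priors (Sec. 4.1): pose, location, co-occurrence
# probability, and annotator-suggested distance for occurrence object pairs.
P_pose = {"cup": {"upright": 0.9, "lying": 0.1}}
P_loc = {"cup": {"table": 0.95, "floor": 0.05}}
P_occ = {("mouse", "laptop"): 0.8}
d_occ = {("mouse", "laptop"): 0.3}   # metres, suggested by annotators

def gaussian(x: float, sigma: float) -> float:
    """Unnormalized Gaussian scoring deviation from the suggested distance."""
    return math.exp(-0.5 * (x / sigma) ** 2)

def commonsense_likelihood(layout, sigma=0.2, occ_thresh=0.5) -> float:
    """layout: list of (category, pose, location_label, xyz) tuples."""
    cl = 1.0
    for cat, pose, loc, _ in layout:                      # Eq. (1) and (2)
        cl *= P_pose.get(cat, {}).get(pose, 1e-3)
        cl *= P_loc.get(cat, {}).get(loc, 1e-3)
    for cat_i, _, _, pos_i in layout:                     # Eq. (3)
        for cat_j, _, _, pos_j in layout:
            if P_occ.get((cat_i, cat_j), 0.0) > occ_thresh:
                dist = math.dist(pos_i, pos_j)
                cl *= P_occ[(cat_i, cat_j)] * gaussian(dist - d_occ[(cat_i, cat_j)], sigma)
    return cl

def generate_layouts(sample_layout, physically_plausible, n=1000, cl_thresh=1e-4):
    """Keep random layouts that pass both the physics and the commonsense checks."""
    kept = []
    for _ in range(n):
        layout = sample_layout()                  # random placement
        if not physically_plausible(layout):      # e.g. checked by the physics engine
            continue
        if commonsense_likelihood(layout) >= cl_thresh:
            kept.append(layout)
    return kept
```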

4.3 Annotation Cost

We annotate the scanned models one by one, so the annotation cost scales linearly with the number of scanned object models. Note that only a small set of objects have strong co-occurrence relations (e.g., laptop and mouse), so the complexity of the occurrence annotation is also close to linear. In our experiments, labeling the knowledge for one scanned object model takes about 10 seconds on average, which is minor (about one hour for hundreds of objects).

5 Domain Adaptation With Geometry-guided GAN

Now we have a collection of rough (RGB) images $x$ with their corresponding ground truths: instance segmentation $S$, surface normal $N$, and depth image $D$. Besides, real images captured from the target environment are denoted as $y$; there are $n_x$ rendered samples and $n_y$ real samples. With these data, we can embark on training GeoGAN.

Figure 4: The GDP structure consists of three components, a generator (G), a discriminator (D), and a predictor (P), along with four losses: the LSGAN loss (GAN loss), the structure loss, the reconstruction loss (L1 loss), and the geometry-guided loss (Geo loss).

5.1 Objective Function

GeoGAN has a “GDP” structure, as sketched in Fig. 4, which comprises a generator (G), a discriminator (D), and a predictor (P) that serves as a geometry prior guidance. This structure leads to the design of the objective function, which consists of the four loss terms presented below.

LSGAN Loss We adopt a least-squares generative adversarial objective (LSGAN) [36] to stabilize the training of G and D. The LSGAN adversarial loss can be written as

$$\mathcal{L}_{\mathrm{GAN}}(G, D) = \mathbb{E}_{y}\big[(D(y) - 1)^2\big] + \mathbb{E}_{x}\big[D(G(x))^2\big], \qquad (5)$$

where $x$ and $y$ stand for samples from the rough image domain and the real image domain, respectively.
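A minimal sketch of this adversarial objective in PyTorch is shown below; the 0/1 target convention and the 0.5 scaling follow the common LSGAN formulation and are assumptions rather than details taken from the paper.

```python
# A minimal LSGAN objective (Eq. 5), assuming PyTorch.
import torch

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # D pushes real patches toward 1 and refined (fake) patches toward 0.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # G tries to make D score the refined image as real (toward 1).
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```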

We denote the output of the generator with parameters $\theta_G$ for a rough image $x$ as $\hat{x}$, i.e., $\hat{x} = G(x; \theta_G)$.

Structure Loss A structure loss is introduced to ensure that $\hat{x}$ maintains the original structure of $x$. We adopt the Pairwise Mean Square Error (PMSE) loss from [37]:

$$\mathcal{L}_{\mathrm{str}} = \frac{1}{n_p}\sum_{i} d_i^2 - \frac{1}{n_p^2}\Big(\sum_{i} d_i\Big)^2, \quad d_i = \hat{x}_i - x_i, \qquad (6)$$

where $i$ ranges over the $n_p$ pixels of the image.
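The following PyTorch sketch shows one way to realize this structure loss; the exact weighting of the second term is an assumption based on the scale-invariant loss of [37].

```python
# A sketch of the PMSE structure loss (Eq. 6) in PyTorch.
import torch

def pmse_loss(refined: torch.Tensor, rough: torch.Tensor) -> torch.Tensor:
    """Penalize per-pixel differences while discounting a global offset, so that
    G(x) keeps the structure of x but may change its overall appearance."""
    d = refined - rough                          # difference image, (B, C, H, W)
    n_p = d[0].numel()                           # number of pixel values per sample
    per_pixel = (d ** 2).sum(dim=(1, 2, 3)) / n_p
    global_shift = d.sum(dim=(1, 2, 3)) ** 2 / (n_p ** 2)
    return (per_pixel - global_shift).mean()
```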

Reconstruction Loss To ensure that the geometry information is successfully encoded in the network, we also apply an L1 reconstruction loss to the geometric images:

$$\mathcal{L}_{\mathrm{rec}} = \lVert \hat{S} - S \rVert_1 + \lVert \hat{N} - N \rVert_1 + \lVert \hat{D} - D \rVert_1, \qquad (7)$$

where $\hat{S}$, $\hat{N}$ and $\hat{D}$ are the geometric images reconstructed by the generator's geometry path.

Geometry-guided Loss Given a good geometry predictor, a high-quality image should yield a desirable instance segmentation, depth map, and normal map. This is a useful criterion for judging whether $\hat{x}$ is qualified: an unqualified image (with artifacts or distorted structure) induces a large geometry-guided loss (Geo loss).

To achieve this, we pretrain the predictor with the following objective:

$$\min_{\theta_P}\ \lVert P_S(x; \theta_P) - S \rVert_1 + \lVert P_N(x; \theta_P) - N \rVert_1 + \lVert P_D(x; \theta_P) - D \rVert_1, \qquad (8)$$

which means that, given an input image $x$ and parameters $\theta_P$, the predictor outputs an instance segmentation $P_S(x)$, a normal map $P_N(x)$, and a depth map $P_D(x)$, respectively. In the first few iterations, the predictor is pretrained on the rough images $x$. Once the generator starts to produce reasonable results, $\theta_P$ can also be updated with $\hat{x}$. The predictor is then ready to supervise the generator, which is updated as follows:

$$\mathcal{L}_{\mathrm{geo}} = \lVert P_S(\hat{x}) - S \rVert_1 + \lVert P_N(\hat{x}) - N \rVert_1 + \lVert P_D(\hat{x}) - D \rVert_1. \qquad (9)$$

In this equation, $\theta_P$ is not updated, and the loss is an L1 loss.
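A sketch of this supervision in PyTorch is given below; `predictor` is assumed to return a (segmentation, normal, depth) triple, and $\theta_P$ stays fixed simply because only the generator's optimizer steps during this phase.

```python
# A sketch of the geometry-guided loss (Eq. 9), assuming PyTorch.
import torch
import torch.nn.functional as F

def geo_loss(refined, seg_gt, normal_gt, depth_gt, predictor) -> torch.Tensor:
    # Gradients flow back through the predictor into the generator only;
    # theta_P stays fixed because only the generator's optimizer steps here.
    seg_pred, normal_pred, depth_pred = predictor(refined)
    return (F.l1_loss(seg_pred, seg_gt)
            + F.l1_loss(normal_pred, normal_gt)
            + F.l1_loss(depth_pred, depth_gt))
```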

Overall Objective Function In sum, our objective function can be expressed as

$$\mathcal{L} = \lambda_{\mathrm{GAN}}\,\mathcal{L}_{\mathrm{GAN}} + \lambda_{\mathrm{str}}\,\mathcal{L}_{\mathrm{str}} + \lambda_{\mathrm{rec}}\,\mathcal{L}_{\mathrm{rec}} + \lambda_{\mathrm{geo}}\,\mathcal{L}_{\mathrm{geo}}. \qquad (10)$$

This objective function reveals the iterative nature of our GeoGAN framework, as sketched in Fig. 5.

Figure 5: Iterative optimization framework. As training proceeds, G, D, and P are updated in turn; while one component is being updated, the other two are fixed.
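The sketch below illustrates one alternating G/D/P update under the assumptions made in the earlier loss sketches (lsgan_*, pmse_loss, geo_loss) and an assumed generator interface that returns the refined image together with the reconstructed geometry; the loss weights are placeholders, and the paper's schedule of two generator updates per discriminator update is omitted for brevity.

```python
# A high-level sketch of one alternating G/D/P update (Fig. 5), assuming PyTorch
# and the loss sketches defined earlier in this section.
import torch

def train_step(G, D, P, opt_G, opt_D, opt_P, rough, geo, real,
               w_gan=1.0, w_str=1.0, w_rec=1.0, w_geo=1.0):
    """One alternating update of the three components; weights are placeholders."""
    seg, normal, depth = geo

    # --- update G (D and P fixed) ---------------------------------------
    refined, seg_r, normal_r, depth_r = G(rough, seg, normal, depth)
    rec = (torch.abs(seg_r - seg).mean() + torch.abs(normal_r - normal).mean()
           + torch.abs(depth_r - depth).mean())
    loss_G = (w_gan * lsgan_g_loss(D(refined))
              + w_str * pmse_loss(refined, rough)
              + w_rec * rec
              + w_geo * geo_loss(refined, seg, normal, depth, P))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # --- update D (G and P fixed) ---------------------------------------
    loss_D = lsgan_d_loss(D(real), D(refined.detach()))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # --- update P (G and D fixed), supervised by the rendered ground truth ---
    seg_p, normal_p, depth_p = P(refined.detach())
    loss_P = (torch.abs(seg_p - seg).mean() + torch.abs(normal_p - normal).mean()
              + torch.abs(depth_p - depth).mean())
    opt_P.zero_grad()
    loss_P.backward()
    opt_P.step()

    return loss_G.item(), loss_D.item(), loss_P.item()
```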

5.2 Implementation

We will dive into details of how to implement and train our model.

Dual Path Generator (G) Our generator has dual forward data paths (a color path and a geometry path), which together integrate color and geometry information. In the color path, the input rough image first passes through three convolutional layers, is downsampled by the two stride-2 convolutions, and then goes through 6 ResNet blocks [38]. The resulting feature maps are then upsampled back with bilinear upsampling; during upsampling, the color path concatenates feature maps from the geometry path.

The geometry inputs are first convolved into feature maps and concatenated together before being passed through the geometry path described below. After the last layer, the output is split into three parts to produce reconstruction images for the three kinds of geometric images.

Let $k$n$f$s$s$ReLU denote a $k{\times}k$ Convolution-InstanceNorm-ReLU layer with $f$ filters and stride $s$. R$k$ denotes a residual block containing two $3{\times}3$ convolutional layers, both with $k$ filters. up$k$ denotes a bilinear upsampling layer followed by a Convolution-InstanceNorm-ReLU layer with $k$ filters and stride 1.

The generator architecture is:

color path: 7n3s1ReLU-3n64s2ReLU-3n128s2ReLU-R256-R256-R256-R256
-R256-R256-up512-up256

geometry path: 7n3s1ReLU-3n64s2ReLU-3n128s2ReLU-R256-R256-R256
-R256-R256-R256-up256-up128
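A compact PyTorch sketch of such a dual-path generator is given below; the channel widths are smaller than in the spec above and the exact concatenation points are assumptions, so it should be read as an illustration of the two-path design rather than the paper's implementation.

```python
# A simplified dual-path generator sketch in PyTorch; channel widths and
# concatenation points are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, k, s):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=s, padding=k // 2),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True))

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(conv_block(c, c, 3, 1), conv_block(c, c, 3, 1))

    def forward(self, x):
        return x + self.body(x)

def up_block(c_in, c_out):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        conv_block(c_in, c_out, 3, 1))

class DualPathGenerator(nn.Module):
    """Color path refines the rough RGB image; the geometry path encodes
    segmentation/normal/depth and feeds the color decoder at each scale."""

    def __init__(self, geo_ch=5):  # 1 (seg) + 3 (normal) + 1 (depth), assumed
        super().__init__()
        self.c_enc = nn.Sequential(
            conv_block(3, 32, 7, 1), conv_block(32, 64, 3, 2),
            conv_block(64, 128, 3, 2), *[ResBlock(128) for _ in range(6)])
        self.g_enc = nn.Sequential(
            conv_block(geo_ch, 32, 7, 1), conv_block(32, 64, 3, 2),
            conv_block(64, 128, 3, 2), *[ResBlock(128) for _ in range(6)])
        self.g_up1, self.g_up2 = up_block(128, 64), up_block(64, 32)
        self.c_up1 = up_block(128 + 128, 64)   # concat geometry encoder features
        self.c_up2 = up_block(64 + 64, 32)     # concat geometry decoder features
        self.to_rgb = nn.Conv2d(32, 3, 7, padding=3)
        self.to_geo = nn.Conv2d(32, geo_ch, 7, padding=3)

    def forward(self, rough, seg, normal, depth):
        g = self.g_enc(torch.cat([seg, normal, depth], dim=1))  # 1/4 resolution
        c = self.c_enc(rough)                                   # 1/4 resolution
        g1 = self.g_up1(g)                                      # 1/2 resolution
        c = self.c_up1(torch.cat([c, g], dim=1))                # 1/4 -> 1/2
        c = self.c_up2(torch.cat([c, g1], dim=1))               # 1/2 -> full
        g2 = self.g_up2(g1)                                     # full resolution
        refined = torch.tanh(self.to_rgb(c))
        seg_r, normal_r, depth_r = torch.split(self.to_geo(g2), [1, 3, 1], dim=1)
        return refined, seg_r, normal_r, depth_r
```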

Markovian Discriminator (D) The discriminator is a typical PatchGAN or Markovian discriminator, as described in [39, 40, 15]. We also find a 70×70 receptive field to be a proper size, so the architecture is exactly that of [15].

Geometry Predictor (P) FCN-like networks [4] or U-Net [41] are good candidates for the geometry predictor; we choose a U-Net architecture. down$k$ denotes a Convolution-InstanceNorm-LeakyReLU layer with $k$ filters and stride 2, where the slope of the leaky ReLU is 0.2. up$k$ denotes a bilinear upsampling layer followed by a Convolution-InstanceNorm-ReLU layer with $k$ filters and stride 1; $k$ in up$k$ is twice as large as in the corresponding down$k$ because of the skip connection between corresponding layers. After the last layer, the feature maps are split into three parts and convolved into three output maps separately, each activated by a tanh function.

The predictor architecture is: down64-down128-down256-down512-down512-down512-up1024-up1024-up1024-up512-up256-up128
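Below is a reduced-depth sketch of a three-headed U-Net predictor in PyTorch; the real predictor has six down/up levels (see the spec above), and the head channel counts are assumptions.

```python
# A reduced-depth U-Net sketch of the geometry predictor P, assuming PyTorch.
import torch
import torch.nn as nn

def down(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True))

def up(c_in, c_out):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True))

class GeometryPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(3, 64), down(64, 128), down(128, 256)
        self.u3, self.u2, self.u1 = up(256, 128), up(128 + 128, 64), up(64 + 64, 32)
        # three heads: segmentation, surface normal, depth (channel counts assumed)
        self.seg_head = nn.Conv2d(32, 1, 3, padding=1)
        self.normal_head = nn.Conv2d(32, 3, 3, padding=1)
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, img):
        f1 = self.d1(img)                       # 1/2 resolution
        f2 = self.d2(f1)                        # 1/4 resolution
        f3 = self.d3(f2)                        # 1/8 resolution
        x = self.u3(f3)
        x = self.u2(torch.cat([x, f2], dim=1))  # skip connection
        x = self.u1(torch.cat([x, f1], dim=1))  # skip connection
        return (torch.tanh(self.seg_head(x)),
                torch.tanh(self.normal_head(x)),
                torch.tanh(self.depth_head(x)))
```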

Training Details The Adam optimizer [42] is used for all three "GDP" components, with a batch size of 1. G, D, and P are trained from scratch. We first train the geometry predictor for 5 epochs to get a good initialization and then begin the iterative procedure. During the iterative procedure, the learning rate is 0.0002 for the first 100 epochs and linearly decays to zero over the next 100 epochs. All training images are resized to the same resolution.

All models are trained with fixed loss weights $\lambda_{\mathrm{GAN}}$, $\lambda_{\mathrm{str}}$, $\lambda_{\mathrm{rec}}$, $\lambda_{\mathrm{geo}}$ in Eq. 10. The generator is trained twice before each discriminator update.
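The learning-rate schedule described above (constant for 100 epochs, then linear decay to zero over another 100) can be expressed with a LambdaLR scheduler, as in the sketch below; the Adam betas in the comment are a common GAN choice, not a value taken from the paper.

```python
# A sketch of the learning-rate schedule: constant for 100 epochs, then linear
# decay to zero over the next 100 epochs, assuming PyTorch.
import torch

def make_scheduler(optimizer, n_const=100, n_decay=100):
    def lr_lambda(epoch):
        if epoch < n_const:
            return 1.0
        return max(0.0, 1.0 - (epoch - n_const) / float(n_decay))
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Example (hypothetical generator G; betas are a common GAN choice, assumed):
# opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
# sched_G = make_scheduler(opt_G)
```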

6 Instance-60K Building Process

Since no existing instance segmentation dataset [5, 6, 43] can benchmark our task, we build a new dataset to evaluate our method.

Instance-60K is an ongoing effort to annotate instance segmentation for scenes that can be scanned. Currently it contains three representative scenes: a supermarket shelf, an office desk, and a tote. These three scenes were chosen because they can benefit real-world applications: the supermarket case suits self-service stores such as Amazon Go (https://www.amazon.com/b?node=16008589011), home robots will frequently face office desks, and the tote follows the setting of the Amazon Robotic Challenge.

Figure 6: Representative images and manual annotations in the Instance-60K dataset.

Note that our pipeline is not restricted to these three scenes; technically, any scene that can be scanned and simulated is suitable.

The shelf scene contains objects from 30 categories, including items such as soft drinks, biscuits, and tissues; the desk and tote scenes each contain 15 categories. All are common objects in the corresponding scenes. The objects and scenes are scanned to build the SOM dataset as described in Sec. 3.

For the Instance-60K dataset, these objects are placed in the corresponding scenes and videotaped with an iPhone 5s from various viewpoints. We arranged 10 layouts for the shelf and over 100 layouts each for the desk and tote. The videos are then sliced into 6000 images in total, 2000 per scene. The number of labeled instances is 60,894, which is why we call the dataset Instance-60K; it has on average 966 instances per category. This scale is about three times larger than the PASCAL VOC level [43] (346 instances per category), so it is qualified to benchmark this problem. Building the dataset again confirmed that instance segmentation annotation is laborious: it took more than 4,000 man-hours. Some representative real images and annotations are shown in Fig. 6; annotating them is clearly time-consuming.

7 Evaluation

In this section, we evaluate our generated instance segmentation samples quantitatively and qualitatively.

7.1 Evaluation on Instance-60K

Scene  Model         mAP@0.5  mAP@0.7
shelf  real          79.75    67.02
shelf  rough         18.10    10.53
shelf  fake (4.5k)   49.11    37.56
shelf  fake (15k)    66.31    47.25
desk   real          88.24    73.75
desk   rough         43.81    35.14
desk   fake (4.5k)   57.07    45.44
desk   fake (15k)    82.07    71.82
tote   real          90.06    85.10
tote   rough         28.67    16.87
tote   fake (4.5k)   61.40    50.13
tote   fake (15k)    82.69    76.84
Table 1: mAP results with Mask R-CNN for the real, rough, fake (4,500 refined images), and fake (15,000 refined images) models on the three scenes.
Figure 7: Refinement by the GAN. The refined column shows GeoGAN results and the rough column the rendered images. Apparent improvements in lighting and texture can be observed.

We evaluate the generated samples on the instance segmentation task. To show that the proposed pipeline works in general, we report results using Mask R-CNN [10].

We train a segmentation model on the images produced by our GeoGAN; the trained model is denoted the "fake-model". Likewise, the model trained on rough images is denoted the "rough-model". A natural question is how the "fake-model" compares with a model trained on real images. To answer it, we also train a segmentation model on the training set of the Instance-60K dataset, denoted the "real-model". It is pre-trained on the COCO dataset [6].

The training procedure on real images strictly follows [10]. We find that the learning rate used for real images does not work well for rough and GAN-generated images, so we lower the learning rate and let it decay earlier.

All models are trained with 4,500 images: although we can generate endless training samples for the "rough-model" and "fake-model", the "real-model" can only be trained on the 4,500 images of the Instance-60K training set. Finally, all models are evaluated on the Instance-60K test set.

Figure 8: Qualitative results of the rough, fake (4.5k), fake (15k), and real models, respectively.

Experimental results are shown in Tab. 1. The overall mAP of the rough images is low, and the "fake-model" significantly outperforms it. Nevertheless, a clear gap remains between the "fake-model" and the real one, although it has been narrowed considerably.

Naturally, we would like to know how many refined training images are needed to achieve results comparable with the "real-model". We therefore conducted experiments with 15,000 GAN-generated images and denote the resulting model "fake (15k)". As Tab. 1 shows, "fake (15k)" and "real" are quite close. Adding even more training samples brings only marginal improvement. In this sense, our synthetic "image + annotation" is comparable with "real image + human annotation" for instance segmentation.

The results of the real-model may imply that Instance-60K is not that difficult for Mask R-CNN; an extension of the dataset is ongoing. Nevertheless, the dataset is sufficient to demonstrate the ability of GeoGAN.

In contrast to exhaustive annotation, which takes over 1,000 human-hours per scene, our pipeline takes about 0.7 human-hours per scene. Admittedly, the results suffer some performance loss, but the whole task saves three orders of magnitude in human labor.

7.2 Comparison With Other Domain Adaptation Frameworks

Previous domain adaptation frameworks focus on different tasks, such as gaze and hand pose estimation [22] or object classification and pose estimation [23]. To the best of our knowledge, we are the first to propose a GAN-based framework for instance segmentation, so a direct comparison is not possible. We therefore reproduced the works of [22] and [23]; for [23], we substituted the task component with our predictor P. The experiments are conducted on the same scenes as above. Results are shown in Fig. 9 and Tab. 2.

Scene  Method         mAP@0.5  mAP@0.7
shelf  GeoGAN (ours)  66.31    47.25
shelf  [23]           31.46    20.88
shelf  [22]           56.16    36.04
desk   GeoGAN (ours)  82.07    71.82
desk   [23]           44.33    29.93
desk   [22]           69.54    57.27
tote   GeoGAN (ours)  82.69    76.84
tote   [23]           42.50    33.61
tote   [22]           70.73    62.68
Table 2: Quantitative comparison of our pipeline with [23] and [22], using Mask R-CNN.
Figure 9: Qualitative comparison of our pipeline with [23] and [22]. The backgrounds of the images generated by [23] are damaged because it uses a masked-PMSE loss.

7.3 Ablation Study

The ablation study is carried out by removing the geometry-guided loss and the structure loss separately. We apply Mask R-CNN to train segmentation models on images produced by GeoGAN without the geometry-guided loss ("w/o Geo loss") or without the structure loss ("w/o structure loss"). Removing either loss causes a significant performance drop. We also verify the necessity of the reasoning system: removing it leads to unrealistic images and a further performance loss. Results are shown in Tab. 3.

Scene  Model               mAP@0.5  mAP@0.7
shelf  GeoGAN (full)       66.31    47.25
shelf  w/o Geo loss        48.52    31.17
shelf  w/o structure loss  27.33    19.24
shelf  w/o reasoning       15.21    8.44
desk   GeoGAN (full)       82.07    71.82
desk   w/o Geo loss        63.99    55.23
desk   w/o structure loss  45.05    34.51
desk   w/o reasoning       18.36    9.71
tote   GeoGAN (full)       82.69    76.84
tote   w/o Geo loss        64.22    53.31
tote   w/o structure loss  46.44    35.62
tote   w/o reasoning       20.05    12.43
Table 3: mAP results of the ablation study with Mask R-CNN.
Figure 10: Samples illustrating the efficacy of the structure loss and the geometry-guided loss in GeoGAN, and of the reasoning system in our pipeline.

8 Limitations and Future Work

If the environmental background changes dynamically, we would have to scan a large number of backgrounds to cover the variation, which takes considerable effort. Due to the limitations of the physics engine, highly non-rigid objects such as towels are hard to handle. Another limitation is that our method does not model illumination effects during rendering, since doing so is far more complicated; GeoGAN partially addresses this by transferring the illumination conditions of real images, but the result is still imperfect. In addition, the size of our benchmark dataset is relatively small compared with COCO. Future work is needed to address these limitations.

References

  • [1] Dai, J., He, K., Sun, J.: Instance-aware semantic segmentation via multi-task network cascades. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (June 2016) 3150–3158

  • [2] Li, Y., Qi, H., Dai, J., Ji, X., Wei, Y.: Fully convolutional instance-aware semantic segmentation. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2017)
  • [3] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems (NIPS). (2015)
  • [4] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Computer Vision and Pattern Recognition. (2015) 3431–3440
  • [5] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The Cityscapes dataset for semantic urban scene understanding. In: Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016)
  • [6] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European Conference on Computer Vision. (2014) 740–755
  • [7] Zhu, Y., Mottaghi, R., Kolve, E., Lim, J.J., Gupta, A., Fei-Fei, L., Farhadi, A.: Target-driven visual navigation in indoor scenes using deep reinforcement learning. (2016)
  • [8] Tzeng, E., Devin, C., Hoffman, J., Finn, C., Peng, X., Levine, S., Saenko, K., Darrell, T.: Towards adapting deep visuomotor representations from simulated to real environments. Computer Science (2015)
  • [9] Rusu, A.A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., Hadsell, R.: Sim-to-real robot learning from pixels with progressive nets. (2016)
  • [10] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. arXiv preprint arXiv:1703.06870 (2017)
  • [11] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. (2014) 2672–2680
  • [12] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
  • [13] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (ICCV). (2017)
  • [14] Wu, J., Zhang, C., Xue, T., Freeman, B., Tenenbaum, J.: Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In: Advances in Neural Information Processing Systems. (2016) 82–90
  • [15] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (July 2017)
  • [16] Chen, Q., Koltun, V.: Photographic image synthesis with cascaded refinement networks. In: IEEE International Conference on Computer Vision (ICCV). (2017)
  • [17] Taigman, Y., Polyak, A., Wolf, L.: Unsupervised cross-domain image generation. (2016)
  • [18] Yi, Z., Zhang, H., Gong, P.T., et al.: Dualgan: Unsupervised dual learning for image-to-image translation. In: IEEE International Conference on Computer Vision (ICCV). (2017)
  • [19] Kim, T., Cha, M., Kim, H., Lee, J., Kim, J.: Learning to discover cross-domain relations with generative adversarial networks. In: IEEE International Conference on Computer Vision (ICCV). (2017)
  • [20] Benaim, S., Wolf, L.: One-sided unsupervised domain mapping. In: Advances in neural information processing systems. (2017)
  • [21] Sixt, L., Wild, B., Landgraf, T.: Rendergan: Generating realistic labeled data. arXiv preprint arXiv:1611.01331 (2016)
  • [22] Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R.: Learning from simulated and unsupervised images through adversarial training. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2017)
  • [23] Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., Krishnan, D.: Unsupervised pixel-level domain adaptation with generative adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (July 2017)
  • [24] Su, H., Qi, C.R., Li, Y., Guibas, L.J.: Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In: The IEEE International Conference on Computer Vision (ICCV). (December 2015)
  • [25] Georgakis, G., Mousavian, A., Berg, A.C., Kosecka, J.: Synthesizing training data for object detection in indoor scenes. arXiv preprint arXiv:1702.07836 (2017)
  • [26] Ros, G., Sellart, L., Materzynska, J., Vazquez, D., Lopez, A.: The SYNTHIA Dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016)
  • [27] Alhaija, H.A., Mustikovela, S.K., Mescheder, L., Geiger, A., Rother, C.: Augmented reality meets deep learning for car instance segmentation in urban scenes. In: Proceedings of the British Machine Vision Conference. Volume 3. (2017)
  • [28] Handa, A., Pătrăucean, V., Stent, S., Cipolla, R.: Scenenet: An annotated model generator for indoor scene understanding. In: IEEE International Conference on Robotics and Automation. (2016) 5737–5743
  • [29] Mccormac, J., Handa, A., Leutenegger, S., Davison, A.J.: Scenenet rgb-d: 5m photorealistic images of synthetic indoor trajectories with ground truth. (2017)
  • [30] Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. (2016)
  • [31] Fisher, M., Ritchie, D., Savva, M., Funkhouser, T., Hanrahan, P.: Example-based synthesis of 3d object arrangements. Acm Transactions on Graphics 31(6) (2012) 135
  • [32] Merrell, P., Schkufza, E., Li, Z., Agrawala, M., Koltun, V.: Interactive furniture layout using interior design guidelines. In: ACM SIGGRAPH. (2011)  87
  • [33] Fuhrmann, S., Langguth, F., Goesele, M.: Mve-a multi-view reconstruction environment. In: GCH. (2014) 11–18
  • [34] Tzionas, D., Gall, J.: 3d object reconstruction from hand-object interactions. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 729–737
  • [35] Tasora, A., Serban, R., Mazhar, H., Pazouki, A., Melanz, D., Fleischmann, J., Taylor, M., Sugiyama, H., Negrut, D.: Chrono: An open source multi-physics dynamics engine. Springer (2016) 19–49
  • [36] Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., Smolley, S.P.: Least squares generative adversarial networks. (2016)
  • [37] Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: International Conference on Neural Information Processing Systems. (2014) 2366–2374
  • [38] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition. (2016) 770–778
  • [39] Li, C., Wand, M.: Precomputed real-time texture synthesis with markovian generative adversarial networks. In: European Conference on Computer Vision. (2016) 702–716
  • [40] Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z.: Photo-realistic single image super-resolution using a generative adversarial network. (2016)
  • [41] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). LNCS 9351 (2015) 234–241
  • [42] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Computer Science (2014)
  • [43] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html
  • [44] Greene, M.R.: Estimations of object frequency are frequently overestimated. Cognition 149 (2016) 6–10

9 Appendix

9.1 Extended Ablation Study

We add experiments to investigate how each type of geometry information in the geometry path affects the results. As shown in Tab. 4, a model name with "w/o" means the corresponding geometric information is removed, while a name without "w/o" means only that geometric information is used in the geometry path. As we can see, the geometry path contributes to the final results, but not as much as the geometry-guided loss.

Model (shelf scene)     mAP@0.5  mAP@0.7
GeoGAN (full)           66.31    47.25
geometry-path variant   43.68    27.40
geometry-path variant   57.80    39.52
geometry-path variant   55.45    36.18
geometry-path variant   55.23    37.89
geometry-path variant   61.12    42.09
geometry-path variant   62.87    46.26
geometry-path variant   58.05    41.77
Table 4: Extended ablation experiments on the geometric information in the geometry path (shelf scene, Mask R-CNN).

9.2 Knowledge Acquisition With Many Annotators Or One

In our experiments, the knowledge (pose, location, and relationship priors) annotated on objects can be acquired with ease: one or two people are more than enough to handle the workload. Nonetheless, annotation in this way may seem subjective at first glance. What if one annotator thinks object A should stand upright in scene B, while another thinks otherwise? We admit that such cognitive bias exists, as pointed out by [44]. However, to our surprise, experiments show that this bias does not have a significant influence on the domain adaptation results (see Fig. 11) or the segmentation results (see Tab. 5).

Figure 11: Sample rough images generated from layouts synthesized from different annotators' priors, together with the associated fake images. Annotator IDs: (a) 1, (b) 3, (c) 7, (d) 8, (e) 11, (f) 18.
Annotator  Shelf (mAP@0.5 / 0.7)  Desk (mAP@0.5 / 0.7)  Tote (mAP@0.5 / 0.7)
1          66.31 / 47.25          82.07 / 71.82         82.69 / 76.84
2          65.62 / 41.48          81.52 / 72.06         82.14 / 75.94
3          62.18 / 51.61          81.91 / 72.39         79.02 / 64.49
4          66.22 / 57.03          81.08 / 74.09         81.78 / 70.37
5          63.27 / 52.53          81.54 / 72.45         82.89 / 73.27
6          66.05 / 46.90          80.17 / 73.30         78.12 / 63.76
7          65.33 / 50.15          80.08 / 68.18         81.94 / 70.27
8          66.37 / 53.67          79.69 / 66.24         82.74 / 64.20
9          62.51 / 52.48          77.32 / 68.47         82.35 / 70.71
10         64.03 / 57.93          78.77 / 70.42         77.89 / 69.11
11         66.14 / 52.13          81.02 / 74.66         80.36 / 68.57
12         68.45 / 49.90          82.62 / 73.46         81.80 / 68.15
13         66.23 / 56.17          78.42 / 65.43         79.42 / 68.80
14         64.16 / 51.81          80.10 / 66.98         79.65 / 65.01
15         66.39 / 49.60          81.40 / 67.20         82.51 / 71.63
16         66.21 / 54.94          79.91 / 71.88         81.83 / 72.17
17         65.46 / 51.92          81.45 / 71.64         82.19 / 69.23
18         61.83 / 52.75          80.59 / 68.95         77.71 / 63.43
19         69.15 / 51.08          79.25 / 65.11         81.74 / 67.75
20         66.78 / 57.10          76.81 / 68.52         82.27 / 69.93
Table 5: Results of instance segmentation (Mask R-CNN) where the training samples are generated from the priors of 20 different annotators.

To ensure diversity, we recruited 20 people of different ages, genders, and nationalities to annotate the objects independently, and then generated the scene layouts from each person's priors.

We assume the reason why different priors lead to insignificant differences is that, although different people have different preferences for how an object should be placed in a given scene, they largely agree on which poses, locations, and relationships are possible. There are extreme cases where one person thinks a certain placement never happens while another disagrees (e.g., whether a drink bottle can be placed upside down), but in general an agreement is reached unconsciously. Thus, the distributions of layouts generated under diverse preferences do not differ much. Another reason may be that current CNNs have sufficient capacity to cope with these minor differences in pose, location, and relationship, and the data augmentation techniques applied by default further reduce their impact.

9.3 GAN Refined Results

More refined results are shown in Figure 12, and Figure 13 shows the GeoGAN structure in more visual detail.

Figure 12: More GAN-refined results. They are naturally paired with pixel-accurate segmentations.
Figure 13: Visualization of GeoGAN architecture.