Learning to simulate complex scenes

06/25/2020 · Zhenfeng Xue et al. · Zhejiang University, Australian National University

Data simulation engines like Unity are becoming an increasingly important data source that allows us to acquire ground-truth labels conveniently. Moreover, we can flexibly edit the content of an image in the engine, such as objects (position, orientation) and environments (illumination, occlusion). When using simulated data as training sets, this editable content can be leveraged to mimic the distribution of real-world data, and thus reduce the content difference between the synthetic and real domains. This paper explores content adaptation in the context of semantic segmentation, where complex street scenes are fully synthesized using 19 classes of virtual objects from a first-person driver perspective and controlled by 23 attributes. To optimize the attribute values and obtain a training set of similar content to real-world data, we propose a scalable discretization-and-relaxation (SDR) approach. Under a reinforcement learning framework, we formulate attribute optimization as a random-to-optimized mapping problem using a neural network. Our method has three characteristics. 1) Instead of editing attributes of individual objects, we focus on global attributes that have a large influence on the scene structure, such as object density and illumination. 2) Attributes are quantized to discrete values, so as to reduce the search space and training complexity. 3) Correlated attributes are jointly optimized in a group, so as to avoid meaningless scene structures and find better convergence points. Experiments show our system can generate reasonable and useful scenes, from which we obtain promising real-world segmentation accuracy compared with existing synthetic training sets.


1. Introduction

Collecting and annotating large-scale datasets (Deng et al., 2009; Geiger et al., 2013a; Zheng et al., 2015) consumes much time and manpower. This is especially true for semantic segmentation, where high-quality annotation is reported to require 60 to 90 minutes per image (Brostow et al., 2009; Cordts et al., 2016). For this problem, data synthesis through graphic engines (Richter et al., 2016; Gaidon et al., 2016; Ros et al., 2016) has recently become a promising solution due to the convenience of acquiring ground truths at a large scale. This strategy also enables us to simulate corner cases that are not well covered by mainstream datasets. Besides, model testing in virtual environments (Dosovitskiy et al., 2017; Kolve et al., 2017; Wu et al., 2018) is safe and economical. The challenge lies in the domain gap between synthetic and real-world data, which leads to a performance drop.

Synthetic-to-real domain adaptation is a popular way to narrow the domain gap (Zou et al., 2018; Tsai et al., 2018; Li et al., 2018; Hoffman et al., 2017; Hoffman et al., 2016). These methods attempt to solve the problem from two aspects, i.e., the appearance level and the feature level. For the former, stylized synthetic images are generated to resemble those captured in the real world (Hoffman et al., 2017; Li et al., 2018; Chen et al., 2019) using GAN-based methods (Goodfellow et al., 2014; Zhu et al., 2017; Isola et al., 2017; Huang et al., 2018). For the latter, the feature distributions of the two domains are aligned (Hoffman et al., 2016; Tsai et al., 2018; Luo et al., 2019).

Several recent works reveal an under-explored but important cause of the domain gap: the content difference (Prakash et al., 2019; Ruiz et al., 2018; Kar et al., 2019; Yao et al., 2019). Here, content may refer to building density, vehicle occlusion, illumination, etc. It differs from the style gap, can hardly be addressed by most style- or feature-level adaptation methods, and previously had to be handled manually within the virtual environment (Geiger et al., 2013a). To reduce manual effort, an emerging feasible solution is learning-based simulation (Ruiz et al., 2018; Kar et al., 2019; Yao et al., 2019). To align the content between the two domains, this strategy updates attribute values based on supervision signals such as distribution difference (Gretton et al., 2012; Heusel et al., 2017; Yao et al., 2019; Kar et al., 2019) or task loss (Kar et al., 2019; Ruiz et al., 2018). This optimization process differs significantly from traditional gradient-based ones, mainly because the system is non-differentiable. In fact, the rendering function of the graphic engine is not known and thus not differentiable. Moreover, when computing the task loss, the task model must be trained until convergence, and this process is not differentiable either. The high system complexity poses critical challenges if we aim to generate complex environments such as street scenes.

Figure 1. An overview of the proposed framework. During simulation learning, the attribute values are randomly initialized and fed into the policy network. The output of policy network is updated attribute values that are sent into the simulator (Unity) to render synthetic data. The synthetic data is used to train a segmentation model, and we use the model accuracy (mIoU) on real-world test set as reward to update the policy network. During inference, randomly initialized attributes are fed into the learned policy network, and the output attribute values are used to generate the optimized synthetic dataset.

We are thus interested in synthesizing large and complex scenes using the graphic engine and tackling the high computation cost with a relatively scalable approach. Existing approaches in this domain encounter difficulties when synthesizing large scenes, as compared in Table 1. First, many methods optimize instance-level attributes, such as the position and scale of each object (Dosovitskiy et al., 2017; Prakash et al., 2019; Tobin et al., 2017). When a complex scene contains many objects, this practice leads to a huge search space at the stage of scene structure optimization. Yao et al. (Yao et al., 2019) do not use instance-level attributes, but their method is designed for simple bounding boxes with a vehicle in the center, and it encounters efficiency problems in large scenes (Fig. 5). Second, in (Kar et al., 2019; Ruiz et al., 2018), the search space for every attribute is continuous, which requires the REINFORCE algorithm to sample over a large value range. As the number of attributes increases, the search space becomes extremely large and the training complexity grows heavily. Third, attributes are usually optimized sequentially (Kar et al., 2019) or independently (Yao et al., 2019). These methods do not comprehensively consider the correlation among multiple attributes, and may cause object collisions in complex scenes (Fig. 6).

Method Attribute type Search space Attribute correlation Scene scale
(Kar et al., 2019) instance continuous partial medium
(Ruiz et al., 2018) instance continuous partial medium
(Yao et al., 2019) global discrete none simple
Ours global discrete yes complex
Table 1. Differences with existing methods.

This paper proposes a scalable discretization-and-relaxation (SDR) data synthesis approach tailored for complex street scenes, so that a semantic segmentation model can be trained. An overview of the proposed framework is shown in Fig. 1. In a nutshell, our system uses a policy network to take proper actions to sample the optimal values of engine attributes, and the segmentation accuracy on real-world test sets serves as the system reward. This type of pipeline has also been adopted by existing works (Kar et al., 2019; Ruiz et al., 2018; Yao et al., 2019). What distinguishes our system is its scalability: the optimization procedure is both efficient and effective. Specifically, our method addresses the three problems mentioned above. First, instead of using instance-level attributes, we build our system to accommodate global attributes, such as building density and lighting intensity. Our intuition is that global attributes have a large influence on the scene structure; moreover, there are far fewer global attributes than instance attributes, so we face a much smaller search space. Second, to reduce the search space caused by continuous attribute value sampling, SDR quantizes the attribute values into discrete values. To remedy the loss of randomness in attribute values, we add a relaxation step, i.e., we manually inject variance on top of the discrete values. Finally, the discretization process allows us to jointly optimize a group of attributes while maintaining a relatively low computational complexity. The joint optimization considers the correlation among attributes and can yield reasonable scene structures.

We perform the proposed optimization method on a new data synthesis platform named SceneX. It contains 19 classes of objects compatible with the mainstream semantic segmentation datasets, and to ensure diversity most classes have a rich range of 3D models. From SceneX, the proposed SDR method can generate a high-quality database in which pixel-level annotations are acquired accurately and automatically. Compared with manually designed synthetic datasets such as GTA5 and SYNTHIA, we show that the optimized SceneX data yields very promising segmentation accuracy on real-world test data.

2. Related Work

GAN based data generation. GAN based data generation methods (Hoffman et al., 2017; Isola et al., 2017; Zhang et al., 2018) focus on adjusting the style of synthetic images to approximate real-world images. For this kind of method, image (pixel)-level domain adaptation (Zhu et al., 2017) is a commonly used approach and has been proven effective in several works such as Pix2Pix (Isola et al., 2017), MUNIT (Huang et al., 2018), WCT (Li et al., 2017) and SPGAN (Deng et al., 2018). To generate data whose appearance is similar to the target data, Zhang et al. (Zhang et al., 2018) use an appearance adaptation network to gradually generate a stylized image from a noise image by adapting its appearance to the target data. Chen et al. (Chen et al., 2019) further propose an input-level adaptation network that leverages depth information to reconstruct the source image. It employs an adversarial learning (Goodfellow et al., 2014) framework to ensure style consistency between the source and target domains.

Graphic engine based data generation. Graphic engine based methods (Richter et al., 2016; Gaidon et al., 2016; Geiger et al., 2013b) use simulated 3D models, such as persons (Barbosa et al., 2018), objects (Pepik et al., 2012) and scenes (Satkin et al., 2012; Geiger et al., 2013b), together with varying virtual environments, to render synthetic data. On one hand, some works set the data generation conditions manually or randomly (Tremblay et al., 2018; Prakash et al., 2019). For example, Hattori et al. (Hattori et al., 2015) generate data by manually tuning the scene to match a specific real scene, which helps detect pedestrians in real data. On the other hand, several recent studies propose learning-based simulation methods (Kar et al., 2019; Ruiz et al., 2018; Heusel et al., 2017; Prakash et al., 2019; Gretton et al., 2012). For example, Ruiz et al. (Ruiz et al., 2018) propose a reinforcement learning based method that adjusts the parameters of synthesized data to mimic KITTI (Geiger et al., 2012), and obtain significantly better performance with learned parameters than with random ones. Yao et al. (Yao et al., 2019) propose to use the FID metric (Heusel et al., 2017) with attribute descent to optimize the attributes of synthetic data for vehicle re-ID, and obtain better recognition results with the optimized data than with randomly set attributes.

Learning from synthetic data. Thanks to the ease of annotating synthetic data, several datasets have been created to support related research. For example, Richter et al. (Richter et al., 2016) create a pixel-level annotated dataset of 24,966 images for semantic segmentation by playing the game Grand Theft Auto V. Gaidon et al. (Gaidon et al., 2016) build a synthetic video dataset, Virtual KITTI, that mimics KITTI (Geiger et al., 2013b) to provide a larger benchmark for the vision community. Bak et al. (Bak et al., 2018) introduce a synthetic dataset, SyRI, containing 100 characters under rich lighting conditions to learn person re-identification models that are robust to illumination variations. On the other hand, some works exploit the controllability of synthetic data to investigate problems under specific conditions (Dosovitskiy et al., 2017; Sun and Zheng, 2019; Sakaridis et al., 2018). For example, Sun et al. (Sun and Zheng, 2019) discuss the influence of person viewpoint changes on person re-ID systems, and Sakaridis et al. (Sakaridis et al., 2018) build a Foggy Cityscapes dataset to learn semantic segmentation and object detection models with improved performance on challenging real foggy scenes.

3. SceneX: Complex Scene Generator

Existing simulators (Dosovitskiy et al., 2017; Prakash et al., 2019; Tobin et al., 2017) are not well suited to constructing complex scenes, for two reasons. First, the number of classes in these simulators is limited, so it is not feasible to perform segmentation on a rich range of real-world objects. Second, they define attributes at the instance level, e.g., the position and orientation of each object. Under this setting, we can only edit individual objects; given a large number of objects, the editing space is huge and intractable. In this section, we first describe the 3D assets in SceneX, which correspond to a standard set of 19 classes. We then introduce how SceneX is rendered in Unity, which allows us to optimize global attributes instead of the instance-level attributes used in previous works.

3.1. 3D Scene Classes and Assets

SceneX contains 19 classes of assets. Its classes are the same as those of Cityscapes (Cordts et al., 2016), e.g., car, pedestrian, and building. It is designed to generate street scenes from a first-person driver perspective. Specifically, SceneX contains 200 pedestrian models, 195 cars, 28 buses and 39 trucks from existing model repositories (Sun and Zheng, 2019; Yao et al., 2019), with necessary modifications so that they are compatible with our engine. Besides, we collect 106 buildings, 18 bicycles, and 19 trees, among others. There are also 14 sky box models to simulate different weather conditions. With these models, SceneX can generate complex scenes with a rich range of objects.

Figure 2. Example global attributes. (i) Illumination intensity changes the brightness of virtual environment, and (ii) distance between buildings and road changes the position of a group of buildings and (iii) interval of buildings changes the density of buildings.

3.2. Engine Design and Global Attributes

We aim to simulate scenes that contain many objects. Existing data simulation works are generally not designed to handle this setting well. For example, Kar et al. optimize the attributes of each object (Kar et al., 2019), which leads to a prohibitively large search space if the scene contains many objects. To accomplish our goal, we propose a different strategy for attribute manipulation. Details of the engine design and the global attributes are provided below.

Engine design. Our data synthesis engine features a “line-based” design. At the center of a scene is a road map, around which 19 types of objects are placed. In the line-based design, objects of the same type (e.g., bicycles) are placed along a line parallel to the road, so these objects all have the same distance to the road. Because objects on a line are tied together, moving one object moves all objects on that line by the same amount. This placement strategy not only allows us to easily adjust the distance between objects and the road, but also enables precise object density changes by modifying the interval between objects on the same line. Therefore, for a single type of object (e.g., persons), its distribution within a scene is determined by its distance to the road and its density.
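The line-based placement can be sketched in a few lines of code. This is an illustrative sketch, not the actual SceneX/Unity API: `place_on_line`, its arguments, and the 2D coordinate convention are all assumptions made for exposition.

```python
# Illustrative sketch (not the SceneX API) of "line-based" object placement.
# All instances of one class share a line parallel to the road, so the whole
# class is controlled by just two global attributes: distance and interval.

def place_on_line(n_objects, distance_to_road, interval, road_axis_origin=0.0):
    """Return (along_road, offset) positions for one object class.

    distance_to_road: offset of the line from the road (one value per class).
    interval: spacing between consecutive objects (controls density).
    """
    return [(road_axis_origin + i * interval, distance_to_road)
            for i in range(n_objects)]

# Moving a whole class, or changing its density, is a single-attribute edit:
buildings = place_on_line(n_objects=5, distance_to_road=12.0, interval=8.0)
```

Changing `distance_to_road` shifts every building at once; changing `interval` changes the class density, which is exactly why so few global attributes suffice.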

Global attributes. We use 23 global attributes to control the scene structure, including 8 for the environment, 7 for object position and 8 for object density. We select these global attributes because they have a large influence on the overall scene properties. Examples of the global attributes are shown in Fig. 2. Among them, illumination intensity changes the brightness of the virtual environment, which affects the visibility of objects. The distance between buildings and the road changes the position of a group of buildings, and the building interval changes the density of buildings along the line. Fig. 2 shows that by editing the values of global attributes (only a few parameters), the scene structure and appearance can be changed significantly. The advantages of using global attributes are discussed in Section 4.4.

As a controllable system, the attributes of SceneX are editable. Like (Yao et al., 2019), we build a Python API using Unity ML-Agents plugin (Juliani et al., 2018). It allows us to modify the attributes directly through Python programming without needing expert knowledge about Unity. We refer readers to the appendix for more details of our engine.

4. Proposed Method

4.1. Problem Formulation

Suppose we have a target dataset that is divided into two parts, i.e., a validation set $D_{val}$ and a test set $D_{test}$, each consisting of a set of images and their segmentation labels. Our objective is to train a policy network $\pi_\theta$, parameterized by $\theta$, which takes a set of randomly sampled attributes $A$ as input and outputs a set of updated attributes $A'$. Feeding $A'$ into SceneX, the engine renders a synthetic dataset $D_{syn} = \{X_{syn}, Y_{syn}\}$, where $X_{syn}$ contains the images and $Y_{syn}$ the corresponding pixel-wise labels, automatically acquired through the rendering engine buffer. After training a segmentation network $f_\psi$ (parameterized by $\psi$) on $D_{syn}$ till convergence, we compute the accuracy on $D_{val}$. The accuracy score is used to update the policy network, enforcing it to increase the accuracy on $D_{val}$, and thus on $D_{test}$. In short, the training process poses a bi-level optimization problem, i.e.,

$$\max_\theta \; Acc\big(f_{\psi^*}, D_{val}\big), \quad (1a)$$
$$\text{s.t.} \;\; \psi^* = \arg\min_\psi \; \mathcal{L}\big(f_\psi, D_{syn}(\pi_\theta(A))\big), \quad (1b)$$

where $\mathcal{L}$ denotes the segmentation training loss.

For two reasons, solving this problem with a gradient-based approach is not feasible. First, the mathematical rendering function of Unity is unknown. Second, the accuracy score is computed upon a training-till-convergence process. That is, we have to train the segmentation model till convergence before obtaining the segmentation accuracy as supervision signal.

4.2. Scalable Discretization-and-Relaxation

In order to optimize the attributes of SceneX, we propose a scalable discretization-and-relaxation (SDR) optimization method. It follows a reinforcement learning framework, and employs a neural network to map random attribute values to updated ones.

The reinforcement learning framework. Similar to (Kar et al., 2019; Ruiz et al., 2018), our overall architecture adopts the REINFORCE algorithm (Williams, 1992) to handle the non-differentiability of the system. The REINFORCE algorithm optimizes the problem through a sampling process, maximizing the following expected reward,

$$J(\theta) = \mathbb{E}_{A' \sim \pi_\theta}\big[R(A')\big], \quad (2)$$

with respect to $\theta$. Here $R(A')$ is the accuracy score computed on the validation set $D_{val}$, and $A'$ are updated attributes sampled from the policy $\pi_\theta$. An unbiased, empirical estimate of the gradient for updating the policy is,

$$\nabla_\theta J(\theta) \approx \frac{1}{K} \sum_{k=1}^{K} \big(R(A'_k) - b\big)\, \nabla_\theta \log \pi_\theta(A'_k), \quad (3)$$

where $b$ is a baseline, usually chosen as an exponential moving average over previous rewards, and $K$ is the number of different datasets sampled under one policy.
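A minimal sketch of how this gradient estimate could be computed, assuming the per-sample gradients of the log-probabilities are already available; `reinforce_gradient` and `update_baseline` are illustrative names, not from the paper.

```python
import numpy as np

# Sketch of the REINFORCE estimate: average of (reward - baseline) times the
# gradient of log pi_theta(A'_k), over K sampled attribute sets.
# grad_log_probs[k] stands for the (pre-computed) gradient of
# log pi_theta(A'_k) with respect to the policy parameters.

def reinforce_gradient(rewards, grad_log_probs, baseline):
    rewards = np.asarray(rewards, dtype=float)
    grads = np.asarray(grad_log_probs, dtype=float)
    K = len(rewards)
    # Broadcast the scaled advantage over each per-sample gradient vector.
    return ((rewards - baseline)[:, None] * grads).sum(axis=0) / K

def update_baseline(baseline, new_reward, momentum=0.9):
    # Exponential moving average over previous rewards, as described above.
    return momentum * baseline + (1 - momentum) * new_reward
```

In practice the reward here is the mIoU on the validation set, obtained only after the segmentation model has been trained to convergence, which is why the estimate is sample-based rather than gradient-through-the-renderer.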

Figure 3. The proposed SDR method. Random attributes are fed into the policy network, outputting discrete values. Then we manually inject variance on top of these values. Final output values are used to update the policy network.

Discretization-and-relaxation with MLP. We view attribute optimization as a distribution mapping problem that maps random attribute values to optimized ones. To this end, we employ a multi-layer perceptron (MLP) to build a mapping function between random and updated attribute values. The MLP optimizes the attribute values through a discretization process. Specifically, suppose the input of the MLP is an $M$-dimensional vector representing random attribute values, where $M$ denotes the number of attributes. The corresponding output is an $M \times D$-dimensional vector with a softmax function applied to the second dimension, where $D$ denotes the number of discrete values. In this form, each attribute is quantized into $D$ discrete values, with a probability distribution over the $D$ numbers. Each value is then sampled from the $D$ numbers during training, or determined by the maximum probability during testing. The sampled outputs are regarded as the updated attribute values, with a dimension of $M$, and their probabilities are known, so the policy can be updated. To remedy the loss of attribute diversity caused by discretization, a relaxation step is added afterwards: we manually inject variance on top of the updated attributes. An overview of the proposed SDR method is illustrated in Fig. 3.

Method scalability. Our method is scalable in three aspects. First, the number of attributes is scalable: by changing the input dimension $M$, we can change how many attributes are optimized jointly each time. Second, the number of discrete values is scalable: by changing the output dimension $D$, we can change the refinement degree of the output values for each attribute. Third, the discrete number can differ across attributes: by assigning different output layers to different attributes, we can give each attribute its own refinement degree.

4.3. SDR in Groups

As the number of attributes increases, the performance of the REINFORCE algorithm drops rapidly due to the growing sampling space. To tackle this, we propose to apply SDR in groups. That is, we split the attributes into several groups and optimize them with SDR in the form of coordinate descent (Wright, 2015).

Specifically, the input of the MLP is an $M$-dimensional vector $A$. The corresponding output is an $M \times D$-dimensional vector $O$, representing the sampling space of the updated attributes. To use SDR in groups, $A$ is split into $n$ parts, i.e., $A = [A_1, \dots, A_n]$, where $A_i$ is an $m_i$-dimensional vector and $\sum_{i=1}^{n} m_i = M$. The corresponding output vector is $O = [O_1, \dots, O_n]$, where $O_i$ is an $m_i \times D$-dimensional vector. Sampling from $O_i$ yields $A'_i$ with dimension $m_i$. After concatenating $A'_1, \dots, A'_n$, we obtain the updated attributes $A'$. The policy model is also split into $n$ sub-models during this process, resulting in $\pi_\theta = \{\pi_{\theta_1}, \dots, \pi_{\theta_n}\}$. Each sub-model is optimized under the following equation:

$$\nabla_{\theta_i} J(\theta_i) \approx \frac{1}{K} \sum_{k=1}^{K} \big(R(A'_k) - b\big)\, \nabla_{\theta_i} \log \pi_{\theta_i}(A'_{i,k}). \quad (4)$$

Figure 4. SDR in groups. We split the input, output and the policy network into several groups, and optimize each group by SDR using coordinate descent.
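The group-wise coordinate descent loop can be sketched as below, with `optimize_group` standing in for a full SDR/REINFORCE optimization of one sub-policy while the other groups are held fixed; all names are illustrative.

```python
# Sketch of SDR in groups: attributes are split into n groups and each
# group is optimized in turn (coordinate descent), holding the other
# groups at their current values. `optimize_group(group, frozen)` stands
# in for a full SDR run on one sub-policy.

def sdr_in_groups(groups, optimize_group, n_rounds=1):
    """groups: list of attribute-value lists; returns the updated groups."""
    for _ in range(n_rounds):
        for i in range(len(groups)):
            frozen = [g for j, g in enumerate(groups) if j != i]
            groups[i] = optimize_group(groups[i], frozen)
    return groups
```

Because each inner call only searches one group's (much smaller) sampling space, the overall search stays tractable while correlated attributes within a group are still optimized jointly.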

4.4. Discussion

Difference with attribute descent (Yao et al., 2019). Attribute descent can be viewed as a special case of the proposed SDR: if we replace the policy model in SDR with brute-force search and set the number of groups to $M$ (the total number of attributes), our method reduces to attribute descent. Attribute descent optimizes each attribute independently, while our method jointly optimizes the attributes within a group, thus considering their dependencies. Besides, the complexity of attribute descent grows with the number of attributes $M$, so it is time-consuming when $M$ is large. In comparison, the complexity of SDR grows with the group number $n$, which is much smaller.

Difference with learning to simulate (Ruiz et al., 2018) and meta-sim (Kar et al., 2019). The difference with these two methods lies in the policy. In (Ruiz et al., 2018; Kar et al., 2019), a Gaussian model is used as the policy, where the parameters to be learned are the mean and variance. This strategy requires the policy to sample over a large value range, so it is very sensitive to the initial value. Moreover, using a Gaussian model means that only continuous attributes can be optimized. As a result, these two methods perform sampling in a huge search space. Our method also departs from these two in that we use global attributes rather than instance-level ones. The advantages of using global attributes are discussed below.

Why global attributes? The advantages of using global attributes over instance-level attributes are two-fold. First, global attributes more directly represent the characteristics of a scene. For example, by directly manipulating the density of pedestrians and cars, urban and rural areas can be better characterized; by decreasing the distance between buildings and the road, we can directly mimic the situation in a modern city. In comparison, manipulating the location of individual cars and persons has a much less direct impact on the overall structure and significantly increases the computational burden.

Second, the search space of global attributes is much smaller. Suppose all the objects are placed on a two-dimensional map. The search space for an individual object is $X \times Y$, where $X$ and $Y$ represent the search range (in pixels) along the two axes, respectively. Suppose a scene has $C$ classes of objects, with $N$ objects for each class. The search space of the scene is $(XY)^{CN}$ if scene structure optimization is performed at the instance level. In comparison, for global-level attribute optimization the search space is significantly reduced to $(YS)^{C}$, since we place the same type of objects on a line, where $S$ represents the range of object density and is smaller than $X$. A reduced search space allows our method to operate efficiently and converge to a superior state.
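To make the gap concrete, here is a back-of-envelope sketch with purely illustrative (assumed) numbers: N objects in each of C classes on a P×Q placement grid at the instance level, versus one line offset (Q choices) and one density value (S choices) per class at the global level.

```python
import math

# Back-of-envelope comparison of the two search-space sizes, with purely
# illustrative numbers (all assumed): a P x Q placement grid, C object
# classes, N objects per class, and S possible density values per line.
P, Q, C, N, S = 100, 100, 19, 20, 10

# Instance level: every object is placed independently anywhere on the grid.
instance_level = (P * Q) ** (C * N)

# Global level: per class, one line offset (Q choices) and one density (S).
global_level = (Q * S) ** C

# Compare orders of magnitude rather than the raw (astronomical) numbers.
instance_digits = math.log10(instance_level)
global_digits = math.log10(global_level)
```

With these toy numbers the instance-level space has on the order of 1520 decimal digits versus 57 for the global-level space, which is the scale of reduction the argument above relies on.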

5. Experiment

In this section, we first compare SDR with other attribute optimization methods. Then, we compare the effectiveness as a training set of our simulated images with existing synthetic datasets GTA5 and SYNTHIA. Besides, we show that our simulated data is beneficial for pre-training. Finally, we verify the necessity of each component in the proposed SDR method.

(a) Synthetic dataset → Cityscapes
Dataset Size Net Road SW Build Wall Fence Pole TL TS Veg Terr Sky PR Rider Car Truck Bus Train Motor Bike mIoU
SYNTHIA 9.4k FCN8s 37.0 22.8 63.5 0.1 0 4.8 0 0 71.1 0 73.1 35.1 4.6 25.7 0 6.4 0 0 0 18.1
GTA5 24.9k 34.3 16.3 69.2 12.8 12.0 7.7 0 0 75.7 15.7 65.5 26.9 0 38.2 10.4 1.8 0 0 0 20.3
SceneX+RA 450 51.6 17.8 48.4 0 0 1.5 0.8 0 61.9 6.2 13.5 0.3 1.5 0.1 1.3 2.1 0 0 2.0 10.9
SceneX+SDR 450 66.1 25.0 56.3 0.5 0 8.9 0.5 0 68.8 7.0 41.7 21.5 10.6 28.8 6.0 0.3 0 0 12.6 18.6
SYNTHIA 9.4k DeepLabv2 45.4 18.6 66.8 15.2 10.8 16.6 11.6 0.6 77.1 16.7 65.3 39.6 2.1 49.9 8.9 11.6 0 5.5 0 18.3
GTA5 24.9k 12.3 19.9 40.8 1.6 0 15.5 0.7 3.9 75.3 0 70.6 38.3 2.4 44.9 0 14.0 0 0.2 6.6 24.2
SceneX+RA 450 63.8 10.4 56.1 0.1 0 3.1 2.1 0.1 57.1 0.9 20.7 5.2 1.7 6.4 1.4 1.6 0.3 0.9 1.1 12.3
SceneX+SDR 450 70.6 18.6 63.1 4.0 0.4 10.4 0.2 0.7 64.2 3.6 40.6 27.1 3.7 30.0 2.0 0 0 1.2 8.5 18.4
(b) Synthetic dataset → CamVid
Dataset Size Net Sky Build Pole Road SW Tree Sign Fence Car PR Bike mIoU
SYNTHIA 9.4k FCN8s 81.6 65.2 0.7 63.6 42.1 47.4 0 0 46.9 19.1 0 33.3
GTA5 24.9k 75.6 67.8 0 66.4 43.3 56.8 0 0 53.8 0 0 33.1
SceneX+RA 450 42.7 56.0 1.1 35.9 44.1 41.3 0 0 25.3 0 0 22.4
SceneX+SDR 450 81.7 53.7 2.6 69.3 44.9 29.4 1.0 0 24.7 10.7 5.7 29.6
SYNTHIA 9.4k DeepLabv2 65.6 62.2 10.3 55.3 36.1 47.6 1.6 0 50.1 27.6 4.6 32.6
GTA5 24.9k 58.3 63.4 7.5 34.5 31.3 53.8 11.7 22.2 65.9 10.1 0 32.6
SceneX+RA 450 64.6 63.5 2.3 55.1 36.4 44.1 4.5 0 33.6 0 2.1 27.8
SceneX+SDR 450 68.1 55.6 7.2 60.1 45.8 42.1 12.2 0 35.7 14.2 4.2 31.4
Table 2. Segmentation accuracy on the (a) Cityscapes and (b) CamVid datasets. Four synthetic training sets are compared: SYNTHIA, GTA5, SceneX with RA (random attributes), and SceneX with SDR (ours). Two networks are used, FCN8s and DeepLabv2. We highlight the numbers where our method SDR gives the highest accuracy for the corresponding class.

5.1. Experimental Setting

Datasets for attribute training and testing. We use two real-world datasets to train the attributes of SceneX. The Cityscapes dataset (Cordts et al., 2016) contains 2,975 images in the training set and 500 images in the validation set, all of size 2048×1024. We select 500 images from the training set for attribute training and calculate model accuracy on the validation set. We down-sample the images to lower resolutions during attribute training and testing. For the pre-training experiment in Section 5.3, we fine-tune the pre-trained network on the training set at a reduced image resolution, and report the results on the validation set at the original size. The CamVid dataset (Brostow et al., 2008) contains 367 and 233 images for training and testing, respectively. We use the training set for attribute training and compute model accuracy on the test set. The dataset images have a fixed spatial resolution of 960×720, and we down-sample them in all settings.

Datasets for comparison. We compare our simulated dataset (named UnityScene) with two existing synthetic datasets, GTA5 (Richter et al., 2016) and SYNTHIA (Ros et al., 2016). GTA5 consists of 24,966 images with a resolution of 1914×1052, obtained from the GTA5 video game. The ground truth annotations are compatible with the Cityscapes dataset, which contains 19 categories. SYNTHIA (Ros et al., 2016) is a dataset of synthetic images of urban scenes; the rendering covers a variety of environments and weather conditions. We adopt the SYNTHIA-RAND-CITYSCAPES subset, which contains 9,400 images. The 19 categories in UnityScene, GTA5 and SYNTHIA are consistent.

Evaluation metric. We use the commonly used mean intersection over union (mIoU) as the evaluation metric.
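For reference, a minimal mIoU computation might look like the following. This averages per-class IoU over classes present in either map; it is a common variant of the metric, not necessarily the exact evaluation script used here.

```python
import numpy as np

# Minimal mIoU sketch: per-class intersection-over-union, averaged over
# classes that appear in either the prediction or the ground truth.

def mean_iou(pred, gt, n_classes):
    """pred, gt: integer class-label arrays of the same shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Benchmark implementations typically accumulate a confusion matrix over the whole test set and may also ignore a void label, but the per-class ratio is the same.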

Implementation details. For the policy network, we deploy a three-layer MLP with a hidden dimension of 256 and an output dimension ($D$) of 10. We use the Adam optimizer with a fixed learning rate. The 23 attributes are first permuted and then manually grouped into 2-8 attributes per group. Details of attribute grouping can be found in the supplementary material.

For the segmentation model, we deploy the widely used FCN8s (Long et al., 2015) and DeepLabv2 (Chen et al., 2017), which both adopt VGG16 (Simonyan and Zisserman, 2014) as the backbone. During training, we use the SGD optimizer. Following Zhao et al. (Zhao et al., 2017), we deploy the poly learning rate decay, multiplying the base learning rate by the factor $(1 - \frac{iter}{max\_iter})^{power}$.
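Assuming the standard poly schedule (the usual choice of power is 0.9; the exact base rate and power used here are not stated in this text), the decay can be written as:

```python
# Poly learning-rate schedule as in Zhao et al.: scale the base rate by
# (1 - iter / max_iter) ** power. power = 0.9 is the common choice, assumed
# here since the exact value is not stated.

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    return base_lr * (1 - cur_iter / max_iter) ** power
```

The rate starts at `base_lr` and decays smoothly to zero at `max_iter`, which suits fixed-iteration segmentation training.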

We compute the accuracy score on the real-world validation set after training the segmentation model for 1,000 iterations on simulated images, and then update the policy network; this repeats 50 times. It takes 2.2 and 3.1 seconds to obtain an image and its segmentation label at the two spatial resolutions, respectively, on an AMD Ryzen Threadripper 2950X CPU. Besides rendering, we use one RTX 2080Ti GPU for the deep learning experiments. The simulated dataset contains 180 images during the training process.

Figure 5. Method comparison on the Cityscapes validation set. (Top:) we choose to optimize 3, 5, and 7 correlated attributes, respectively. (Bottom:) we report the search time of optimizing 7 attributes. Four attribute learning methods are compared. SDR gives the best accuracy while consuming much less time than attribute descent and random search.

5.2. Comparative Study

Effectiveness of SDR over the random attributes baseline. We render datasets using random attributes and attributes optimized by SDR, respectively. We train FCN8s and DeepLabv2 on these datasets and compare the model performance on Cityscapes and CamVid datasets. Results are summarized in Table 2 and Fig. 5.

Figure 6. Examples of synthetic images generated by random attributes, attribute descent, and SDR within SceneX. Random attributes randomize object positions within a large range. Attribute descent tends to place visually obvious objects (e.g., building, tree) close to the road, resulting in severe overlap. In comparison, we observe that SDR places objects at more reasonable positions, such as persons and riders on the sidewalk, trees on the terrain, and buildings away from the road for less occlusion.

It is clear from Table 2 and Fig. 5 that attributes learned through SDR are significantly superior to random attributes in synthesizing effective datasets. For example, on the Cityscapes dataset, the mIoU produced by our method (SceneX+SDR) is higher than random attributes (SceneX+RA) by +7.7% and +6.1% using FCN8s and DeepLabv2, respectively. Such an advantage exists in most classes. A similar trend can be observed on CamVid.

Comparing SDR with other attribute optimization methods. In Fig. 5, we compare SDR with several attribute learning methods, including attribute descent (Yao et al., 2019) and random search. Random search samples many sets of attributes and finds the best attribute combination by brute-force search. Because the compared methods are considered not scalable w.r.t. the number of attributes, this experiment optimizes only a fraction of the total 23 attributes. Specifically, we use the 7 correlated object position attributes (see the Supplementary Material for details), from which we select 3, 5, and 7 attributes, forming three sets of experiments.

From the perspective of segmentation accuracy, we observe that SDR consistently outperforms the competing algorithms. Attribute descent does not consider attribute correlations and gives the lowest accuracy among the compared methods; it is even on par with the random attributes baseline when optimizing 3 attributes. This indicates that when synthesizing complex scenes, it is of vital importance to consider attribute correlations, because various types of objects are closely related in the scene structure. In this regard, both SDR and random search consider attribute correlation via joint optimization. The difference is that random search faces a much larger search space (it lacks the grouping operation). Therefore, as the number of attributes increases, it becomes harder for random search to find an appropriate attribute combination, so the performance gap between SDR and random search is larger with 7 attributes than with 3 and 5 attributes.
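The scalability argument can be made concrete by counting candidate combinations. With b discrete bins per attribute, joint random search over k attributes faces b^k combinations, whereas sweeping over groups of size g only needs (k/g)·b^g evaluations per pass. The numbers below (10 bins per attribute, as in our discretization; groups of two, as one of the configurations in the supplementary material) are illustrative:

```python
def joint_search_size(num_attrs, bins):
    # Brute-force search over all attributes jointly: bins ** num_attrs.
    return bins ** num_attrs

def grouped_search_size(num_attrs, bins, group_size):
    # One greedy pass over groups: each group is searched exhaustively,
    # so the cost is (num_attrs / group_size) * bins ** group_size.
    assert num_attrs % group_size == 0
    return (num_attrs // group_size) * bins ** group_size

space_random = joint_search_size(7, 10)        # 10**7 combinations
space_grouped = grouped_search_size(8, 10, 2)  # 4 groups of 2 -> 400 evaluations
```

Each evaluation here stands for an expensive render-train-validate cycle, which is why the exponential joint space is prohibitive while the grouped sweep is tractable.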

From the perspective of efficiency, our optimization method converges faster than attribute descent and random search (saving about 40% of the time), owing to the discretization and grouping operations. Specifically, when optimizing 7 attributes, the time needed for SDR, attribute descent, and random search is 17h, 29h, and 29h, respectively. When optimizing 23 attributes, our method takes 52h, while the time for the other two methods would increase proportionally.

We show examples of synthetic images produced by different methods within SceneX in Fig. 6. Random attributes randomize object positions within a large range. Attribute descent tends to place visually obvious objects (e.g., buildings, trees) close to the road, resulting in severe overlap and a crowded scene. In comparison, SDR finds a more appropriate attribute combination, resulting in a more reasonable scene. For example, SDR places pedestrians and riders on the sidewalk, and trees on the terrain.

Comparing optimized SceneX with GTA5 and SYNTHIA as effective training sets. We respectively use GTA5, SYNTHIA, and SceneX (optimized by SDR) as training data, and use Cityscapes and CamVid as testing data in Table 2. We observe that SceneX (by SDR) produces promising accuracy: very competitive on Cityscapes compared with SYNTHIA, and slightly lower than SYNTHIA and GTA5 on CamVid. In important classes such as road, bicycle and rider, SceneX exhibits the highest segmentation accuracy.

For two understandable reasons, models trained with SceneX are not superior to those trained with SYNTHIA and GTA5. First, as shown in Table 2, SceneX only contains 450 images, much fewer than the 24,966 and 9,400 images in GTA5 and SYNTHIA, respectively. This is because SceneX has a limited number of 3D assets, which cannot support the content diversity of a database as large as several thousand images. Second, because GTA5 and SYNTHIA images are collected from video games carefully designed by professionals, their 3D assets are much more realistic than those of SceneX. These two limitations will be addressed in our next version by including more diverse and realistic 3D models. Here we emphasize an unparalleled advantage of SceneX: SYNTHIA and GTA5 only contain static images, while SceneX content can be freely edited. The strength of such content editability is obvious: only 450 images can provide very promising segmentation accuracy on real-world datasets.

Figure 7. Evaluation of dataset abilities in pre-training. We compare ImageNet, SceneX+RA, and SceneX+SDR. The model is pre-trained on synthetic data and fine-tuned and tested on Cityscapes and CamVid. We use statistical significance analysis to show the training stability. "*" means statistically significant (i.e., 0.01 < p-value < 0.05) and "**" means statistically very significant (i.e., p-value < 0.01).

5.3. Simulation as Pre-training

Here we compare SceneX (SDR) with ImageNet and SceneX (random attributes, RA) in their ability for model pre-training. We use FCN8s as the segmentation model. Model fine-tuning is performed on the Cityscapes and CamVid datasets, respectively. Results are shown in Fig. 7. They indicate that using SceneX+SDR for pre-training yields higher accuracy than both ImageNet and SceneX+RA, and this comparison is statistically significant. Besides, SceneX with random attributes also yields statistically higher accuracy than ImageNet. These results suggest that synthetic data optimized towards the target domain (i.e., Cityscapes or CamVid) has the potential to be a more effective source for model pre-training.
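The paper does not state which significance test produces the reported p-values; as one self-contained possibility, a two-sample permutation test on mean mIoU across repeated training runs would serve. The sketch below is such an assumed test, not the authors' exact procedure:

```python
import numpy as np

def permutation_p_value(a, b, num_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    `a` and `b` are, e.g., mIoU scores from repeated runs of two
    pre-training schemes. The p-value is the fraction of random label
    shufflings whose mean difference is at least as large as observed.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(num_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return count / num_perm

# Hypothetical mIoU scores for SceneX+SDR vs. ImageNet pre-training runs.
p = permutation_p_value([19.0, 19.2, 19.1, 18.9, 19.3],
                        [16.0, 16.2, 16.1, 15.9, 16.3])
```

Under the paper's convention, p < 0.01 would be marked "**" and 0.01 < p < 0.05 would be marked "*".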

Figure 8. Ablation study for the SDR method. We remove the following components one at a time: relaxation (R), attribute grouping (G), considering attribute correlation (Corr), attributes for environments (AE), attributes for object position (AP), and attributes for object density (AD).

5.4. Ablation Study

In this section, we demonstrate the necessity of each individual component in SDR. The details of the ablation study are shown in Fig. 8.

Relaxation is necessary. It adds variance to the discrete attribute values and thus increases the diversity of generated scenes. Removing the relaxation process leads to an mIoU drop of 3.5%.

Optimizing attributes in groups is beneficial. Without grouping, the search space becomes very large, and the algorithm may fall into inferior local optima, causing the mIoU to drop by 7.9%.

Necessity of considering attribute correlation. Without manually grouping correlated attributes into the same group, mIoU will drop from 19.0% to 16.1%. Fig. 6 shows that unreasonable scene structures will be generated in this case.

Importance of different types of global attributes. Three types of attributes are optimized: those related to the environment, object position, and object density. Removing each attribute category in turn (they have 8, 7, and 8 attributes, respectively) causes the mIoU to drop by 3.4%, 1.2%, and 2.1%, respectively. This indicates that the imaging condition, scene layout (object position), and object density are essential in determining the scene content.

Figure 9. Impact of the number of 3D assets and the number of simulated images on test accuracy. We report mIoU (%) on the Cityscapes validation set.

5.5. Important Parameters

Here, we analyze the impact of some important parameters in our data simulation method. The parameters include the number of 3D assets and simulated images.

As shown in Fig. 9, using all of the 3D assets within SceneX (full asset) consistently yields higher accuracy than using half of the assets (half asset). This indicates the importance of the number of assets used in our engine. Since SceneX is extendable, segmentation accuracy can be further improved by adding more 3D models.

Besides, the number of simulated images also matters. First, simulating few images (e.g., 180, 450) in the half-asset case harms test accuracy, possibly due to the absence of some good 3D models. Second, as the number of simulated images increases, the test accuracy tends to increase slightly, except for the case of 1,350 images, which we attribute to the training process. Overall, the test accuracy is stable at around 19% mIoU, which is quite promising for real-world segmentation compared with existing synthetic datasets.

6. Conclusions

Due to its convenience of acquiring ground truth labels at a large scale, data simulation is becoming a promising solution to the problem of lacking annotated data. Simulated data offers us a unique opportunity in content adaptation, i.e., editing image content to generate a training set useful for the target domain. This paper proposes a scalable solution towards complex scene synthesis for training semantic segmentation models. Our contribution is two-fold. First, we introduce a new 3D scene generation engine, SceneX, which constructs scenes based on global-level attributes such as illumination and object density. Second, our solution explicitly considers attribute correlation, and its structure follows a discretization-and-relaxation strategy, making it uniquely suitable for the challenging scene generation problem at hand. We show that our optimized dataset is consistently superior to that generated by random attributes. With only 450 images, the optimized SceneX dataset comes very close to the performance of GTA5 and SYNTHIA, which contain many thousands of realistic images. These results strongly support the idea of content adaptation. In the future, we will collect more diverse and more realistic 3D models and dive deeper into this interesting area.

References

  • Bak et al. (2018) Slawomir Bak, Peter Carr, and Jean-Francois Lalonde. 2018. Domain adaptation through synthesis for unsupervised person re-identification. In Proceedings of the European Conference on Computer Vision.
  • Barbosa et al. (2018) Igor Barros Barbosa, Marco Cristani, Barbara Caputo, Aleksander Rognhaugen, and Theoharis Theoharis. 2018. Looking beyond appearances: Synthetic training data for deep cnns in re-identification. Computer Vision and Image Understanding 167 (2018), 50–62.
  • Brostow et al. (2009) Gabriel J Brostow, Julien Fauqueur, and Roberto Cipolla. 2009. Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters 30, 2 (2009), 88–97.
  • Brostow et al. (2008) Gabriel J Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla. 2008. Segmentation and recognition using structure from motion point clouds. In European conference on computer vision. Springer, 44–57.
  • Chen et al. (2017) Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. 2017. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40, 4 (2017), 834–848.
  • Chen et al. (2019) Yuhua Chen, Wen Li, Xiaoran Chen, and Luc Van Gool. 2019. Learning semantic segmentation from synthetic data: A geometrically guided input-output adaptation approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1841–1850.
  • Cordts et al. (2016) Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3213–3223.
  • Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 248–255.
  • Deng et al. (2018) Weijian Deng, Liang Zheng, Qixiang Ye, Guoliang Kang, Yi Yang, and Jianbin Jiao. 2018. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 994–1003.
  • Dosovitskiy et al. (2017) Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. 2017. CARLA: An open urban driving simulator. arXiv preprint arXiv:1711.03938 (2017).
  • Gaidon et al. (2016) Adrien Gaidon, Qiao Wang, Yohann Cabon, and Eleonora Vig. 2016. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4340–4349.
  • Geiger et al. (2013a) Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. 2013a. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research 32, 11 (2013), 1231–1237.
  • Geiger et al. (2013b) Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. 2013b. Vision meets Robotics: The KITTI Dataset. International Journal of Robotics Research (2013).
  • Geiger et al. (2012) Andreas Geiger, Philip Lenz, and Raquel Urtasun. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems. 2672–2680.
  • Gretton et al. (2012) Arthur Gretton, Karsten Borgwardt, Malte J Rasch, Bernhard Schoelkopf, and Alexander Smola. 2012. A Kernel Two-Sample Test. Journal of Machine Learning Research 13 (2012), 723–773.
  • Hattori et al. (2015) Hironori Hattori, Vishnu Naresh Boddeti, Kris M Kitani, and Takeo Kanade. 2015. Learning scene-specific pedestrian detectors without real data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3819–3827.
  • Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems.
  • Hoffman et al. (2017) Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A Efros, and Trevor Darrell. 2017. Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213 (2017).
  • Hoffman et al. (2016) Judy Hoffman, Dequan Wang, Fisher Yu, and Trevor Darrell. 2016. Fcns in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649 (2016).
  • Huang et al. (2018) Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. 2018. Multimodal Unsupervised Image-to-image Translation. In European Conference on Computer Vision.
  • Isola et al. (2017) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-To-Image Translation With Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Juliani et al. (2018) Arthur Juliani, Vincent-Pierre Berges, Esh Vckay, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. 2018. Unity: A General Platform for Intelligent Agents. CoRR abs/1809.02627 (2018).
  • Kar et al. (2019) Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, and Sanja Fidler. 2019. Meta-Sim: Learning to Generate Synthetic Datasets. In Proceedings of the IEEE International Conference on Computer Vision.
  • Kolve et al. (2017) Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. 2017. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv (2017).
  • Li et al. (2018) Peilun Li, Xiaodan Liang, Daoyuan Jia, and Eric P Xing. 2018. Semantic-aware grad-gan for virtual-to-real urban scene adaption. arXiv preprint arXiv:1801.01726 (2018).
  • Li et al. (2017) Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. 2017. Universal Style Transfer via Feature Transforms. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). 386–396.
  • Long et al. (2015) Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3431–3440.
  • Luo et al. (2019) Yawei Luo, Liang Zheng, Tao Guan, Junqing Yu, and Yi Yang. 2019. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2507–2516.
  • Pepik et al. (2012) Bojan Pepik, Michael Stark, Peter Gehler, and Bernt Schiele. 2012. Teaching 3d geometry to deformable part models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Prakash et al. (2019) Aayush Prakash, Shaad Boochoon, Mark Brophy, David Acuna, Eric Cameracci, Gavriel State, Omer Shapira, and Stan Birchfield. 2019. Structured domain randomization: Bridging the reality gap by context-aware synthetic data. In Proceedings of the IEEE International Conference on Robotics and Automation. 7249–7255.
  • Richter et al. (2016) Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. 2016. Playing for Data: Ground Truth from Computer Games. In European Conference on Computer Vision.
  • Ros et al. (2016) German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. 2016. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3234–3243.
  • Ruiz et al. (2018) Nataniel Ruiz, Samuel Schulter, and Manmohan Chandraker. 2018. Learning to simulate. arXiv preprint arXiv:1810.02513 (2018).
  • Sakaridis et al. (2018) Christos Sakaridis, Dengxin Dai, and Luc Van Gool. 2018. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision 126, 9 (2018), 973–992.
  • Satkin et al. (2012) Scott Satkin, Jason Lin, and Martial Hebert. 2012. Data-driven scene understanding from 3D models. In European Conference on Computer Vision.
  • Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • Sun and Zheng (2019) Xiaoxiao Sun and Liang Zheng. 2019. Dissecting Person Re-identification from the Viewpoint of Viewpoint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Tobin et al. (2017) Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. 2017. Domain randomization for transferring deep neural networks from simulation to the real world. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems.
  • Tremblay et al. (2018) Jonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Cem Anil, Thang To, Eric Cameracci, Shaad Boochoon, and Stanley T. Birchfield. 2018. Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018).
  • Tsai et al. (2018) Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. 2018. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7472–7481.
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8, 3-4 (1992), 229–256.
  • Wright (2015) Stephen J Wright. 2015. Coordinate descent algorithms. Mathematical Programming 151, 1 (2015), 3–34.
  • Wu et al. (2018) Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. 2018. Building Generalizable Agents with a Realistic and Rich 3D Environment. ArXiv abs/1801.02209 (2018).
  • Yao et al. (2019) Yue Yao, Liang Zheng, Xiaodong Yang, Milind Naphade, and Tom Gedeon. 2019. Simulating Content Consistent Vehicle Datasets with Attribute Descent. arXiv preprint arXiv:1912.08855 (2019).
  • Zhang et al. (2018) Yiheng Zhang, Zhaofan Qiu, Ting Yao, Dong Liu, and Tao Mei. 2018. Fully convolutional adaptation networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 6810–6818.
  • Zhao et al. (2017) Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. 2017. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Zheng et al. (2015) Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. 2015. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision. 1116–1124.
  • Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision.
  • Zou et al. (2018) Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. 2018. Domain adaptation for semantic segmentation via class-balanced self-training. arXiv preprint arXiv:1810.07911 (2018).

Appendix A Details of SceneX

A.1. 3D Scene Classes and Assets

To perform segmentation on a rich range of objects, we have collected a large number of 3D assets for the engine. SceneX contains 19 classes of virtual objects (e.g., car, pedestrian, and building), which are compatible with Cityscapes (Cordts et al., 2016). Specifically, SceneX contains 200 pedestrian models, 195 cars, 28 buses, and 39 trucks from existing model repositories (Sun and Zheng, 2019; Yao et al., 2019), with necessary modifications so that they are compatible with our engine. Besides, we collect 106 buildings, 18 bicycles, and 19 trees, among others. There are also 14 skybox models to simulate different weather conditions. Some sample object models are shown in Fig. 10.

A.2. Engine design

As illustrated in Fig. 11, our system is mainly composed of the Unity asset database, the Unity rendering engine, and a Python API. The Unity database contains 19 classes of 3D assets with various appearances, and it is extendable. The Unity rendering engine defines the scene structure and is able to change the environment variables. The scene structure features a "line-based" design. After the scene is constructed, the camera moves and captures the scene, outputting a set of synthetic images as well as corresponding ground truth segmentation labels, which are automatically generated through the rendering buffer. The Python API allows us to control the scene structure by modifying the global attributes within SceneX through Python programming.

The "line-based" design enables us to control the scene structure with few parameters. Specifically, as shown in Fig. 11, at the center of a scene is a road map, around which are a group of lines on which objects are placed. Objects of the same type (e.g., bicycles) are placed along a line parallel to the road, and they share the same distance to the road. Objects on the same line are tied together, meaning that changing the position of one object moves all objects on that line by the same magnitude. This object placement strategy not only allows us to easily adjust the distance between objects and the road, but also enables precise object density changes by modifying the interval between objects on the same line.

Thus, the scene generation process can be viewed as a sequential process. Firstly, the positions of the lines are determined, and the types of objects to be placed on them are determined as well. Then, objects such as buildings, persons and cars are randomly picked from the Unity asset database, and placed onto their corresponding lines. After all the lines are filled with objects, the illumination changes and the camera moves to capture the scene. After that, the scene is destroyed and another scene is constructed. This scene generation process repeats several times, such that the generated dataset consists of various scenes.
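The line-based placement described above can be sketched in a few lines of code. This is an engine-agnostic illustration (the function name, 2D coordinates, and asset names are our assumptions; the real engine positions objects in Unity world space):

```python
import random

def place_objects_on_line(road_length, distance_to_road, interval, assets, seed=0):
    """Place one object class along a line parallel to the road.

    All objects of the class share `distance_to_road` (the line's offset
    from the road); `interval` controls object density along the line.
    Asset models are picked at random from the class's asset pool.
    """
    rng = random.Random(seed)
    placements = []
    x = 0.0
    while x <= road_length:
        placements.append({"asset": rng.choice(assets),
                           "position": (x, distance_to_road)})
        x += interval
    return placements

# A 100-unit road with trees 6 units away, one tree every 10 units.
trees = place_objects_on_line(100.0, 6.0, 10.0, ["tree_a", "tree_b"])
```

Adjusting `distance_to_road` corresponds to the position attributes, and adjusting `interval` corresponds to the density attributes optimized by SDR.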

Label acquisition. The advantage of data synthesis is that labels can be obtained freely. Given an image of the scene, pixel-level ground truths can be obtained through the rendering buffer.

A.3. Attribute design

There are 23 global attributes within SceneX, classified into three groups, i.e., 8 for the environment, 7 for object position (i.e., line position), and 8 for object density. The details of these attributes are listed as follows.

Environment variables.

  • Illumination intensity, which changes the brightness of the virtual environment.

  • Illumination angle x, which changes the rotation angle of the illumination along the x axis.

  • Illumination angle y, which changes the rotation angle of the illumination along the y axis.

  • Camera position probability, which changes the probability of the camera being on the left or right side of the road.

  • Camera position x, which changes the camera position along the x axis.

  • Camera position y, which changes the camera position along the y axis.

  • Camera rotation x, which changes the camera rotation angle along the x axis.

  • Camera rotation y, which changes the camera rotation angle along the y axis.

Position variables for context splines.

  • Building position, which changes the parallel distance between buildings (and trains) and the road.

  • Fence position, which changes the parallel distance between fences (and walls) and the road.

  • Tree position, which changes the parallel distance between trees and the road.

  • Bicycle position, which changes the parallel distance between bicycles (and motorcycles) and the road.

  • Person position, which changes the parallel distance between persons (and riders) and the road.

  • Pole position, which changes the parallel distance between poles (and traffic signs) and the road.

  • Car position, which changes the parallel distance between cars (and buses, trucks) and the road.

Density variables for context splines.

  • Building probability, which changes the occurrence probability of buildings (and trains).

  • Building interval, which changes the interval between buildings.

  • Fence interval, which changes the interval between fences (and walls).

  • Tree interval, which changes the interval between trees.

  • Bicycle interval, which changes the interval between bicycles (and motorcycles).

  • Person interval, which changes the interval between pedestrians (and riders).

  • Pole interval, which changes the interval between poles (and traffic signs).

  • Car interval, which changes the interval between cars (and buses and trucks).

Appendix B Details of attribute training

B.1. Cityscapes as target dataset

When using Cityscapes as the target dataset, we simulate synthetic images at different resolutions for the training and testing stages. We manually permute the 23 global attributes into three groups, i.e., 7 for object positions, 8 for object densities, and 8 for the environment. Furthermore, we split the 8 object density variables into four groups, with two attributes in each group. Thus, the 23 attributes are split into six groups, and they are optimized using SDR in groups.

During attribute training, each group of attributes is optimized for up to 50 updates of the policy network. For each update, one dataset of 180 images is generated, and an accuracy score is calculated to update the policy network.
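The outer loop for one attribute group can be sketched as follows. The three callbacks (`sample_attributes`, `render_dataset`, `evaluate`) are placeholders for the policy sampling, SceneX rendering, and 1,000-iteration segmentation training described above; as a simplification, this sketch tracks the best-scoring attributes rather than performing the actual policy-gradient update:

```python
import random

def optimize_attribute_group(sample_attributes, render_dataset, evaluate,
                             num_updates=50, dataset_size=180):
    """One SDR optimization round for a single attribute group.

    `sample_attributes` draws a candidate attribute setting,
    `render_dataset` simulates a dataset for it, and `evaluate` trains
    the segmentation model and returns mIoU on the real validation set.
    The callbacks are injected so the sketch stays engine-agnostic.
    """
    best_attrs, best_score = None, float("-inf")
    for _ in range(num_updates):
        attrs = sample_attributes()
        dataset = render_dataset(attrs, dataset_size)
        score = evaluate(dataset)       # the reward signal for the policy
        if score > best_score:
            best_attrs, best_score = attrs, score
    return best_attrs, best_score

# Toy demo: a single scalar "attribute" whose best value is 0.5
# under a synthetic quadratic reward.
rng = random.Random(0)
best_attrs, best_score = optimize_attribute_group(
    sample_attributes=lambda: rng.random(),
    render_dataset=lambda attrs, n: [attrs] * n,
    evaluate=lambda ds: -(ds[0] - 0.5) ** 2,
)
```

In the full method, this loop runs once per group (six groups in total), with the policy network gradually concentrating probability on the bins that yield higher validation mIoU.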

B.2. CamVid as target dataset

When using CamVid as the target dataset, we simulate synthetic images at the same resolution at both the training and testing stages. The 8 object location variables are split into four groups, with two attributes in each group. Thus, the 23 attributes in this case are split into six groups. We simulate 180 images in each optimization step, as with Cityscapes.

Figure 12. Sample images of existing synthetic datasets (a) GTA5 and (b) SYNTHIA and (c) simulated dataset using SDR within SceneX (SceneX+SDR) as well as the target real-world dataset (d) Cityscapes.
Figure 13. Sample images of (top) simulated images using SDR within SceneX (SceneX+SDR) as well as (bottom) the target dataset CamVid.