EXPO-HD: Exact Object Perception using High Distraction Synthetic Data

by Roey Ron, et al.

We present a new labeled visual dataset intended for use in object detection and segmentation tasks. The dataset consists of 5,000 synthetic photorealistic images with their corresponding pixel-perfect segmentation ground truth. Our goal is to create a photorealistic 3D representation of a specific object and use it within a simulated training-data setting to achieve high accuracy on manually gathered and annotated real-world data. Expo Markers were chosen for this task, fitting our requirement of an exact object thanks to their well-defined texture, size and 3D shape. An additional advantage is the availability of this object in offices around the world, allowing easy testing and validation of our results. We generate the data using a domain randomization technique that also places other photorealistic objects in the scene, known as distraction objects. These objects provide visual complexity, occlusions and lighting challenges that help the model gain robustness in training. We are also releasing our manually labeled real-image test dataset. This white paper provides strong evidence that photorealistic simulated data can be used in practical real-world applications as a more scalable and flexible solution than manually captured data. https://github.com/DataGenResearchTeam/expo_markers




1 Introduction

Nowadays, supervised deep learning algorithms, more specifically Convolutional Neural Networks, outperform classical machine learning and computer vision algorithms on all standard tasks. These algorithms reach state-of-the-art performance, but at the same time require large amounts of labeled data to provide a robust solution for real-world scenarios [8]. Data collection and labeling is an expensive, time-consuming process, and the quality of the data varies between providers, especially in pixel-wise tasks such as segmentation. In some tasks, it is almost impossible for humans to extract the labels (such as depth maps or surface normal maps). Sun et al. [10] discuss the success of deep learning as a function of dataset size and propose that the lack of labeled data will remain a bottleneck for improving results. They point out that, while GPU computation power and model sizes have continued to increase over the last years, the size of the largest training datasets has surprisingly remained constant. They show that performance on vision tasks increases logarithmically with the volume of training data, and that higher-capacity models are better at efficiently using larger datasets.

For many computer vision applications, including autonomous driving, security cameras, smart-store cameras and interactive robotics, the end user expects high-quality results that are robust and reliable. For these applications, the go-to solution is to use the most proven and highest-quality supervised machine learning algorithms, trained with large amounts of data. This is in stark contrast to few-shot learning, weakly supervised learning, unsupervised learning or deep feature extraction, which are popular topics in academia but are far from ready for real-world applications.

(a) Sample from the synthetic image dataset
(b) The sample from the synthetic image dataset with labels
(c) Sample from the real image dataset
(d) The sample from the real image dataset with labels
Figure 2: Samples from the EXPO-HD Dataset

1.1 Synthetic Datasets

The promise of synthetic data has been clear since its inception: datasets generated through computational algorithms that mimic the semantic and visual patterns found in the real world. Such data can train machine learning algorithms without compromising privacy (e.g., facial data) or being susceptible to high-level biases in the data (e.g., by generating equal amounts of males and females in the dataset). Synthetic data is also highly scalable (more data can always be generated), and edge cases (gaps in the data where real data would be hard or unreasonable to collect) can be generated on demand. In the past years, synthetic data has shown promise across a range of verticals, from medical research, where patient privacy is a high priority, to fraud detection, where synthetic datasets can be used to test and increase the robustness of security solutions. In recent years, synthetic data generation has gained substantial popularity within the computer vision field [9, 4, 5] as a solution to the data bottleneck problem. There are two main approaches that help bridge the gap between the source domain (synthetic data) and the target domain (real data): domain randomization and domain adaptation.

1.2 Domain Randomization

The core idea behind Domain Randomization (DR) is to train a neural network on such a broad synthetic source domain that the model generalizes to the real-world target domain. This method is relevant when the exact target domain is unknown, highly variant or hard to mimic. The scene is randomized in non-realistic ways to force the neural network to learn in two ways. First, it learns the robust features of the scenes that are invariant under all randomizations. For example, if we randomize the lighting in the scene but keep the geometric structures constant, the network will learn to recognize the 3D structures and become more robust to unforeseen lighting conditions. Second, the random sub-domains generated by DR that are near the target domain teach the network to analyze the target domain without explicitly recreating it. For instance, [12, 11] apply DR techniques both to the scene structure (placing the objects) and to the textures. They use basic generic objects (e.g., a cube or pyramid without a predefined texture) to create the variance they need. In our case, we use photorealistic objects.
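To make the idea concrete, per-scene randomization of this kind can be sketched as a simple parameter sampler. The parameter names and ranges below are our own illustrative assumptions, not the actual generator used for EXPO-HD:

```python
import random

def sample_scene_params(rng=random):
    """Sample one randomized scene configuration for domain randomization.

    All ranges here are illustrative assumptions, not the values
    used in the EXPO-HD generation pipeline.
    """
    return {
        # Lighting is randomized while geometry stays constant, pushing
        # the network to rely on 3D structure rather than shading.
        "light_intensity": rng.uniform(0.2, 3.0),
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "light_elevation_deg": rng.uniform(10.0, 90.0),
        # Object counts and placement are fully randomized so that no
        # spatial prior can be learned from the synthetic layout.
        "num_markers": rng.randint(5, 17),       # ~11 on average
        "num_distractors": rng.randint(30, 70),  # ~50 on average
        "camera_distance_m": rng.uniform(0.3, 1.5),
    }

params = sample_scene_params()
```

Each rendered image would draw a fresh sample, so the network never sees the same lighting/layout combination twice.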

1.3 Domain Adaptation

Several works [3, 6] deal with the task of transferring images from a source visual domain to a separate target visual domain, in order to mimic the effectiveness of standard, collected and annotated visual data captured from the target domain. By closing the domain gap, the synthetic data can theoretically act as if it were captured from the target domain. Domain adaptation can also be applied to adjust the data to specific camera hardware, using a small amount of unlabeled images captured with the target camera. In our work, we use photorealistic images, which minimizes the visual domain gap. In future work we will explore the use of domain adaptation to further improve our results.

2 Contributions

2.1 Synthetic Dataset

The synthetic dataset consists of 5,000 synthetic photorealistic images with their corresponding pixel-perfect segmentation ground truth. Image resolution is 600x600 and each image contains, on average, 11 marker objects and 50 distraction objects.

2.2 Real Dataset

In order to evaluate the performance of our synthetic dataset, we trained a CNN exclusively on our synthetic dataset and tested it on a manually labeled real image dataset. The real image dataset consists of 200 images taken with a Samsung S10+ camera. Some of the images are quite simple, while others are more complicated and include occlusions and objects similar to the markers (e.g., pens, pencils, and other brands of markers). Figure 3 presents the instance distribution of the real image dataset.

Marker      Red   Green  Blue  Black  Total
#instances  203   202    202   205    812
Figure 3: Real Test Set Distribution
Figure 4: A sample from the real image dataset

2.3 Code

We share our code, which enables 'plug and play' inference and training with Mask R-CNN [2] on our dataset. Our code is based on detectron2 [13], which is fast, flexible and enables the use of various architectures (see detectron2's 'model zoo': detectron2/MODEL_ZOO.md).
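For orientation, detectron2 consumes datasets as lists of "standard dataset dicts". A minimal sketch of building one such record for a synthetic image follows; the field names match detectron2's documented format, but the helper function and the example file name are our own illustrative assumptions:

```python
def make_record(image_id, file_name, width, height, instances):
    """Build one image record in the 'standard dataset dict' layout
    that detectron2 expects for instance segmentation.

    `instances` is a list of (category_id, bbox_xywh, polygon) tuples.
    This helper is an illustrative sketch, not code from the released repo.
    """
    annotations = []
    for category_id, bbox, polygon in instances:
        annotations.append({
            "bbox": list(bbox),          # [x, y, w, h] in absolute pixels
            "bbox_mode": 1,              # BoxMode.XYWH_ABS in detectron2
            "category_id": category_id,  # e.g. 0..3 for the four marker colors
            "segmentation": [list(polygon)],  # flat [x1, y1, x2, y2, ...] polygon
        })
    return {
        "file_name": file_name,
        "image_id": image_id,
        "width": width,
        "height": height,
        "annotations": annotations,
    }

# Hypothetical example: one red marker instance in a 600x600 render.
record = make_record(0, "synth_000000.png", 600, 600,
                     [(0, (10, 20, 50, 80), (10, 20, 60, 20, 60, 100, 10, 100))])
```

A list of such records, registered via detectron2's dataset catalog, is all the framework needs to train on the synthetic images.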

3 Method

3.1 Data Generation

First, we created a photorealistic 3D representation of our target class, the Expo markers. These representations were created by 3D artists using accurate modeling.

In addition, a set of photorealistic items were used as distraction for the algorithm. These items were queried at random from the indoor environment section of the DataGen Asset Library, a library of hundreds of thousands of photorealistic items.

The targets were placed in the 3D scene, visible to the simulated camera lens. The distraction items were placed within the same 3D scene in order to create a challenging visual setting. The number of objects, backgrounds, scene lighting, occlusions and randomness of orientation do not attempt to resemble the distributions of a real-world scene; instead, they provide the algorithm with a more challenging dataset to train on. The main goal is to enable the algorithm to learn a robust representation that can handle extreme cases present in the real world.

Each image was rendered with Cycles rendering [1] and a pixel-perfect segmentation map was created. The output of this method is photorealistic images with pixel-perfect annotations.

3.2 Testing (training Mask R-CNN on our dataset)

Detectron2 [13] was used as our training platform. It is Facebook AI Research's next-generation software system implementing state-of-the-art object detection algorithms: https://github.com/facebookresearch/detectron2

3.3 Performance Metrics

mAP (mean Average Precision) is used as our main performance metric. For convenience, we define the notation used to discuss our results:

  • AP - mAP at Intersection over Union (IoU)=.50:.05:.95 (COCO primary metric).

  • AP50 - mAP at IoU=.50 (PASCAL VOC metric).

  • AP75 - mAP at IoU=.75 (strict metric).

  • AR100 - mean Average Recall given 100 detections per image.
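These metrics all rest on per-instance overlap between a prediction and the ground truth. A minimal axis-aligned box IoU, as used at the thresholds above, can be computed as:

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# 80 / 120 ≈ 0.667: a true positive at IoU=.50 but a miss at IoU=.75.
iou = box_iou((0, 0, 10, 10), (2, 0, 12, 10))
```

For segmentation the same ratio is taken over mask pixels rather than box areas, but the thresholding logic is identical.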

4 Results

4.1 Main Results

We achieved AP = 79.2, AP50 = 97.5 and AP75 = 95.2 on the real image test set, trained exclusively on our synthetic data, using a dataset size of 4096 synthetic images (see Table 1). We would like to mention that these results were achieved without any special manipulations, using detectron2's default training routine.

4.2 Mask R-CNN on Different Datasets

In order to provide some reference points for our results and a sense of Mask R-CNN's capabilities, we show results of Mask R-CNN with similar backbones trained on different datasets. Mask R-CNN [2] was originally trained on a subset of the COCO dataset [7] (which contains 80 classes), using the union of 80k training images and a 35k subset of validation images.

Train Set  Test Set  Backbone  AP    AP50  AP75
COCO       COCO      50-FPN    33.6  55.2  35.3
COCO       COCO      101-FPN   35.4  57.3  37.5
Meta-Sim   KITTI     50-FPN    N/A   77.5  N/A
Ours       Ours      101-FPN   79.2  97.5  95.2
Table 1: Performance of Mask R-CNN on different datasets. 50-FPN and 101-FPN stand for ResNet-50-FPN and ResNet-101-FPN, respectively.

In Table 1, we can see that with ResNet-101-FPN trained on COCO, Mask R-CNN achieves an AP of 35.4, while we achieved 79.2. That said, this comparison is asymmetrical for two main reasons. First, our dataset contains 4 classes while COCO contains 80. Second, the notion of a class is quite different: the classes in COCO are broader. For instance, COCO's 'keyboard' class includes many different types of keyboards, while in our dataset each class contains exactly one specific object.

Kar et al. [5] generated a synthetic dataset and trained Mask R-CNN with ResNet-50-FPN as the backbone. Their target domain was the KITTI dataset and they applied domain adaptation. They achieved an AP50 of 77.5 on the KITTI dataset with the easiest setup.

Comparison of this data and methodology will continue. This initial step allows us to show promising results that are relevant for real-world applications while training only on the synthetic data domain.

4.3 Performance as a Function of Dataset Size

In order to understand how dataset size affects performance, we trained Mask R-CNN on the following dataset sizes: 64, 96, 128, 192, 256, 384, 512, 1024, 2048 and 4096. For each dataset, we started from COCO's initial weights and used mini-batch SGD as the optimization algorithm with a mini-batch size of 4. The initial learning rate of 0.003 was reduced twice, at 60k and 80k iterations, by a factor of 0.1. Training ended at 90k iterations. Since our initial weights (COCO) were trained on real data and our training data is purely synthetic, it is interesting to monitor if and where over-fitting to synthetic data arises during the training process. Therefore, we used two validation sets: a synthetic set with 200 images and a real set with 40 images. For each training session, we chose the best weights by evaluating the network's mAP on our real image validation set every 500 iterations. Figure 5 presents the mAP as a function of dataset size, where for each dataset size the weights were selected as explained above. Figure 6 shows the evaluation results for each training session on the real image validation set. Figure 7 shows the evaluation results of training with the largest dataset size (4096) on both the real and the synthetic image validation sets. Figure 8 shows the evaluation results for all training sessions, on both the real and synthetic image validation sets.
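The step schedule described above (initial learning rate 0.003, decayed by a factor of 0.1 at 60k and 80k iterations, training ending at 90k) corresponds to a piecewise-constant function; a minimal sketch, with the function name being our own:

```python
def learning_rate(iteration, base_lr=0.003, steps=(60_000, 80_000), gamma=0.1):
    """Step schedule: multiply the base LR by `gamma` at each milestone."""
    lr = base_lr
    for step in steps:
        if iteration >= step:
            lr *= gamma
    return lr

# 0.003 until 60k, then 0.0003 until 80k, then 0.00003 until the end at 90k.
```

This is the same shape detectron2's default solver uses for its milestone-based schedules; only the milestones and base rate differ per config.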

Figure 5: mAP as function of dataset size
Figure 6: mAP when trained with different dataset sizes, evaluated on the real test set.
Figure 7: mAP when trained on dataset size of 4096, evaluated on the synthetic and the real test sets

5 Discussion

5.1 Real vs. Synthetic Object Placement

The real-world distribution of the number of items, the occlusions, the size of the objects and their orientations per image is unknown. The target domain for our dataset is an office setting, but each office in the world is unique. Ideally, we would want to generate data with office-space priors at scale, but for this specific task the semantic placement of office-space objects at scale was unreasonable.

To deal with the lack of valid priors, the placement and rotation of the objects were completely randomized. This ensured that no spatial prior was learned by the network due to a bias in the simulated data. Instead of aiming for the common case found in office settings, this data simulates the most challenging visual cases (including scale variation, occlusions, lighting variation, high object density and similar object classes) one could expect in the real world. Through experiments such as this, we see that generating photorealistic simulated data in extremely visually challenging settings is key to training robust algorithms that work even in edge cases.

5.2 Effect of Dataset Size on Performance

Figures 5 and 6 show that performance increases as a function of dataset size. This result fits well with the claims made by [10] that larger datasets result in better performance. In Figure 5 we see the quality of the network converging between dataset sizes of 2k and 4k training data points. This highlights a few possibilities that require further testing. Possibility 1: we need substantially more data to push the values further. Possibility 2: this simulated data generation requires additional variation, such as camera noise and blur, to further generalize to the real world. Possibility 3: the neural network architecture used has reached its qualitative limit, providing the best results it can given the data at hand.

Additionally, we can deduce two insights from Figure 7. First, the best results on the real data are achieved quite early in the training process, at less than 10k iterations. Second, later in the training process we observe a minor decrease in performance on the real data validation set, while performance on the synthetic data validation set continues to grow. This could be related to over-fitting of the network to the synthetic training data distribution.

5.3 Limitations

This approach may experience difficulties if instances of our target class, in our case the Expo Markers, change their visual appearance; for example, if they are broken, become dirty, or have their caps removed. For this reason, the main challenge of this method is to anticipate the edge cases of the target class. Due to our limited ability to define all edge cases, we recommend a healthy iterative approach: solving the base problem first and later moving on to additional, more complex scenarios and cases.

Another challenge we see is reliably modeling all of the variation in the scene. In this dataset, lighting, 3D spatial placement, backgrounds, the number of objects and the object classes inserted into the scene were strongly varied. Additional variations to consider for further testing are lighting color, directional emissive object lighting, motion blur and camera noise.

All models are wrong, but some are useful.

— George E. P. Box

This is a key takeaway. It is impossible to perfectly simulate the real world, but it is very possible to create useful simulations to train computer vision neural networks.

6 Conclusion

By creating photorealistic synthetic data in 3D environments and successfully training neural networks with it, we gain confidence that photorealism, in addition to variance, is useful for real-world applications. This is the case even when training exclusively on synthetic data.

We believe that our approach can be implemented and generalized to a wide range of items and have an impact in use-cases such as production and assembly lines in smart factories, standard items on shelves of smart stores, food products in smart refrigerators and visual understanding for robots and drones.

7 Future Work

7.1 Comparison to a Manually Gathered Training Set

Our next significant step in thoroughly testing the effectiveness of synthetic data is to manually collect a significant number of images of our target objects (Expo markers) in varied scenarios and manually label them with instance segmentation labels. We will then train Mask R-CNN on that manually gathered data, in addition to our Mask R-CNN already trained on synthetic data. This will enable a valid comparison between synthetic data and manually gathered data, using the same test set for the two trained networks.

7.2 Additional Testing

Additional tests will be carried out in future versions of the same paper:

  • Testing larger datasets is something we see as important to provide stronger guidelines on the effect of dataset size on trained network quality.

  • Testing additional types of variations in the data generation pipeline holds promise. Every variation added thus far has improved robustness of the model.

  • Testing various neural network backbones on the different size datasets will show if larger networks are required to utilize the information generated by the synthetic data.

  • Testing the result of training on a mix of real and synthetic data, to see the effects of adding a few manually gathered and annotated images into a large synthetic dataset.

7.3 Additional Modalities

Using synthetic data allows us to generate many kinds of labels, opening a new door to tasks that were previously infeasible due to a lack of data. Label types can be divided as follows:

  • 2D - segmentation, sub-segmentation, bounding boxes and key-points.

  • 3D - depth map, normal map, 3D key-points; specific sensors can also be modeled.

  • 4D - optical flow.

In our next release, we plan to add more types of labels to our images such as key-points, depth map and surface normal map.
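Because these labels are rendered rather than drawn by hand, many 2D label types can be derived mechanically from the segmentation map. For example, a tight bounding box falls out of a binary instance mask directly (a plain-Python sketch; the list-of-rows mask format is our own assumption):

```python
def mask_to_bbox(mask):
    """Tight (x_min, y_min, x_max, y_max) box around the 1-pixels of a
    binary instance mask given as a list of equal-length rows of 0/1.
    Returns None for an empty mask."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
# mask_to_bbox(mask) -> (1, 1, 2, 2)
```

The same principle extends to key-points, depth maps and surface normals: all are read directly from the renderer's internal state rather than annotated after the fact.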

7.4 Domain Adaptation

We plan to use noise-transfer techniques that attempt to simulate the noise of a target domain on top of our simulated image dataset, in order to minimize the domain gap without affecting the content of the image.


  • [1] Cycles Open Source Production Rendering. Cited by: §3.1.
  • [2] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §2.3, §4.2.
  • [3] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2017) Cycada: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213. Cited by: §1.3.
  • [4] M. Jalal, J. Spjut, B. Boudaoud, and M. Betke (2019) SIDOD: a synthetic image dataset for 3D object pose recognition with distractors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §1.1.
  • [5] A. Kar, A. Prakash, M. Liu, E. Cameracci, J. Yuan, M. Rusiniak, D. Acuna, A. Torralba, and S. Fidler (2019) Meta-sim: learning to generate synthetic datasets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4551–4560. Cited by: §1.1, §4.2.
  • [6] P. Li, X. Liang, D. Jia, and E. P. Xing (2018) Semantic-aware grad-gan for virtual-to-real urban scene adaption. arXiv preprint arXiv:1801.01726. Cited by: §1.3.
  • [7] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §4.2.
  • [8] N. O’Mahony, S. Campbell, A. Carvalho, S. Harapanahalli, G. V. Hernandez, L. Krpalkova, D. Riordan, and J. Walsh (2019) Deep learning vs. traditional computer vision. In Science and Information Conference, pp. 128–144. Cited by: §1.
  • [9] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez (2016) The synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3234–3243. Cited by: §1.1.
  • [10] C. Sun, A. Shrivastava, S. Singh, and A. Gupta (2017) Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pp. 843–852. Cited by: §1, §5.2.
  • [11] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 23–30. Cited by: §1.2.
  • [12] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield (2018) Training deep networks with synthetic data: bridging the reality gap by domain randomization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 969–977. Cited by: §1.2.
  • [13] Y. Wu, A. Kirillov, F. Massa, W. Lo, and R. Girshick (2019) Detectron2. Note: https://github.com/facebookresearch/detectron2 Cited by: §2.3, §3.2.

Appendix A Additional Figures

Figure 8: mAP for different dataset sizes, evaluated on both the synthetic and real validation sets
Figure 9: Samples from the Synthetic Dataset
Figure 10: Samples from the Real Dataset