Intelligent robots depend on reliable semantic segmentation of objects to support effective physical interactions. While convolutional neural networks (CNNs) have significantly improved the performance of segmentation solutions, two challenges remain. First, the pixel-level classification task of segmentation requires models that represent more complex distributions than those needed for object detection with bounding-box regression or for image classification. Second, the cost of annotating pixel-level segmentation data is prohibitive at the scale needed to train CNNs. Furthermore, many state-of-the-art semantic segmentation approaches do not readily extend to robot vision: models with millions of parameters that require billions of operations to classify each image remain impractical. Robot vision requires efficient CNN segmentation architectures that can be trained successfully on uncurated datasets acquired from robots interacting with the world, and that segment inherently noisy perception data in real time.
To this end, we present a novel perspective for developing robust semantic segmentation systems for robot perception. Specifically, our approach of transferring representations pretrained on synthetic segmentation data to real-time perception systems strictly improves performance by reducing bias in training. Synthetic segmentation datasets have the advantages of being scalable to millions of examples, giving perfect ground truth labels without extra annotation effort, and having less dataset bias.
Recent work has demonstrated that for large, non-real-time segmentation architectures, models pretrained on synthetic datasets can outperform models pretrained on ImageNet. For our task of robot vision, we explore whether this result holds for real-time segmentation architectures with less representational capacity by pretraining a real-time model with synthetic data and comparing its fine-tuned performance against a model pretrained on ImageNet. The results of this ablation experiment show that for real-time segmentation architectures, synthetic data pretraining yields better performance than ImageNet pretraining, and that these improvements are greater than the corresponding gains in larger architectures.
To further validate our hypothesis, we investigate how models pretrained with synthetic data handle noise and scale in robot datasets. Datasets acquired from robots typically have sparse supervision, making it difficult to train a semantic segmentation model that accommodates the increased noise and bias in both the inputs and the labels. We examine the performance of synthetic-data-pretrained models against ImageNet-pretrained models by fine-tuning on various subsets of a robot navigation dataset, quantifying the benefits as the amount of supervised fine-tuning data decreases. Our results show that the synthetic-data-pretrained models outperform the ImageNet-pretrained models for every amount of fine-tuning data, and that the performance improvement increases as the number of robot training samples decreases.
Lastly, we consider whether the improvements due to synthetic data pretraining are caused strictly by similarity between the high-level scenarios of the synthetic pretraining data and the target data. To test this, we use the two standard datasets from the ablation experiment as our target data, where the high-level scenarios are “driving” and “indoor navigation” respectively. For each target dataset we train two models: one pretrained on a dataset designed for a similar high-level scenario, and the other pretrained on data from a different high-level scenario. Our results show that models pretrained on a synthetic dataset similar to the target task do show improvements on the task. However, we also show that there is a greater benefit for the target robot vision task in pretraining on a synthetic dataset that has more input diversity, more coverage, and less bias, regardless of high-level similarity.
Our primary contributions are: (1) Extending the benefits of transferring features from synthetic data pretraining instead of ImageNet pretraining to real-time-optimized segmentation CNNs, (2) Demonstrating that transferring features from synthetic segmentation data helps reduce the amount of target robot data needed for strong real-time segmentation performance, and (3) Exploring the effect of “high-level domain similarity” between datasets, and showing that while synthetic data that has a similar high-level domain does give some improvement to performance, other properties of data, such as scale, diversity, and bias, have a greater effect on performance.
2 Related Work
2.1 Semantic Segmentation in Computer Vision
Recent work on semantic segmentation using CNNs has greatly advanced the subfield, but with limited focus on robot vision. Most approaches use some combination of large convolutions, a strictly serialized layer setup, networks with over one hundred layers, or fully connected layers at the end of the network. As a result, these methods have hundreds of millions of parameters, are slow to train, and evaluate far outside of real time for semantic segmentation. Models that can segment in real time, as robotics requires, are thus sparse in the literature; while there has been some effort toward real-time general vision architectures, they often perform poorly at segmentation. E-Net is the first effort to combine efficiency-oriented design techniques in an architecture specifically built for real-time segmentation. None of these works, however, has examined the efficacy of the above architectures when trained on small, noisy robot vision datasets.
2.2 Segmentation Datasets
Segmentation datasets have become more relevant and abundant recently as the computer vision community has become more interested in segmentation, but they rarely apply to robotics directly and are not large enough to train models without pretraining. Popular segmentation datasets relevant to robotics include datasets for autonomous driving and indoor datasets. Recently, the Robot@Home dataset was published, providing over 30K instance-labeled frames from over 80 sequences of a real robot navigating six unique indoor environments. This is a vast improvement over previously available datasets for robot vision research; however, all of these datasets are still too small to be useful without pretraining and do not represent the complexities of robot vision well.
Synthetic segmentation datasets have become popular recently, but they are still underutilized and do not randomize simulation conditions to increase diversity and remove bias. For autonomous driving, efforts including Shafaei et al., Virtual KITTI, “Driving in the Matrix”, and the GTA dataset all use high-fidelity simulations or video games to efficiently build realistic datasets, but all contain fewer than 50K frames, which is far too small to train robust autonomous vehicle perception systems. The SYNTHIA dataset contains 200K frames captured across eight cameras that form a 360-degree array, leaving only 25K examples for training systems with forward-facing cameras. For indoor environments, Song et al. and Qui et al. created datasets of over 2 million frames from thousands of rooms, but these remain subject to dataset bias by being highly ordered. Taking this idea even further, McCormac et al. created a comprehensive indoor dataset called SceneNet RGB-D, which generates and renders 5 million RGB-D frames sampled from video trajectories through 3D scenes with randomized object compositions, textures, lighting, and camera paths. However, despite the availability of this large-scale simulated data, computer vision researchers and roboticists alike continue to use ImageNet for pretraining, since little evidence exists to properly explain the benefits of using simulated data.
2.3 Real-time Segmentation in Robot Vision
There are some recent efforts catering to real-time semantic segmentation of objects for robotic systems, especially with regard to autonomous driving and grasping; however, the majority of these neither strictly enforce real-time requirements nor utilize synthetic data. James et al. examine the effect of changing the amount of synthetic pretraining data for their grasping task, but the smallest amount of data they consider is 100K images, they do not operate in natural environments, and they are not concerned with real-time performance. Madaan et al. and Lin et al. utilize synthetic data to train a custom real-time CNN for segmentation on a robot; however, each of these works focuses on segmenting a single binary mask. Our work segments entire scenes into many objects, motivated by scenarios of more complex robots like home robots and autonomous cars. Most closely related to our work, McCormac et al. use synthetic image data to improve semantic segmentation for the task of depth-based simultaneous localization and mapping for robots, and go on to demonstrate that the large U-Net architecture pretrained on SceneNet RGB-D outperforms the same architecture pretrained on ImageNet. Our work differs from prior efforts in that it examines whether the use of synthetic data improves a small, real-time architecture, analyzes how synthetic data affects performance as the amount of target task data varies, and examines how high-level similarity between pretraining and target data affects performance.
Closely related to this work is recent work on domain adaptation, including in semantic segmentation. In other task areas, work on domain randomization illustrates how randomization helps with transfer and adaptation for robot learning from vision. Mayer et al. expand further on this notion in learning optical flow, observing that fidelity is not necessarily beneficial to domain adaptation from simulation. In segmentation, the aforementioned SYNTHIA dataset uses diverse simulation data as a means of domain adaptation between different weather conditions in autonomous driving. Hoffman et al. and Zhang et al. take a slightly different tack, exploring methods of building domain adaptation capabilities into the learner itself instead of depending on diverse training data. While these works are related, our work focuses specifically on the challenge of adapting small networks that lack the representational capacity to distinguish signal from dataset bias, and on how carefully selecting pretraining data can alleviate this issue and provide increasing degrees of improvement as fine-tuning data is reduced.
3 Transfer Learning with Synthetic Data as Bias Reduction
Dataset bias is an often-overlooked component of training data-driven computer vision models. Especially in the context of transfer learning, the assumptions made about which distributions are “similar” are often naive, and as a result leave performance gains unrealized. We first examine the distributions being modeled in order to mathematically justify the use of a synthetic dataset over ImageNet.
3.1 Transfer learning and pretraining
The most effective and most commonly utilized method for improving the performance of models trained on small datasets is transfer learning, where one exploits the similarity between two distributions, $P_S$ and $P_T$, by using parameters optimized to represent the source distribution $P_S$ as an improved starting point for learning the target distribution $P_T$.
Transfer learning is most often executed via pretraining: a model is first trained on a task with an input domain similar to the target task, and that model is then used to initialize the network parameters. There are two typical ways of using the pretrained model: either the target task data is used to continue training the entire model from the transferred parameters, or the pretrained parameters are “frozen” except for the inference layers, which are reinitialized randomly, and the target task data is used to optimize only those inference layers.
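The two strategies above can be sketched in PyTorch. This is a minimal illustration, not our released code: the tiny model and the function name `transfer` are stand-ins, and the initialization values are illustrative.

```python
import torch
import torch.nn as nn

# Minimal stand-in for an encoder-decoder segmentation CNN.
class TinySegNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, num_classes, 1)  # "inference" layers

def transfer(model, pretrained_state, freeze_encoder):
    model.encoder.load_state_dict(pretrained_state)  # reuse pretrained features
    for m in model.decoder.modules():                # reinitialize inference layers
        if isinstance(m, nn.Conv2d):
            nn.init.normal_(m.weight, std=0.01)
            nn.init.zeros_(m.bias)
    if freeze_encoder:                               # strategy 2: train decoder only
        for p in model.encoder.parameters():
            p.requires_grad = False
    # return only the parameters that should be optimized
    return [p for p in model.parameters() if p.requires_grad]

src = TinySegNet(num_classes=13)   # stands in for a pretrained model
tgt = TinySegNet(num_classes=37)
trainable = transfer(tgt, src.encoder.state_dict(), freeze_encoder=True)
```

With `freeze_encoder=False`, the same function implements the first strategy, in which the entire model continues training from the transferred parameters.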
In supervised learning, the CNN and its parameters $\theta$ can be thought of as forming an approximate representation of the task likelihood distribution $P(y \mid x; \theta)$. By Bayes' theorem, we know that

$$P(y \mid x) = \frac{P(x \mid y)\,P(y)}{P(x)}, \quad (1)$$

where the posterior distribution $P(x \mid y)$ in this case is the generative task of constructing images given a label as input.
In transfer learning, the assumption is that, for two supervised problems where the input is sampled from all natural images, the divergence between the task likelihood distributions varies with the divergence between the input distributions:

$$D\big(P_S(y \mid x)\,\|\,P_T(y \mid x)\big) \;\propto\; D\big(P_S(x)\,\|\,P_T(x)\big). \quad (2)$$
This assumption is justified by empirical successes in improving a wide range of computer vision problems by first pretraining on ImageNet. These assumptions, however, account for neither dataset bias nor the difference between the label distributions $P_S(y)$ and $P_T(y)$, and as such leave room for improvement.
3.2 The transfer learning gap
The Torralba and Efros study on dataset bias shows that even the most carefully constructed image datasets contain significant and distinct enough bias that a simple linear discriminative model can distinguish between them based on their inputs alone. The study concludes that dataset bias, specifically input bias, comes in three main forms: selection bias (images selected manually inherently have more bias than those obtained randomly or automatically), capture bias (image content is curated, e.g., photographs taken by people, with objects most often photographed from specific angles), and negative set bias (datasets collect only items of interest, yielding models that do not sufficiently represent negative cases). (Negative set bias is less of a concern in segmentation, because pixels are conditionally dependent on their neighborhood, and in general the more classes a model must predict, the more negative examples each class has.) Additionally, one bias that Torralba and Efros do not mention is annotation bias, the bias from errors in human annotation; this is especially important in segmentation because annotation errors near object boundaries strongly affect prediction. Standard computer vision datasets like ImageNet have inputs that are carefully selected, sampled from biased Web images that were captured and chosen by the humans who took them, and annotated by humans.
Robot vision datasets, on the other hand, are mostly uncurated and noisy, and their human annotators bias segmentation annotations differently than the annotations of an image classification task like ImageNet. We can assume, therefore, that in transferring features from a dataset like ImageNet there exist differences in the biases of the inputs and annotations that cannot be closed trivially. These three types of input bias, together with annotation bias, comprise the transfer learning gap between two datasets, which we will call $G$.
Synthetic image datasets created for the same task as the target task can reduce $G$ in transfer learning. With camera angles and lighting generated randomly, these datasets have virtually no selection or capture bias, except for the capture bias in the construction and arrangement of the simulated scene, which can be further reduced by introducing randomness in scene initialization. Simulation also has the benefit of generating perfect annotations for free, with no annotation bias. Intuitively, one might guess that transferring from the same task domain will improve the effectiveness of pretraining. We can justify this intuition by inspecting the decomposition in Equation 1: while the distribution $P(x)$ is difficult to observe, rewriting Equation 1 for a known image $x$ shows that the target task distribution is proportional to the product of its label distribution and its generative posterior:

$$P(y \mid x) \;\propto\; P(x \mid y)\,P(y). \quad (3)$$
As a result, by attempting to match the target task as closely as possible in simulation, we can reduce $G$.
3.3 Improving small models with simulated data
Given that synthetic datasets can reduce these four biases, we can confidently predict that the transfer gap from well-constructed synthetic data is smaller than that from ImageNet, i.e., $G_{syn} < G_{ImageNet}$. However, simulations come with their own additional bias, referred to by the research community as the “Sim2Real” or reality gap, which we denote $B_{sim}$. Notably, no matter their fidelity and realism, simulations will always struggle to properly model sensor noise, imperfections, and complex physical phenomena. This bias manifests as a lack of what Zhou et al. call “coverage” in the data, defined as a “quasi-exhaustive representation of the classes and variety of exemplars”. Good coverage in synthetic datasets is generally achieved with a wide range of random camera angles, lighting, textures, object arrangements, and added noise. We hypothesize that as long as the dataset compensates for the reality gap by giving sufficient coverage of the input distribution,

$$G_{syn} + B_{sim} \;<\; G_{ImageNet}, \quad (4)$$

and as a result CNN pretraining with synthetic data will still be beneficial.
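The kind of randomization that improves coverage can be illustrated with a short sketch. This is purely illustrative (it is not the SceneNet RGB-D generator, and the parameter ranges are assumptions): each rendered frame samples camera pose, lighting, and per-object texture assignments independently.

```python
import random

# Illustrative sampler for per-frame simulation conditions. Ranges and
# parameter names are hypothetical, chosen to show independent randomization
# of camera, lighting, and textures for coverage.
def sample_frame_config(num_objects, rng):
    return {
        "camera": {
            "yaw_deg": rng.uniform(0, 360),
            "pitch_deg": rng.uniform(-30, 30),
            "height_m": rng.uniform(0.3, 2.0),
        },
        "light_intensity": rng.uniform(0.2, 1.5),
        # each object draws its texture index independently
        "textures": [rng.randrange(1000) for _ in range(num_objects)],
    }

rng = random.Random(42)
cfg = sample_frame_config(5, rng)
```

Because every factor varies independently across frames, no single camera angle or lighting condition dominates the dataset, which is what removes selection and capture bias.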
The shortcomings of transfer learning affect all CNNs, but not equally. During pretraining, if bias exists in the pretraining dataset, large models with many parameters have the capacity to model both the underlying distribution and the bias. Small models do not have the capacity to model both, and over time will tend toward learning the bias because it yields the lowest loss. As a result, large models transfer the underlying distribution during fine-tuning and can ignore pretraining bias, while small models must unlearn the effects of the pretraining bias. Therefore, we hypothesize that for transfer learning, pretraining on data with less bias relative to the target data will show greater performance improvement for small models. Comparing our results to those of McCormac et al. validates this hypothesis.
We conducted three experiments to validate that Equation 4 holds for small CNNs trained with sparsely supervised robot perception data: (1) An ablation experiment to demonstrate that using synthetic data for pretraining (compared to ImageNet) improves real-time models on standard semantic segmentation datasets, (2) a data withholding experiment to compare the models pretrained with synthetic data to the models pretrained with ImageNet by measuring their performance on a held out set of robot perception data after being fine-tuned using increasingly restricted amounts of robot perception data, and (3) a high-level similarity experiment to demonstrate that the high-level task similarity between pretraining and fine-tuning datasets has only a minor effect on model performance compared to the effects of bias reduction discussed in Section 3.
In this section, we describe our selection of a synthetic dataset that meets the requirements outlined in Section 3.2; of standard datasets relevant to robotics, preferably one for autonomous driving and one for an indoor scenario, as those are two domains in which robots can benefit from segmentation; of a robot dataset for the second experiment; and of a semantic segmentation CNN architecture that can run in real time on a robot.
4.1 Dataset Selection
For our standard segmentation datasets, we use the SUN RGB-D indoor dataset and the Cityscapes autonomous driving dataset. We selected SUN RGB-D because it has 37 challenging semantic classes and is one of the largest real semantic segmentation datasets for indoor environments. We selected Cityscapes because it is the most recent real autonomous driving dataset, has 19 semantic classes (more than most driving datasets), and provides 5K frames with “fine” annotations from a series of driving videos.
For our primary synthetic pretraining dataset, we used SceneNet RGB-D. SceneNet RGB-D has over 5 million photorealistic training images sampled from randomized smooth trajectories through 16K room configurations. Each unique room configuration has a random set of contextually relevant objects initialized with random pose and texture, and random lighting, and all frames are intentionally perturbed with realistic noise. Moreover, the dataset is labeled instance-wise, so we opted to map the objects to 13 classes, which made the dataset more adaptable for pretraining and kept our setup consistent with the experiments of McCormac et al. The diversity of the data makes this dataset especially well suited for large-scale pretraining.
Lastly, for our robot data we use the Robot@Home dataset. Created with the intention of semantic mapping by a household robot, Robot@Home was collected over four years by recording 81 video sequences of a robot driving around 36 unique, unstructured human spaces. The robot was equipped with four Primesense RGB-D cameras for recording the visual frames and a 2D laser scanner to improve mapping capabilities. Object instance segmentation labels are provided for 32,937 frames across 72 sequences and, as with SceneNet RGB-D, mappings to standard object segmentation class labels are available. We chose the mapping to the original 41 SUN categories to increase the difficulty of this last experiment.
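Remapping instance-wise labels to a small semantic class set, as done for both SceneNet RGB-D and Robot@Home above, can be implemented with a lookup table. The instance-to-class table below is hypothetical; the real mappings ship with the datasets.

```python
import numpy as np

# Hypothetical instance-id -> semantic-class-id table (real mappings come
# from the dataset metadata).
INSTANCE_TO_CLASS = {0: 0, 7: 3, 12: 5}

def remap(instance_map, table, ignore=255):
    # Build a lookup table covering every instance id; unmapped ids get
    # the ignore label so they can be masked out of the loss.
    lut = np.full(max(table) + 1, ignore, dtype=np.uint8)
    for inst, cls in table.items():
        lut[inst] = cls
    return lut[instance_map]   # vectorized remap of the whole label image

inst = np.array([[0, 7], [12, 7]])
sem = remap(inst, INSTANCE_TO_CLASS)   # semantic label map
```

The vectorized table lookup keeps remapping cheap enough to run on the fly inside a data loader.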
4.2 Model Architecture
The architecture best suited to our task is E-Net. E-Net is an encoder-decoder style CNN composed mainly of “bottleneck” modules, which combine three convolutions and a skip connection. The E-Net design combines an initial network-in-network module with quick downsampling to strike a compromise between reducing the number of parameters in the network and preserving the representational power of low-level features from layers close to the original image resolution. The remainder of the network is 16 bottleneck modules for the encoder and 5 such modules for the decoder. Unlike most segmentation networks, in which the decoder mirrors the encoder and consumes a large share of the parameters, E-Net has an asymmetric encoder-decoder design in which the encoder is the main feature extractor and the decoder is kept small. Our implementation of E-Net runs very efficiently: over 56 frames per second on average for a 256x256 input on a single NVIDIA GTX 1080Ti GPU.
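The bottleneck module described above can be sketched as follows. This is a simplified stand-in based on the three-convolutions-plus-skip description, not the exact E-Net module (which also includes dilated and asymmetric convolution variants and regularization we omit here); the reduction factor and activations are illustrative.

```python
import torch
import torch.nn as nn

# Simplified E-Net-style bottleneck: project down, main convolution,
# expand back up, then add the skip connection.
class Bottleneck(nn.Module):
    def __init__(self, channels, reduce=4):
        super().__init__()
        mid = channels // reduce
        self.branch = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),       # 1x1 projection
            nn.BatchNorm2d(mid), nn.PReLU(),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False), # main 3x3 conv
            nn.BatchNorm2d(mid), nn.PReLU(),
            nn.Conv2d(mid, channels, 1, bias=False),       # 1x1 expansion
            nn.BatchNorm2d(channels))
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.branch(x) + x)  # residual skip connection

x = torch.randn(1, 64, 32, 32)
y = Bottleneck(64)(x)
```

Doing most of the work at the reduced width `mid` is what keeps the parameter count and per-frame operation count low enough for real-time use.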
5 Experiments and Results
We compare models pretrained in two different ways. In the first case, a model is pretrained on an image classification task using the ImageNet dataset. Because ImageNet is an image classification dataset, it is used only to pretrain the encoder, which is standard practice for transfer learning from classification to segmentation. The model is then fine-tuned on the target dataset with randomly initialized output layers, which in the case of segmentation is the decoder. In the second case, a model is pretrained end-to-end on a semantic segmentation task using the SceneNet RGB-D dataset, and then the entire model is fine-tuned on the target dataset.
[Table: performance difference between the two pretrained models for different amounts of fine-tuning data, given as percentages of the full Robot@Home fine-tuning training set (number of examples); Robot@Home mIoU evaluation across dataset sizes.]
5.1 Implementation Details
We implemented E-Net using the PyTorch framework. (All code and models can be found in our GitHub repository.) The network is trained using negative log-likelihood loss on a softmax output and optimized using the adaptive gradient descent algorithm Adam. A brief hyper-parameter search over the initial learning rate found a single value that worked well for both training from scratch and fine-tuning. For the other hyper-parameters, we used the values suggested by Kingma et al. We also experimented with mini-batch sizes and found the results fairly similar, with 128 converging most efficiently for the pretraining datasets; the real, non-ImageNet datasets were trained with a batch size of 32. For weight initialization we randomly sample from a Gaussian distribution. All images were scaled to a resolution of 256x256 for our experiments. Training was performed on NVIDIA K40 Quadro, NVIDIA TITAN X, and NVIDIA GTX 1080Ti GPUs.
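The objective above (a log-softmax over class scores with negative log-likelihood loss, optimized with Adam) reduces to a few lines of PyTorch. The tiny model, random data, and learning rate here are stand-ins for E-Net and the real datasets:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(3, 13, 1)                          # stand-in segmentation head
opt = torch.optim.Adam(model.parameters(), lr=5e-4)  # lr value illustrative

images = torch.randn(4, 3, 32, 32)                   # fake (N, 3, H, W) batch
labels = torch.randint(0, 13, (4, 32, 32))           # fake per-pixel labels

for _ in range(3):                                   # a few toy steps
    opt.zero_grad()
    logits = model(images)                           # (N, C, H, W) class scores
    # NLL on log-softmax == per-pixel cross-entropy
    loss = F.nll_loss(F.log_softmax(logits, dim=1), labels)
    loss.backward()
    opt.step()
```

`F.nll_loss` on `F.log_softmax` is numerically equivalent to `F.cross_entropy`, applied here per pixel over the spatial dimensions.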
In each experiment, the final model was evaluated on three metrics standard to semantic segmentation: pixel accuracy, mean accuracy, and mean Intersection over Union (mIoU). For $K$ total classes, let $n_{ij}$ be the number of pixels of label class $i$ that are predicted to be in class $j$. Pixel accuracy measures the ratio of correctly predicted pixels to all labeled pixels; this is a good indicator of how well the segmentation did relative to random chance. Mean accuracy measures the average, across classes, of the ratio of correctly predicted pixels to the total number of pixels in a label class; this measures the accuracy of the assignment over all classes. Lastly, mean IoU measures the average, across classes, of the ratio of correctly predicted pixels to the total number of pixels in the label class plus the number of pixels predicted as that class that were not correctly classified. This is the most stringent measurement and the best indicator of model performance in practice.
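All three metrics follow directly from the confusion matrix $n_{ij}$ defined above; a minimal numpy sketch (function names are ours, not from any particular library):

```python
import numpy as np

def confusion(pred, label, num_classes):
    # n[i, j] = number of pixels of label class i predicted as class j
    idx = num_classes * label.ravel() + pred.ravel()
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def metrics(n):
    tp = np.diag(n).astype(float)       # correctly predicted pixels per class
    per_class = n.sum(axis=1)           # pixels in each label class
    predicted = n.sum(axis=0)           # pixels in each predicted class
    pixel_acc = tp.sum() / n.sum()
    mean_acc = np.mean(tp / np.maximum(per_class, 1))
    # IoU denominator: label pixels + predicted pixels - intersection
    miou = np.mean(tp / np.maximum(per_class + predicted - tp, 1))
    return pixel_acc, mean_acc, miou

label = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
pa, ma, miou = metrics(confusion(pred, label, 2))
```

The `np.maximum(..., 1)` guards against division by zero for classes absent from a given evaluation split.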
5.2 Ablation Experiment
In this experiment, we used each of the standard datasets, SUN RGB-D and Cityscapes, to train a model from scratch, to fine-tune over ImageNet pretraining, and to fine-tune over SceneNet RGB-D pretraining. The from-scratch models were trained on the target training data alone, and for consistency with standard research practice the data was augmented with resizing and horizontal flipping in all three training scenarios. For Cityscapes, the predesignated train-val-test splits on the “fine” annotations were used. For SUN RGB-D, which does not have predetermined splits, the 10,335 images were randomly split into train, validation, and test sets. At a batch size of 32, training converged in roughly 50 epochs with no loss of validation performance (i.e., with early stopping).
For the fine-tuning process over ImageNet, the decoder was initialized in the same manner as the network when training from scratch, and all datasets were trained upon with validation monitoring for early stopping (around 30 epochs). For the fine-tuning process over SceneNet RGB-D, only the decoder was trained with validation showing the models converging at roughly 30 epochs. Mirroring the process of the from-scratch training, SUN RGB-D and Cityscapes were used to fine tune both pretrained models at a batch size of 32, validating after each epoch with early stopping.
Tables 3-3 show the results of the ablation experiment. For fine-tuning and testing on Cityscapes, the SceneNet RGB-D pretrained model outperforms the ImageNet pretrained model by in pixel accuracy, in mean accuracy, and in mean IoU. For fine-tuning and testing on SUN RGB-D, the SceneNet RGB-D pretrained model outperforms the ImageNet pretrained model by in pixel accuracy and in mean IoU. For SUN RGB-D, the ImageNet pretrained model outperformed the SceneNet RGB-D pretrained model by in mean accuracy; however, models that perform well in mean accuracy but not in mean IoU tend to over-fit to a subset of the most heavily represented classes in the dataset, which is consistent with our hypothesis that a small model will have a harder time training over the biases of ImageNet. These results validate the hypothesis that pretraining on synthetic data remains a viable approach for architectures like E-Net that have far fewer parameters than typical segmentation networks. Furthermore, this confirms the findings of McCormac et al. and demonstrates that their conclusions extend beyond large-parameter networks. Interestingly, the ablation results shown here are more dramatic than those of McCormac et al., despite the fact that they pretrained for longer on a larger model. This validates our hypothesis from Section 3.3 that transfer learning from synthetic data is more effective for smaller networks.
5.3 Robot@Home Dataset Experiment
Like SUN RGB-D, the Robot@Home dataset does not have predetermined splits, so, following our method from the ablation study, the 32,937 images were randomly split into train, validation, and test sets.
The purpose of this experiment is to examine how the efficacy of transfer learning changes when fine-tuning on robot datasets of various sizes. To examine the effects, we down-sample the training split into a series of smaller training sets. Specifically, keeping the validation and test sets untouched and unchanged, we create additional training datasets at eleven progressively smaller fractions of the original 22,946-example training set (see Table 4 for details). With these sub-sampled fine-tuning datasets, an ImageNet pretrained model and a SceneNet RGB-D pretrained model are fine-tuned for each of the 12 training datasets.
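The subset construction above can be sketched in a few lines. One plausible design, assumed here rather than specified by the experiment, is to make the subsets nested (each smaller set contained in the larger ones) so that performance differences come from dataset size rather than sample choice; the fractions shown are illustrative.

```python
import random

def make_subsets(train_ids, fractions, seed=0):
    # Shuffle once, then take prefixes: smaller subsets are nested inside
    # larger ones. Validation and test splits are never touched.
    rng = random.Random(seed)
    shuffled = train_ids[:]
    rng.shuffle(shuffled)
    return {f: shuffled[:max(1, int(len(shuffled) * f))] for f in fractions}

# 22,946 training examples, as in the Robot@Home training split above.
subsets = make_subsets(list(range(22946)), [1.0, 0.5, 0.1, 0.01])
```

Fixing the shuffle seed makes every pretraining condition see exactly the same fine-tuning examples at each fraction.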
For the Robot@Home dataset, the evaluations for the two types of models show very interesting results. In Figure 2 we show the mean IoU of the SceneNet RGB-D and ImageNet models as a function of the fraction of the dataset they were trained on. SceneNet's best model, trained on the full set of training data, outperformed ImageNet's best model by 15.6% in mean IoU. The model pretrained with SceneNet outperforms ImageNet for every data subdivision; even more interestingly, in most cases the SceneNet pretrained model requires substantially less fine-tuning data than the ImageNet model to match its performance. This is an especially meaningful result: roboticists weighing the time and monetary cost of acquiring and labeling more data may want to first consider sampling data from a simulation before expensively collecting more real-world data.
5.4 High-Level Similarity Experiment
To explore the effects of high-level similarity between pretraining and target-task datasets, the third experiment compares models pretrained on different synthetic datasets and evaluated on real segmentation datasets. We ran a four-way cross comparison to test whether high-level domain similarity between two datasets impacts training, covering indoor navigation and autonomous driving datasets for both synthetic and real data.
For this experiment, the GTA dataset is used as the autonomous driving pretraining data. This dataset has 25K densely annotated frames sampled from the video game Grand Theft Auto (GTA); while 25K is small for a pretraining dataset, it was sufficient for the purpose of this experiment. To make a more apt comparison, we sub-sampled a 25K training set from SceneNet RGB-D, which we refer to as SceneNet RGB-D (25K); this was used as the indoor navigation pretraining dataset. The four-way cross comparison was therefore:
SUN RGB-D pretrained on SceneNet RGB-D (25K) (similar)
Cityscapes pretrained on GTA (similar)
SUN RGB-D pretrained on GTA (not similar)
Cityscapes pretrained on SceneNet RGB-D (25K) (not similar)
[Table fragment: SceneNet RGB-D (25K) | 0.379 | 0.193]
When comparing the models pretrained on the 25K synthetic image datasets, the “high-level domain similarity” pairs (Cityscapes trained over the GTA dataset and SUN RGB-D trained over SceneNet RGB-D (25K)) achieved mIoU scores of and respectively. It is worth noting that even though datasets of this size are typically considered far too small for pretraining, these mean IoU scores are greater than those achieved by the Cityscapes and SUN RGB-D models pretrained on ImageNet. For the other two cases, Cityscapes trained over SceneNet RGB-D (25K) achieved mIoU and SUN RGB-D trained over GTA achieved 0.205 mIoU. The results of this experiment reinforce our hypothesis that pretraining data with high-level similarity has some positive effect on performance.
It is worth noting that the SUN RGB-D model trained over GTA performed better even though the dataset domains are semantically less similar, which indicates that high-level domain similarity may not help in all cases. These results show that for two synthetic pretraining datasets of the same size from different semantic domains, models may perform better if they are pretrained on data that is similar to their goal domain. However, neither 25K-frame dataset gave better performance in this experiment than the Cityscapes and SUN RGB-D models trained over the full SceneNet RGB-D training set, as can be seen in Table 5, which is further consistent with our hypothesis that, for two synthetic datasets that address the same task and sample their input from the same domain, a separate factor, in this case dataset size, dominates the other, smaller differences that affect the transfer learning gap.
In this work, we investigated the potential gains of using synthetic data to augment the training process of small CNNs designed for real-time semantic segmentation in robots with small target training sets. We compared the improvements afforded by synthetic data to those from traditional data augmentation and from transfer learning based on image classification. The performance gains from pretraining with synthetic data indicate that the degree to which this closes the transfer gap for these models is reliably greater than the bias introduced by the “Sim2Real” problem. We also documented how the improvements to real-time semantic segmentation models evolve as access to real data decreases, and showed that as dataset size decreases, the improvements from using task-similar synthetic data grow exponentially compared to ImageNet pretraining.
We are currently considering how this technique might be combined with other solutions to semi-supervised or weakly supervised problems, and whether there are other effective ways to use simulation to improve real-time segmentation, potentially as a feedback signal for model architecture search, or, even more interestingly, as an oracle in an active vision or lifelong learning agent.
-  K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, et al. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. arXiv preprint arXiv:1709.07857, 2017.
-  G. J. Brostow, J. Fauqueur, and R. Cipolla. Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters, 30(2):88–97, 2009.
-  A. Canziani, A. Paszke, and E. Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
-  R. Detry, J. Papon, and L. H. Matthies. Task-oriented grasping with semantic and geometric scene understanding. 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017.
-  A. Gaidon, Q. Wang, Y. Cabon, and E. Vig. Virtual worlds as proxy for multi-object tracking analysis. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pages 4340–4349, 2016.
-  A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, and J. G. Rodríguez. A review on deep learning techniques applied to semantic segmentation. CoRR, abs/1704.06857, 2017.
-  A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Computer Vision and Pattern Recognition (CVPR), IEEE Conf. on. IEEE, 2012.
-  I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
-  Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada. Mfnet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pages 5108–5115, 2017.
-  J. Hoffman, D. Wang, F. Yu, and T. Darrell. Fcns in the wild: Pixel-level adversarial and constraint-based adaptation. CoRR, abs/1612.02649, 2016.
-  S. James, A. J. Davison, and E. Johns. Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. In S. Levine, V. Vanhoucke, and K. Goldberg, editors, Proc. of the 1st Annual Conf. on Robot Learning, Proc. of Machine Learning Research, pages 334–343, 2017.
-  A. Janoch. The berkeley 3d object dataset. Master’s thesis, EECS Department, UC Berkeley, 2012.
-  M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen, and R. Vasudevan. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? In IEEE Int. Conf. on Robotics and Automation, pages 1–8, 2017.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  I. Kostavelis and A. Gasteratos. Semantic mapping for mobile robotics tasks: A survey. Robotics and Autonomous Systems, 66:86–103, 2015.
-  J. Lin, W.-J. Wang, S.-K. Huang, and H.-C. Chen. Learning based semantic segmentation for robot navigation in outdoor environment. In 9th Int. Conf. on Soft Computing and Intelligent Systems (IFSA-SCIS), pages 1–5. IEEE, 2017.
-  R. Madaan, D. Maturana, and S. Scherer. Wire detection using synthetic data and dilated convolutional networks for unmanned aerial vehicles. 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pages 3487–3494, 2017.
-  N. Mayer, E. Ilg, P. Fischer, C. Hazirbas, D. Cremers, A. Dosovitskiy, and T. Brox. What makes good synthetic training data for learning disparity and optical flow estimation? International Journal of Computer Vision, pages 1–19.
-  J. McCormac, A. Handa, A. Davison, and S. Leutenegger. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks. arXiv preprint arXiv:1609.05130, 2016.
-  J. McCormac, A. Handa, S. Leutenegger, and A. Davison. Scenenet rgb-d: Can 5m synthetic images beat generic imagenet pre-training on indoor segmentation? In Proc. of IEEE Int. Conf. on Computer Vision. IEEE, 2017.
-  J. McCormac, A. Handa, S. Leutenegger, and A. J. Davison. Scenenet rgb-d: 5m photorealistic images of synthetic indoor trajectories with ground truth. arXiv preprint arXiv:1612.05079, 2016.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Proc of IEEE Conf. on Computer Vision and Pattern Recognition, pages 1717–1724, 2014.
-  A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
-  W. Qiu and A. Yuille. Unrealcv: Connecting computer vision to unreal engine. In ECCV 2016 Workshops. Springer, 2016.
-  M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conf. on Computer Vision. Springer, 2016.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In European Conf. on Computer Vision, pages 102–118. Springer, 2016.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pages 3234–3243, 2016.
-  J. Ruiz-Sarmiento, C. Galindo, and J. González-Jiménez. Robot@ home, a robotic dataset for semantic mapping of home environments. The Int. Journal of Robotics Research, 36(2), 2017.
-  A. Shafaei, J. J. Little, and M. Schmidt. Play and learn: Using video games to train computer vision models. arXiv preprint arXiv:1608.01745, 2016.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
-  S. Song, S. P. Lichtenberg, and J. Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In Proc. of IEEE Conf. on computer vision and pattern recognition, pages 567–576, 2015.
-  S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic scene completion from a single depth image. IEEE Conf. on Computer Vision and Pattern Recognition, 2017.
-  M. Teichmann, M. Weber, M. Zoellner, R. Cipolla, and R. Urtasun. Multinet: Real-time joint semantic reasoning for autonomous driving. arXiv preprint arXiv:1612.07695, 2016.
-  J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. CoRR, abs/1703.06907, 2017.
-  A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conf. on, pages 1521–1528. IEEE, 2011.
-  J. Vertens, A. Valada, and W. Burgard. Smsnet: Semantic motion segmentation using deep convolutional neural networks. 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017.
-  B. Wu, F. Iandola, P. H. Jin, and K. Keutzer. Squeezedet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In Computer Vision and Pattern Recognition Workshops, pages 446–454. IEEE, 2017.
-  J. Xiao, A. Owens, and A. Torralba. Sun3d: A database of big spaces reconstructed using sfm and object labels. In Proc. of the IEEE Int. Conf. on Computer Vision, pages 1625–1632, 2013.
-  J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320–3328, 2014.
-  Y. Zhang, P. David, and B. Gong. Curriculum domain adaptation for semantic segmentation of urban scenes. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2039–2049. IEEE, 2017.
-  Z. Zhang, S. Fidler, and R. Urtasun. Instance-level segmentation for autonomous driving with deep densely connected mrfs. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 2016.
-  B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.