Deep neural networks have achieved remarkable success on visual recognition tasks [10, 12]. However, it remains very challenging for deep networks to generalize to a different domain whose data distribution is not identical to that of the original training data. This problem is known as dataset bias or domain shift. For example, to guarantee safety in autonomous driving, the perception model is required to perform well under all conditions, such as sunny, night, rainy, and snowy. However, even top-grade object detectors still face significant challenges when deployed in such varying real-world settings. Although collecting and annotating more data from unseen domains can help, it is prohibitively expensive, laborious and time-consuming. Another appealing application is to adapt from synthetic data to real data, which can save considerable annotation cost and time. However, current object detectors trained with synthetic data can rarely generalize to real data due to a significant domain distribution gap [28, 38, 40].
Adversarial domain adaptation has emerged as a promising approach for learning transferable representations across domains. It has achieved noticeable progress in various machine learning tasks, from image classification [23, 24, 27] and semantic segmentation [38, 41, 48] to object detection [35, 47, 19, 29, 40]. According to Ben-David’s theory, the empirical risk on the target domain is bounded by the source domain risk plus the domain divergence. Adversarial adaptation aims to learn domain-invariant representations that reduce this divergence, which in turn decreases the upper bound of the empirical error on the target domain.
However, existing adversarial adaptation methods still suffer from several problems. First, previous methods [4, 8, 40] directly feed semantic features into a domain discriminator to conduct domain confusion learning. But the semantic features contain both image content and domain attribute information, so it is difficult to make the discriminator focus only on removing domain-specific information without inducing undesirable influence on the image content. Second, existing adversarial adaptation methods [4, 8, 40] apply domain confusion learning at one or a few convolution stages to handle the distribution mismatch, which ignores the differences of domain shifts at various representation levels. For example, the first few convolution layers’ features mainly convey low-level information about local patterns, while the higher convolution layers’ features capture more abstract global patterns with semantics. Such differences, inherent to deep convolutional neural networks, naturally lead to different types of domain shift at various convolution stages.
Motivated by this, we propose Conditional Domain Normalization (CDN) to embed inputs from different domains into a shared latent space, where the features of all domains carry the same domain attribute information. Specifically, CDN utilizes a domain embedding module to learn a domain-vector characterizing the domain attribute information, by disentangling the domain attribute from the semantic features of the domain inputs. We use this domain-vector to encode the semantic features of another domain’s inputs via a conditional normalization, so that features from different domains carry the same domain attribute information. We adopt CDN at various convolution stages to address different types of domain shift adaptively. Experiments on both real-to-real and synthetic-to-real adaptation benchmarks demonstrate that our method outperforms state-of-the-art adaptation methods. To summarize, our contributions are threefold: (1) We propose Conditional Domain Normalization to bridge the domain distribution gap by embedding different domain inputs into a shared latent space, where the features from different domains carry the same domain attribute. (2) CDN achieves state-of-the-art unsupervised domain adaptation performance on both real-to-real and synthetic-to-real benchmarks, including 2D image and 3D point cloud detection tasks, and we conduct both quantitative and qualitative comparisons to analyze the features learned by CDN. (3) We construct a large-scale synthetic-to-real driving benchmark for 2D object detection, built from a variety of public datasets. The dataset and code will be released to facilitate future research.
2 Related work
2D and 3D Object Detection
are central topics in computer vision, crucial and indispensable for many real-world applications such as autonomous driving. In 2D detection, following the pioneering work of RCNN, a number of object detection frameworks based on convolutional networks have been developed, like Faster R-CNN  and Mask R-CNN , which significantly push forward the state of the art. In 3D detection, spanning from detecting 3D objects from 2D images  to directly generating 3D boxes from point clouds [31, 30, 39], abundant works have been successfully explored. All these 2D and 3D object detectors have achieved remarkable success on one or a few specific public datasets. However, even top-grade object detectors still face significant challenges when deployed in real-world settings. The difficulties usually arise from changes in environmental conditions.
Domain Adaptation for Detection Domain adaptation generalizes a model across different domains, and it has been extensively explored in various machine learning tasks, spanning from image classification [2, 8, 42, 24, 27] and semantic segmentation [14, 22, 41, 38, 48] to reinforcement learning [40, 29, 19]. For 2D detection adaptation,  proposes a weakly-supervised detection framework via generating pseudo-labels. Recent domain confusion learning via a domain discriminator has achieved noticeable progress in cross-domain detection.  incorporates a gradient reversal layer  into a Faster R-CNN model.  proposes domain diversification to learn domain-invariant representations. [35, 47] adopt domain confusion learning at both global and local levels to align source and target distributions. Our method exhibits two differences compared with existing adversarial adaptation approaches. First, in contrast to existing methods that conduct domain confusion learning directly on semantic features, we explicitly disentangle the domain attribute from the semantic features, and this domain attribute is used to encode the other domain’s features, so that inputs from different domains share the same domain attribute in the feature space. Second, different from previous methods that only align features at one or a few convolution stages, we adopt CDN to adaptively address domain shifts at various representation levels.
For 3D detection adaptation, only a few works [16, 46] have explored adapting object detectors across different point cloud datasets. Different from existing works [16, 46], which are specifically designed for point cloud data, our proposed CDN is a general adaptation framework that adapts both 2D image and 3D point cloud object detectors through conditional domain normalization.
Conditional Normalization is a technique to modulate neural activations using a transformation that depends on external data. It has been successfully used in generative models and style transfer, like conditional batch normalization, adaptive instance normalization (AdaIN)  and spatially-adaptive normalization .  proposes AdaIN to control the global style of the synthesized image.  adopts a spatially-varying transformation, making it suitable for image synthesis from semantic masks. Inspired by these works, we propose Conditional Domain Normalization (CDN) to modulate one domain’s inputs conditioned on another domain’s attribute information. But our method exhibits significant differences from style transfer works: style transfer modifies a content image conditioned on another style image, which is a conditional instance normalization by nature, whereas CDN modulates one domain’s features conditioned on the domain embedding learned from another domain’s inputs (a group of images), which is more like a domain-to-domain translation. Hence we use different types of conditional normalization to achieve different goals.
We first introduce the general unsupervised domain adaptation approach in Section 3.1. Then we present the proposed Conditional Domain Normalization (CDN) in Section 3.2. Finally, we adapt object detectors with CDN in Section 3.3.
3.1 General Adversarial Adaptation Framework
Given source images and labels $(x_s, y_s)$ drawn from the source domain $\mathcal{D}_s$, and target images $x_t$ from the target domain $\mathcal{D}_t$, the goal of unsupervised domain adaptation is to find a function $f$ that minimizes the empirical error on the target data. For the object detection task, $f$ can be decomposed as $f = h \circ g$, where $g$ represents a feature extractor network and $h$ denotes a bounding box head network. Adversarial domain adaptation introduces a discriminator network $D$ that tries to determine the domain labels of the feature maps generated by $g$. At the same time, the backbone $g$ is optimized to maximize the probability of $D$ making mistakes. Through this two-player min-max game, the final $g$ converges to extract features that are indistinguishable for $D$, and thus domain-invariant representations are learned. The overall training objective is to minimize the detection loss, which consists of a classification loss and a regression loss, and to min-max the adversarial loss of the discriminator network $D$.
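The min-max game above can be sketched with a simple binary cross-entropy formulation. The following is a minimal NumPy illustration (function names are our own, not from the paper's implementation): the discriminator minimizes `d_loss`, while the feature extractor minimizes `g_loss`, which uses flipped labels for the target features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_src_logits, d_tgt_logits):
    """Losses for the two-player min-max game.

    The discriminator D classifies source features as 1 and target as 0
    (minimizing d_loss); the feature extractor G tries to fool D by making
    target features look like source ones (minimizing g_loss).
    """
    eps = 1e-8
    p_src = sigmoid(d_src_logits)
    p_tgt = sigmoid(d_tgt_logits)
    d_loss = -np.mean(np.log(p_src + eps)) - np.mean(np.log(1.0 - p_tgt + eps))
    g_loss = -np.mean(np.log(p_tgt + eps))  # flipped label for target features
    return d_loss, g_loss
```

In practice this is often implemented with a gradient reversal layer, so that a single backward pass optimizes both players.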
3.2 Conditional Domain Normalization
Conditional Domain Normalization is designed to embed source and target domain inputs into a shared latent space, where the semantic features from different domains carry the same domain attribute information. Formally, let $F_s$ and $F_t$ represent the feature maps of the source and target inputs, respectively, with $C$ the channel dimension and $N$ the mini-batch size. We first learn a domain embedding vector $e_s$ to characterize the domain attribute of the source inputs. This is accomplished by a domain embedding network $E$, parameterized by two fully-connected layers with ReLU non-linearity, as $e_s = E(z_s)$, where $z_s$ represents the channel-wise statistics of the source features generated by global average pooling.
To embed both source and target domain inputs into a shared latent space, where source and target features carry the same domain attribute while preserving their individual image content, we encode the target features with the source domain embedding via an affine transformation: $\hat{F}_t = \gamma_s \cdot \frac{F_t - \mu_t}{\sigma_t} + \beta_s$, where $\mu_t$ and $\sigma_t^2$ denote the mean and variance of the target features. The affine parameters $(\gamma_s, \beta_s)$ are produced by learned functions conditioned on the source domain embedding vector $e_s$.
For the target feature mean $\mu_t$ and variance $\sigma_t^2$, we calculate them with standard batch normalization statistics: $\mu_t^{(c)} = \frac{1}{NHW}\sum_{n,h,w} F_t^{(n,c,h,w)}$ and $\sigma_t^{2,(c)} = \frac{1}{NHW}\sum_{n,h,w} \big(F_t^{(n,c,h,w)} - \mu_t^{(c)}\big)^2$, where $\mu_t^{(c)}$ and $\sigma_t^{(c)}$ denote the $c$-th channel of $\mu_t$ and $\sigma_t$. Finally, we have a discriminator to supervise the encoding process of the domain attribute, taking as inputs the source features $F_s$ and the encoded target features $\hat{F}_t$ generated by CDN.
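The CDN forward pass can be summarized in a short NumPy sketch. This is a simplified illustration under our own naming conventions (a dictionary of linear-layer weights stands in for the learned networks); the actual method is trained end-to-end with the detector and discriminator.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def cdn_forward(f_src, f_tgt, params, eps=1e-5):
    """Conditional Domain Normalization, forward pass (sketch).

    f_src, f_tgt: feature maps of shape (N, C, H, W).
    params: weights of the domain-embedding MLP (w1, b1, w2, b2) and of the
            linear maps producing the affine terms (wg, bg, wb, bb).
    Returns the target features re-encoded with the source domain attribute.
    """
    # Channel-wise statistics of the source features via global average
    # pooling, averaged over the mini-batch -> one C-dim vector.
    z_src = f_src.mean(axis=(0, 2, 3))                                    # (C,)
    # Domain embedding: two fully-connected layers with ReLU non-linearity.
    e_src = relu(z_src @ params["w1"] + params["b1"]) @ params["w2"] + params["b2"]
    # Affine parameters conditioned on the source domain embedding.
    gamma = e_src @ params["wg"] + params["bg"]                           # (C,)
    beta = e_src @ params["wb"] + params["bb"]                            # (C,)
    # Standard batch-norm statistics of the target features.
    mu = f_tgt.mean(axis=(0, 2, 3), keepdims=True)
    var = f_tgt.var(axis=(0, 2, 3), keepdims=True)
    f_norm = (f_tgt - mu) / np.sqrt(var + eps)
    # Encode the normalized target features with the source domain attribute.
    return gamma[None, :, None, None] * f_norm + beta[None, :, None, None]
```

Note that the per-channel mean and standard deviation of the output are exactly $\beta_s$ and (up to the $\epsilon$ term) $\gamma_s$, i.e. the target features now carry the source domain attribute.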
Discussion CDN exhibits a significant difference compared with existing adversarial adaptation works. As shown in Fig. 1, previous methods conduct domain confusion learning directly on semantic features to remove domain-specific factors. However, the semantic features contain both domain attribute and image content information. It is not easy to enforce the domain discriminator to regularize only the domain-specific factors without inducing any undesirable influence on the image content. In contrast, we disentangle the domain attribute from the semantic features via conditional domain normalization, and this domain attribute is used to encode the other domain's features, so that features from different domains carry the same domain attribute information.
3.3 Adapting Detector with Conditional Domain Normalization
The success of convolutional neural networks (CNNs) in pattern recognition has been largely attributed to their great capability of learning hierarchical representations. More specifically, the first few layers of a CNN focus on low-level features of local patterns, while higher layers capture semantic representations. Given this observation, CNN-based object detectors naturally exhibit different types of domain shift in the representations at various levels. Hence we incorporate CDN into different convolution stages of object detectors to address the domain mismatch adaptively, as shown in Fig. 2.
Consistent with our analysis, some recent works [35, 47] empirically demonstrate that global and local region alignments have different influences on detection performance. For easy comparison, we refer to CDN located in the backbone network as global alignment, and CDN in the bounding box head network as local or instance alignment.
As shown in Fig. 2, taking a Faster R-CNN model  with a ResNet  backbone as an example, we incorporate CDN into the last residual block at each stage. The global alignment loss can then be computed as
$\mathcal{L}_{glb} = \sum_{l} \mathbb{E}\big[\log D_l(F_s^l)\big] + \mathbb{E}\big[\log\big(1 - D_l(\hat{F}_t^l)\big)\big]$,
where $F_s^l$ and $\hat{F}_t^l$ denote the $l$-th stage's source features and encoded target features, and $D_l$ represents the corresponding domain discriminator parameterized by $\theta_l$.
As for the bounding box head network, we adopt CDN on the fixed-size region of interest (ROI) features generated by ROI pooling . Because the original ROIs are often noisy and the numbers of source and target ROIs are not equal, we randomly select the same number of ROIs from each domain, where $n_s$ and $n_t$ represent the numbers of source and target ROIs after non-maximum suppression (NMS). Hence we have an instance alignment regularization for the ROI features:
$\mathcal{L}_{ins} = \mathbb{E}\big[\log D_r(R_s)\big] + \mathbb{E}\big[\log\big(1 - D_r(\hat{R}_t)\big)\big]$,
where $R_s$ and $\hat{R}_t$ denote the sampled source and encoded target ROI features, and $D_r$ is the instance-level discriminator.
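The ROI sampling step described above can be sketched as follows (a minimal NumPy helper with names of our own choosing; the real pipeline operates on pooled ROI feature tensors rather than raw boxes):

```python
import numpy as np

def balance_rois(src_rois, tgt_rois, k, rng=None):
    """Randomly sample the same number of ROIs from each domain.

    src_rois, tgt_rois: arrays of shape (n_s, 4) and (n_t, 4), the boxes
    kept after NMS. k is the desired number of ROIs per domain, capped at
    min(n_s, n_t) so both domains contribute equally to the instance-level
    alignment loss.
    """
    rng = rng or np.random.default_rng()
    k = min(k, len(src_rois), len(tgt_rois))
    s_idx = rng.choice(len(src_rois), size=k, replace=False)
    t_idx = rng.choice(len(tgt_rois), size=k, replace=False)
    return src_rois[s_idx], tgt_rois[t_idx]
```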
The final objective of conditional domain normalization becomes
$\mathcal{L}_{cdn} = \mathcal{L}_{glb} + \lambda\, \mathcal{L}_{ins}$,
where $\lambda$ is a weight that balances the global and local alignment regularization.
Discussion Existing adversarial domain adaptation methods try to handle domain shift at one or a few specific convolution stages. However, the domain mismatch at different representation levels is not identical, due to the hierarchical nature of deep convolutional networks. As shown in Fig. 2, we incorporate CDN at various representation levels to adaptively align the source and target domain distributions.
We evaluate CDN on various real-to-real and synthetic-to-real adaptation benchmarks. The real-to-real adaptation includes Cityscapes to Foggy Cityscapes and KITTI to Cityscapes. For synthetic-to-real adaptation, we report results on adapting from different synthetic datasets to the real BDD100K dataset, including Virtual KITTI, Synscapes and SIM10K. We also report results on synthetic-to-real adaptation for 3D point cloud object detection. Mean average precision (mAP) at a fixed intersection-over-union (IoU) threshold is reported for the 2D detection experiments. We use Source and Target to represent the results of supervised training on the source and target domain, respectively.
4.1 Datasets
Cityscapes  is a European traffic scene dataset, which contains images for training and images for testing.
KITTI  contains data collected from different urban scenes, including 2D RGB images and 3D point clouds.
Synscapes  is a synthetic dataset of street scenes, which consists of images created with a photo-realistic rendering technique.
SIM10K  is a street view dataset generated from the realistic computer game Grand Theft Auto V (GTA-V). It has training images and the same categories as in Cityscapes.
PreSIL  is a synthetic point cloud dataset derived from GTA-V, which consists of frames of high-definition images and point clouds.
BDD100K  is a large-scale dataset (contains 100k images) that covers diverse driving scenes. It is a good representative of real data in the wild.
4.2 Implementation Details
We train the Faster R-CNN  model for 12 epochs in all experiments. The model is optimized by SGD with multi-step learning rate decay; SGD uses a learning rate of 0.00625 multiplied by the batch size, and a momentum of 0.9. We adopt the CDN layer in all convolution stages, including the backbone and the bounding box head network. All experiments use synchronized BN with a batch size of . $\lambda$ is set to 0.4 by default in all experiments. For Cityscapes to Foggy Cityscapes adaptation, we follow [35, 47] to prepare the train/test split, and use an image shorter side of 512 pixels. For synthetic-to-real adaptation, for a fair comparison, we randomly select images for training and for testing, for all synthetic datasets and the BDD100K dataset. For 3D point cloud detection, we use the PointRCNN  model with the same settings as . We incorporate the CDN layer into the point-wise feature generation stage (global alignment) and the 3D ROI proposal stage (instance alignment).
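The linear learning-rate scaling described above (base rate multiplied by the batch size) can be expressed as a trivial helper; the function name is ours:

```python
def linear_scaled_lr(batch_size, base_lr=0.00625):
    """Linear LR scaling: the SGD learning rate grows with the batch size,
    as in the multi-GPU training setup described in the text."""
    return base_lr * batch_size
```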
5 Experimental Results and Analysis
5.1 Results on Cityscapes to Foggy Cityscapes
We compare CDN with state-of-the-art methods in Table 1. Following [35, 47], we also report results using a Faster R-CNN model with a VGG16 backbone. As shown in Table 1, CDN outperforms previous state-of-the-art methods by a large margin in mAP. The results demonstrate the effectiveness of CDN in reducing domain gaps. A detailed comparison of different CDN settings can be found in the ablation study (Section 7). As shown in Fig. 3, our method exhibits good generalization capability under foggy weather conditions.
5.2 Results on KITTI to Cityscapes
Different camera settings may influence detector performance in real-world applications. We conduct cross-camera adaptation from KITTI to Cityscapes. Table 4 shows the adaptation results on the car category produced by Faster R-CNN with VGG16, where Global and Instance represent global and local alignment, respectively. The results demonstrate that CDN achieves mAP improvements over the state-of-the-art methods. We can also see that instance feature alignment contributes a larger performance boost than its global counterpart, which is consistent with previous findings [35, 47].
5.3 Results on SIM10K to Cityscapes
Following the setting of , we evaluate detection performance on the car category on the SIM10K-to-Cityscapes benchmark. The results in Table 4 demonstrate that CDN consistently performs better than the baseline methods. CDN with both global and instance alignment achieves the best mAP on the validation set of Cityscapes, outperforming the previous state-of-the-art method.
5.4 Results on Synthetic to Real Data
To thoroughly evaluate the performance of state-of-the-art methods on synthetic-to-real adaptation, we construct a large-scale synthetic-to-real adaptation benchmark from various public synthetic datasets, including Virtual KITTI, Synscapes and SIM10K. “All” represents using the combination of the 3 synthetic datasets. Compared with SIM10K-to-Cityscapes, the proposed benchmark is more challenging in terms of the much larger image diversity in both the real and synthetic domains. We compare CDN with the state-of-the-art method SWDA in Table 2. CDN consistently outperforms SWDA under different backbones, achieving average mAP improvements on both Faster-R18 and Faster-R50. With the same adaptation method, the detection performance strongly depends on the quality of the synthetic data: for instance, the adaptation performance from SIM10K is much better than from Virtual KITTI. Some example predictions produced by our method are visualized in Fig. 3.
5.5 Adaptation on 3D Point Cloud Detection
We evaluate CDN on adapting a 3D object detector from synthetic point clouds (PreSIL) to real point cloud data (KITTI). PointRCNN  with a PointNet++  backbone is adopted as our baseline model. Following the standard metric of the KITTI benchmark , we use Average Precision (AP) with an IoU threshold of 0.7 for car and 0.5 for pedestrian/cyclist. Table 5.5 shows that CDN consistently outperforms the state-of-the-art method PointDAN  across all categories, with an average improvement in AP. We notice that instance alignment contributes a larger performance boost than global alignment, which can be attributed to the fact that point cloud data spread over a huge 3D space while most information is stored in the local foreground points (see Fig. 5.5).
6.1 Visualize and Analyze the Feature Maps
Despite the general effectiveness on various benchmarks, we are also interested in the underlying principle of CDN. We first visualize the features of images from different domains. As shown in Fig. 4, we cannot easily distinguish the domain label from the feature maps alone, suggesting that the features from the synthetic and real domains carry the same domain attribute. Besides, objects of the same category across the synthetic and real domains share similar activation patterns and contours, indicating that our method well preserves the feature semantics.
Furthermore, we compute the Fréchet Inception Distance (FID) to quantitatively investigate the difference between source and target features. FID has been a popular metric for evaluating the style similarity between two groups of images in GANs; a lower FID score indicates a smaller style difference. For easy comparison, we normalize the FID score to $[0, 1]$ by dividing by the maximum score. As shown in Table 6.1, the features learned with CDN achieve a significantly smaller FID score than features learned on the source domain only, suggesting that CDN effectively reduces the domain gap in the feature space. As expected, supervised joint training on source and target data obtains the smallest FID score, which is consistent with the best detection performance being achieved by joint training. As shown in Fig. 6.1, synthetic-to-real has a larger FID score than the real-to-real datasets, since the former has larger domain gaps.
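The Fréchet distance underlying FID fits a Gaussian to each feature set and compares the two. A self-contained NumPy sketch (our own helper names; in practice the features come from an Inception network and the score is then normalized by the maximum):

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feat_a, feat_b):
    """Frechet distance between two feature sets, each of shape (n, d).

    Returns ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}),
    using the identity Tr((S_a S_b)^{1/2}) = Tr((S_a^{1/2} S_b S_a^{1/2})^{1/2})
    to stay with symmetric matrix square roots.
    """
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    sqrt_a = _sqrtm_psd(cov_a)
    cross = _sqrtm_psd(sqrt_a @ cov_b @ sqrt_a)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * cross))
```

For identical feature sets the distance is zero, and a pure mean shift of 1 per dimension adds exactly $d$ to the score.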
6.2 Analysis on Domain Discrepancy
We adopt the symmetric Kullback–Leibler (KL) divergence to investigate the discrepancy between the source and target domains in feature space. To simplify the analysis, we assume the source and target features are drawn from multivariate normal distributions. The divergence is calculated on the Res5-3 features and plotted in log scale. Fig. 5 (a) and (c) show that the domain divergence keeps decreasing during training, indicating that Conditional Domain Normalization continually reduces the domain shift in feature space. Benefiting from the reduction of domain divergence, the adaptation performance on the target domain keeps increasing.
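Under the Gaussian assumption above, the symmetric KL divergence has a closed form. A NumPy sketch (helper names are ours; a small ridge term keeps the fitted covariances invertible):

```python
import numpy as np

def kl_gauss(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL(P || Q) for multivariate normal distributions P and Q."""
    d = mu_p.shape[0]
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff - d
                  + logdet_q - logdet_p)

def symmetric_kl(feat_src, feat_tgt, eps=1e-6):
    """Symmetric KL divergence between Gaussians fitted to two feature sets,
    each of shape (n, d)."""
    d = feat_src.shape[1]
    mu_s, mu_t = feat_src.mean(axis=0), feat_tgt.mean(axis=0)
    cov_s = np.cov(feat_src, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(feat_tgt, rowvar=False) + eps * np.eye(d)
    return float(kl_gauss(mu_s, cov_s, mu_t, cov_t)
                 + kl_gauss(mu_t, cov_t, mu_s, cov_s))
```

Identical feature sets yield a divergence of zero, and larger distribution shifts yield larger values, which is the monotone behaviour tracked during training.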
Fig. 5 (b) and (d) show the t-SNE plots of instance features extracted by a Faster R-CNN model incorporating CDN. Features of the same category from the two domains group into tight clusters, suggesting that the source and target distributions are well aligned in feature space. Besides, features of different categories have clear decision boundaries, indicating that discriminative features are learned by our method. These two factors together contribute to the detection performance on the target domain.
7 Ablation Study
For the ablation study, we use a Faster R-CNN model with ResNet-18 on the SIM10K to BDD100K adaptation benchmark, and a Faster R-CNN model with VGG16 on the Cityscapes-to-Foggy Cityscapes adaptation benchmark. G and I denote adopting CDN in the backbone and in the bounding box head network, respectively.
Adopting CDN at different convolution stages. Fig. 6(a) compares the results of Faster R-CNN models adopting CDN at different convolution stages. We follow  to divide ResNet into stages. Bbox head denotes the bounding box head network. From left to right, adding more CDN layers keeps boosting the adaptation performance on both benchmarks, benefiting from adaptive distribution alignment across representations at different levels. This suggests that adopting CDN in every convolution stage is a better choice than aligning domain distributions at only one or two specific convolution stages.
Comparing with existing domain adaptation frameworks adopting CDN. Fig. 6(b) shows the results of adopting the CDN layer in existing adaptation methods like SWDA  and SCDA . Directly adopting CDN in SWDA and SCDA brings average mAP improvements on the two adaptation benchmarks, suggesting that CDN is more effective at addressing domain shifts than traditional domain confusion learning. This can be attributed to the fact that CDN disentangles the domain-specific factors from the semantic features by learning a domain-vector; leveraging this domain-vector to align the different domain distributions is more efficient.
Comparing domain embedding with semantic features. In Eq. 7, we can use either the semantic features or the domain embedding as inputs to the discriminator. Fig. 6(c) compares the adaptation performance of using semantic features with that of using the domain embedding. Although semantic features can improve performance over the baseline, the domain embedding consistently achieves better results, suggesting that the learned domain embedding well captures the domain attribute information while being free from undesirable regularization on specific image contents.
Value of $\lambda$. In Eq. 10, $\lambda$ controls the balance between the global and local regularization. Fig. 7 (left) shows the influence of different values of $\lambda$ on adaptation performance. Because object detectors naturally focus more on local regions, stronger instance regularization contributes substantially to detection performance. In our experiments, $\lambda$ between 0.4 and 0.5 gives the best performance.
Scale of synthetic dataset. Fig. 7 (middle/right) quantitatively investigates the relation between real-data detection performance and the percentage of synthetic data used for training. “All” means using the combination of the 3 different synthetic datasets. A larger synthetic dataset provides better adaptation performance, for both 2D image and 3D point cloud detection.
8 Conclusion
We presented Conditional Domain Normalization (CDN) to adapt object detectors across different domains. CDN embeds different domain inputs into a shared latent space, where the features from different domains carry the same domain attribute. We incorporate CDN into various convolution stages of the object detector network to adaptively address the domain shift in the representations at different levels. Extensive experiments demonstrate the effectiveness of CDN in adapting object detectors, on both 2D image and 3D point cloud detection tasks, and both quantitative and qualitative comparisons are conducted to analyze the features learned by our method.
A1. Interpret the Domain Embedding
Conditional domain normalization disentangles the domain-specific attribute from one domain’s semantic features by learning a domain embedding that characterizes the domain attribute information. In this section, we interpret the learned domain embedding by reconstructing RGB images from the features. As shown in Fig. 10, we first build a decoder network upon the backbone network with fixed weights; the parameters of the backbone network are obtained in the adaptation training (see Eq. 1). The decoder network mostly mirrors the backbone network, with all pooling layers replaced by nearest-neighbour up-sampling and all normalization layers removed. The decoder network is trained to reconstruct the RGB images from the features extracted by the backbone.
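The up-sampling operation that replaces each pooling layer when mirroring the backbone is plain nearest-neighbour interpolation, which can be sketched in a few lines (a minimal NumPy illustration; the actual decoder is a full convolutional network):

```python
import numpy as np

def nearest_upsample(x, scale=2):
    """Nearest-neighbour up-sampling for (N, C, H, W) feature maps: each
    spatial cell is replicated into a scale x scale block, the inverse-shaped
    counterpart of a pooling layer in the mirrored decoder."""
    return x.repeat(scale, axis=2).repeat(scale, axis=3)
```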
For contrast analysis, only single-domain images are used to train the decoder network, i.e., the decoder for the Cityscapes experiment is trained on Foggy Cityscapes images, and the decoder for the SIM10K experiment is trained on SIM10K images. After training the decoder network, we use it to reconstruct RGB images from features encoded with the domain embedding.
Fig. 8 shows the effect of the domain embedding learned in the Cityscapes to Foggy Cityscapes adaptation experiments (Section 5.1). The top row shows the inputs from Foggy Cityscapes; the middle row shows the reconstructed results from the features of the Foggy Cityscapes inputs; the bottom row shows the reconstructed results from Foggy Cityscapes features encoded with the domain embedding learned on Cityscapes. With the help of the domain embedding learned on Cityscapes, the reconstructed results from Foggy Cityscapes features no longer exhibit foggy characteristics, suggesting that both Cityscapes and Foggy Cityscapes inputs are embedded into a shared latent space, where their features carry the same domain attribute. Given the bridged domain gap, the object detector trained with supervision on Cityscapes also works on Foggy Cityscapes.
Fig. 9 and 11 show the reconstructed results from synthetic data's features encoded with the domain embedding of real data (BDD100K), learned in the SIM10K-to-BDD100K and Synscapes-to-BDD100K adaptation experiments, respectively (see Section 5.4). Without the domain embedding of real data, the reconstructed images (middle row of Fig. 9 and 11) still exhibit the characteristics of computer graphics and look identical to the original images. When the same features are encoded with the domain embedding of real data, the reconstructed images (bottom row of Fig. 9 and 11) become noticeably more realistic; for example, the color of the sky and the textures of the road and objects look similar to those of real images. This demonstrates that the learned domain embedding well captures the domain attribute information of real data and can be used to effectively translate synthetic images towards real images.
A.2 More Qualitative Results
Fig. 12 shows 3D point cloud detection results on the KITTI dataset.
-  (2010) A theory of learning from different domains. Machine learning 79 (1-2), pp. 151–175. Cited by: §1.
-  (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3722–3731. Cited by: §2.
-  (2015) 3d object proposals for accurate object class detection. In Advances in Neural Information Processing Systems, pp. 424–432. Cited by: §2.
-  (2018) Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3339–3348. Cited by: §1, §2, Table 1, Table 4.
The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223. Cited by: §4.1.
-  (2016) A learned representation for artistic style. arXiv preprint arXiv:1610.07629. Cited by: §2.
-  (2016) Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4340–4349. Cited by: §4.1.
Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pp. 1180–1189. Cited by: §1, §2.
-  (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
-  (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587. Cited by: §1, §2.
-  (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §2.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §3.3, §7.
-  (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, pp. 6626–6637. Cited by: §6.1.
-  (2017) Cycada: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213. Cited by: §2.
-  (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510. Cited by: §2.
-  (2019) Precise synthetic image and lidar (presil) dataset for autonomous vehicle perception. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2522–2529. Cited by: §2, §4.1.
-  (2018) Cross-domain weakly-supervised object detection through progressive domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
-  (2019) Sim-to-real via sim-to-sim: data-efficient robotic grasping via randomized-to-canonical adaptation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12627–12637.
-  (2016) Driving in the Matrix: can virtual worlds replace human-generated annotations for real world tasks? arXiv preprint arXiv:1610.01983.
-  (2019) Diversify and match: a domain adaptive representation learning paradigm for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12456–12465.
-  (2019) Bidirectional learning for domain adaptation of semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6936–6945.
-  (2019) Compound domain adaptation in an open world.
-  (2018) Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems, pp. 1640–1650.
-  (2019) Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2337–2346.
-  (2018) MegDet: a large mini-batch object detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6181–6189.
-  (2019) Moment matching for multi-source domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
-  (2018) Syn2Real: a new benchmark for synthetic-to-real visual domain adaptation. arXiv preprint arXiv:1806.09755.
-  (2018) Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8.
-  (2018) Frustum PointNets for 3D object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 918–927.
-  (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pp. 5099–5108.
-  (2019) PointDAN: a multi-scale 3D domain adaption network for point cloud representation. In Advances in Neural Information Processing Systems, pp. 7190–7201.
-  (2009) Dataset shift in machine learning. The MIT Press.
-  (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
-  (2019) Strong-weak distribution alignment for adaptive object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
-  (2018) Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision 126 (9), pp. 973–992.
-  (2019) Domain adaptation for vehicle detection from bird's eye view LiDAR point cloud data. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW).
-  (2018) Learning from synthetic data: addressing domain shift for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3752–3761.
-  (2019) PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–779.
-  (2017) Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23–30.
-  (2019) Domain adaptation for structured output via discriminative representations. arXiv preprint arXiv:1901.05427.
-  (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7167–7176.
-  (2018) Synscapes: a photorealistic synthetic dataset for street scene parsing. arXiv preprint arXiv:1810.08705.
-  (2015) Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579.
-  (2018) BDD100K: a diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687.
-  (2018) A LiDAR point cloud generator: from a virtual world to autonomous driving. In Proceedings of the 2018 ACM International Conference on Multimedia Retrieval (ICMR), pp. 458–464.
-  (2019) Adapting object detectors via selective cross-domain alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
-  (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 289–305.