are unfortunately hampered by their dependence on massive amounts of training examples. Ad-hoc collection and annotation of training data for various computer vision applications is cumbersome and costly. 3D CAD simulation is a promising solution to this problem [30, 18, 35, 36, 39]. Rendering images from freely available CAD models can potentially produce an infinite number of training examples from many viewpoints and for almost any object category. Previous work utilized computer graphics (CG) techniques to render 2D CAD-synthetic images and then trained deep CNN-based classifiers on them. However, such CAD-synthetic images are highly non-realistic due to the absence of natural object texture and background. More specifically, they exhibit the following problems: 1) a large mismatch between foreground and background, 2) unnaturally high contrast between object edges and the background, 3) non-photorealistic scenery. These problems inevitably lead to a significant domain shift between CAD-synthetic and real images.
To minimize the domain shift, domain adaptation methods have been proposed to align two domains in manifold space [11, 7] or in deep feature space [20, 22, 38]. These algorithms bridge the domain gap between real-image domains. However, the domain shift between synthetic and real domains is much larger than that between two real-image domains. CAD-to-real adaptation methods [26, 5] have been proposed but they only align the viewpoint of specific indoor categories and cannot be directly applied to recognition systems in the wild.
Our main idea is to incorporate a domain adaptation algorithm into generative networks. Generative neural networks have recently been proposed to create novel imagery that shares common properties with some given images, such as content and style, or similarity in feature space [19, 12, 42, 24, 27]. However, these approaches have several limitations for use in domain adaptation. For example, Generative Adversarial Nets (GANs) and style transfer approaches [9, 8] can generate images but are not designed for domain adaptation. Coupled GANs only handle domain shifts between small images (28×28 pixel resolution). Conditional GANs
can learn image-to-image translation but need paired training data that are costly to obtain, i.e., CAD models and corresponding natural images.
To overcome the limitations of the aforementioned approaches, we propose a Deep Generative Correlation Alignment Network (DGCAN) to bridge the domain discrepancy between CAD-synthetic and real images. Our work is primarily motivated by [30, 9, 38]. As shown in Figure 1, we generate novel images by matching the convolutional layer features with those of a content CAD-synthetic image and the feature statistics of a real image containing a background scene. Unlike neural style , the goal is not to create an artistic effect but rather to adapt the CAD-synthetic data to match the statistics of real images and thus improve generalization. To this end, we employ the correlation alignment (CORAL) loss  for adaptation. However, instead of learning to align features, we generate images whose feature correlations match the target real-image domain.
Our synthesized results reveal that DGCAN can satisfactorily blend the contour of specific objects (from CAD-synthetic images) with natural textures from real images. Although the generated images are not fully photorealistic, they appear to have more natural statistics to the deep network, improving its performance. Extensive experiments on the PASCAL and Office datasets show that our approach yields a significant performance boost compared to the previous state-of-the-art methods [30, 37, 38, 7, 11, 20, 22, 9].
The contributions of this paper can be summarized as follows.
We propose the Deep Generative Correlation Alignment Network (DGCAN), which synthesizes images that combine object contours from the CAD-synthetic domain with natural textures from the real image domain.
We explore the effect of applying the content and CORAL losses to different layers and determine the optimal configuration to generate the most promising stimuli.
We empirically show the effectiveness of our model over several state-of-the-art methods by testing on real image datasets.
2 Related Work
CAD Simulation CAD simulation has been extensively used by researchers since the early days of computer vision . 3D CAD models have been utilized to generate stationary synthetic images with variable object poses, textures, and backgrounds . Recent usage of CAD simulation has been extended to multiple vision tasks, e.g. object detection [30, 26]
, pose estimation [18, 35, 36, 39], robotic simulation, and semantic segmentation. However, for many tasks, CAD-synthetic images are of too low quality due to the absence of realistic backgrounds and texture. To mitigate this drawback, prior work proposes to directly add auxiliary texture and background to the rendered results with the help of commercial software (e.g. AutoDesk 3ds Max, http://www.autodesk.com/store/products/3ds-max). However, this method introduces new problems, such as unnatural positioning of objects (e.g. a car floating above the road), high contrast between object boundaries and background, etc. Our approach tackles these problems by synthesizing novel imagery with DGCAN, generating images with natural feature statistics.
DCNN Image Synthesis Deep convolutional neural networks learn distributed, invariant and nonlinear feature representations from large-scale image repositories. Generative Adversarial Networks (GANs) and their variations [27, 29] aim to synthesize images that are indistinguishable from the distribution of images in their training set. However, training GANs is difficult and often leads to oscillatory behavior. Style transfer synthesizes novel stimuli by aligning the conv layer features and the Gram matrices of the features. In this way, the synthesized image simultaneously preserves the arrangement of a content image (often a normal photograph) and the colours and subtle local structures of a style image (often an artist’s work). Our approach is inspired by style transfer, but is geared towards adapting a set of CAD-synthetic images to the real image domain with a domain adaptation loss.
Domain Adaptation Domain shift results in a significant performance degradation when recognition systems are trained on one domain (source) and tested on another (target). Shallow domain adaptation algorithms aim to bridge the two feature distributions via mappings learned either by minimizing a distribution distance metric [2, 37], or by projecting the feature distributions to a common low-dimensional manifold [13, 11, 21]. Deep domain adaptation methods address the domain shift by adding one or multiple adaptation layers and losses [41, 40, 20, 38], or use an adversarial network to match the source distribution to the target [40, 19]. All of the aforementioned methods follow the paradigm of aligning the source domain and target domain in feature space. In contrast, we take a generative approach to combine the statistics of target domain images with the content of source domain images. Recently proposed generative models [3, 19] adapt two domains with adversarial losses. However, these methods only generate small images. Our model can generate large images with arbitrary resolution.
Suppose we are given labeled source-domain CAD-synthetic (image, label) pairs and unlabeled target-domain real images. Since the target domain is unlabeled, object classifiers can only be trained on the source data. However, their performance will degrade when testing on the target data due to the domain discrepancy. Our aim is to synthesize a labeled intermediate dataset, such that each generated image shares a similar object shape and contour with a CAD-synthetic image, and a similar local pattern, color, and subtle structure (“style” as illustrated in [9]) with some random real image.
To generate such an image from a CAD-synthetic and a real image, the most straightforward method is to average the two. Traditional computer vision blending approaches, such as half-half alpha blending or pyramid blending, lead to image artifacts that contribute to the domain shift. The previous CG-based method applied real-image backgrounds and textures to CAD models, leading to the problems illustrated in Section 1. Instead, we propose to align the generated image to the CAD-synthetic and real inputs in the DCNN feature space, as shown in Figure 2. The generation is guided by two losses, one to ensure the object contour stays the same, and the other to ensure the image has low-level statistics similar to real images.
3.1 Deep Convolutional Neural Network
We base our approach on the VGG-16 network, which consists of 13 convolutional layers (conv1_1-conv5_3), 3 fully connected layers (fc6-fc8) and 5 pooling layers (pool1-pool5). The convolutional layers consist of a set of learnable kernels. Each kernel is convolved with the input volume to compute hidden activations during the forward pass, and its parameters are updated through a back-propagation pass. We denote by $F^l \in \mathbb{R}^{N_l \times M_l}$ the representation matrix of layer $l$, where $N_l$ is the number of feature channels of layer $l$ and $M_l$ is the number of activations per channel; $F^l_{ij}$ denotes an individual entry.
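To make this notation concrete, the sketch below (an illustration, not the paper's code; the 512×14×14 shape is the standard VGG-16 conv5_2 output size for a 224×224 input) flattens a conv activation tensor into a channels × positions representation matrix:

```python
import numpy as np

# Hypothetical conv5_2 activation for a 224x224 input to VGG-16:
# 512 feature channels over a 14x14 spatial grid.
act = np.random.rand(512, 14, 14)

# Flatten into a (channels x spatial positions) representation matrix:
# each row holds one channel's responses across all spatial locations.
F = act.reshape(act.shape[0], -1)
print(F.shape)  # (512, 196)
```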
3.2 Shape Preserving loss
To preserve the shape information of CAD-synthetic images, we propose to use the $\ell_2$ loss in feature space:

$$\mathcal{L}_{\ell_2} = \sum_{l \in \mathcal{L}_s} \frac{w^s_l}{2 N_l M_l} \sum_{i,j} \left( F^l_{ij}(\hat{x}) - F^l_{ij}(x^s) \right)^2, \qquad (1)$$

where $\hat{x}$ is the generated image and $x^s$ the CAD-synthetic content image; $w^s_l$ is the loss weight of layer $l$ in DCNN feature space; $\mathcal{L}_s$ is the collection of convolutional layers which the loss is applied to; $F^l \in \mathbb{R}^{N_l \times M_l}$, where $N_l$ is the channel number of layer $l$'s feature, and $M_l$ is the length of the feature in a single channel.

The derivative of this loss with respect to the activations in a particular layer can be computed by:

$$\frac{\partial \mathcal{L}_{\ell_2}}{\partial F^l_{ij}(\hat{x})} = \frac{w^s_l}{N_l M_l} \left( F^l_{ij}(\hat{x}) - F^l_{ij}(x^s) \right). \qquad (2)$$

This gradient can be back-propagated to update the pixels while synthesizing $\hat{x}$.
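For a single layer, the shape-preserving loss and its gradient can be sketched in NumPy as follows (a minimal illustration; the function name and the 1/(2·N·M) normalization are our own choices, not code from the paper):

```python
import numpy as np

def l2_content_loss(F_gen, F_src, weight=1.0):
    """Shape-preserving l2 loss between generated and source feature
    matrices (N channels x M positions), and its gradient."""
    n, m = F_gen.shape
    diff = F_gen - F_src
    loss = weight / (2.0 * n * m) * np.sum(diff ** 2)
    grad = weight / (n * m) * diff  # d loss / d F_gen
    return loss, grad

# Toy check: all-zero "generated" features against all-one "source" ones.
F_src = np.ones((4, 6))
F_gen = np.zeros((4, 6))
loss, grad = l2_content_loss(F_gen, F_src)
```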
3.3 Naturalness loss
Networks trained on CAD images will not work well on real input images because of the mismatch in low-level image statistics such as textures, edge contrast, color, etc. To align the low-level texture statistics of the generated images to the real image domain, we propose to employ the CORAL loss. Correlation Alignment (CORAL) was first devised by [37] to match the second-order statistics of feature distributions for domain adaptation. It is derived by minimizing the domain discrepancy with the squared Frobenius norm $\| C_s - C_t \|^2_F$, where $C_s$ and $C_t$ are the covariance matrices of feature vectors from the source domain and target domain, respectively. This minimization can be solved in closed form by whitening the source features and then re-coloring them with the target covariance.
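For intuition, the original (non-deep) CORAL transform of [37] can be sketched as whitening the source features and re-coloring them with the target covariance. The NumPy version below is a simplified illustration; the regularization term `eps` is an assumption added for numerical stability:

```python
import numpy as np

def coral_transform(Xs, Xt, eps=1e-3):
    """Align source features Xs (n x d) to target features Xt (m x d)
    by whitening with the source covariance and re-coloring with the
    target covariance (second-order statistics matching)."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)

    def mat_power(C, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(C)
        return (vecs * np.maximum(vals, 1e-12) ** p) @ vecs.T

    # Whiten, then re-color: Xs @ Cs^{-1/2} @ Ct^{1/2}
    return Xs @ mat_power(Cs, -0.5) @ mat_power(Ct, 0.5)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
Xt = rng.normal(size=(500, 3))
Xs_aligned = coral_transform(Xs, Xt)
# After alignment, the source covariance closely matches the target's.
```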
Inspired by [38], we define the CORAL loss as

$$\mathcal{L}_{CORAL} = \sum_{l \in \mathcal{L}_n} \frac{w^n_l}{4 N_l^2} \left\| C^l(\hat{x}) - C^l(x^t) \right\|^2_F, \qquad (3)$$

where $\hat{x}$ is the generated image and $x^t$ a real target image; $w^n_l$ is the loss weight of layer $l$; $\mathcal{L}_n$ is the collection of convolutional layers that the loss is applied to; $C^l$ is the covariance matrix of layer $l$'s activations; $\|\cdot\|_F$ denotes the Frobenius distance.

Analogous to [38], the covariance matrices are given by:

$$C^l = \frac{1}{M_l - 1} \left( F^l (F^l)^\top - \frac{1}{M_l} \left( F^l \mathbf{1} \right) \left( F^l \mathbf{1} \right)^\top \right), \qquad (4)$$

where $F^l \in \mathbb{R}^{N_l \times M_l}$, $\mathbf{1}$ is a column all-one vector, and $N_l$ is the number of feature channels in layer $l$.
The derivative of the CORAL loss with respect to the activations of a particular layer $l$ can be calculated with the chain rule:

$$\frac{\partial \mathcal{L}_{CORAL}}{\partial F^l(\hat{x})} = \frac{w^n_l}{N_l^2 (M_l - 1)} \left( C^l(\hat{x}) - C^l(x^t) \right) \bar{F}^l(\hat{x}), \qquad (5)$$

where $\bar{F}^l(\hat{x})$ denotes the feature matrix with the per-channel mean subtracted.
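The covariance estimate and its chain-rule gradient can be combined into a per-layer CORAL loss sketch (a NumPy illustration; the 1/(4N²) normalization follows Deep CORAL [38], and treating each layer independently with a scalar weight `w` is our simplification):

```python
import numpy as np

def coral_loss(F_gen, F_tgt, w=1.0):
    """CORAL loss between two N x M feature matrices: squared
    Frobenius distance of their channel covariance matrices."""
    n, m = F_gen.shape

    def cov(F):
        Fc = F - F.mean(axis=1, keepdims=True)  # center each channel
        return Fc @ Fc.T / (m - 1)

    Cg, Ct = cov(F_gen), cov(F_tgt)
    loss = w / (4.0 * n ** 2) * np.sum((Cg - Ct) ** 2)
    Fc = F_gen - F_gen.mean(axis=1, keepdims=True)
    grad = w / (n ** 2 * (m - 1)) * (Cg - Ct) @ Fc  # d loss / d F_gen
    return loss, grad

F = np.random.rand(8, 32)
loss, grad = coral_loss(F, F)  # identical inputs give zero loss
```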
Our final method combines the loss functions defined by Equations (1) and (3). We start from the content image $x^s$ and pre-process it by adding a random perturbation drawn from a Gaussian distribution. We then feed the image forward through DGCAN and compute the $\ell_2$ loss with respect to $x^s$ and the CORAL loss with respect to $x^t$. The back-propagated gradient then guides the image synthesis process. Hence, the synthesized image is the output of:

$$\hat{x} = \arg\min_x \; \mathcal{L}_{DGCAN} = \arg\min_x \; \mathcal{L}_{\ell_2} + \lambda \mathcal{L}_{CORAL}, \qquad (6)$$

where $\mathcal{L}_{DGCAN}$ denotes the total loss of DGCAN and $\lambda$ denotes the trade-off weight between the $\ell_2$ loss and the CORAL loss. The hyperparameter $\lambda$ is set through cross validation.
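To illustrate the synthesis procedure end to end, the toy sketch below performs gradient descent directly on the pixels, using the raw pixel values as "features" and a single variance statistic in place of the full covariance. This is a drastic simplification of DGCAN (in the real model the gradients flow through VGG-16), with made-up weights and step sizes:

```python
import numpy as np

def total_loss(x, content, style_var, lam):
    """l2 term to the content plus a second-order (variance) term
    matching the style statistics."""
    return 0.5 * np.sum((x - content) ** 2) \
        + lam * 0.5 * (x.var() - style_var) ** 2

def synthesize(content, style_var, lam=50.0, steps=300, lr=0.1, seed=0):
    """Gradient descent directly on the pixels."""
    rng = np.random.default_rng(seed)
    x = content + 0.1 * rng.normal(size=content.shape)  # random perturbation
    n = x.size
    for _ in range(steps):
        g_shape = x - content                      # gradient of the l2 term
        g_nat = (x.var() - style_var) * 2.0 * (x - x.mean()) / n
        x = x - lr * (g_shape + lam * g_nat)
    return x

content = np.zeros(64)
x_hat = synthesize(content, style_var=3.0)
```

The result settles at a compromise: pixels stay close to the content while their variance is pulled toward the style statistic.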
Our experiments include two parts. First, we apply DGCAN to the CAD-synthetic dataset provided by [30] to synthesize adapted DGCAN-synthetic images. Second, we train off-the-shelf classifiers on the DGCAN-synthetic images and test them on the PASCAL VOC 2007 and Office datasets. We implement our model with the Caffe framework. The datasets (both CAD-synthetic and DGCAN-synthetic), code and experimental configurations will be made publicly available.
4.1 Generating Adapted Images
As shown in Figure 2, while generating the DGCAN-synthetic dataset, we use CAD-synthetic images as the content inputs and real images downloaded from the Google image search engine as the style inputs.
CAD-Synthetic Dataset The CAD-synthetic dataset in [30] was rendered from 3D CAD models for zero-shot or few-shot learning tasks. The dataset contains 6 subsets with different configurations (i.e. RR-RR, W-RR, W-UG, RR-UG, RG-UG, RG-RR). The process of rendering the dataset (we refer the reader to [30] for more details) can be summarized as follows: (1) collecting 3D CAD models from large-scale online repositories (Google Sketchup, Stanford 3D ShapeNet, http://shapenet.cs.stanford.edu/), (2) selecting image cues (background, texture, pose, etc.), (3) rendering synthetic images with AutoDesk 3ds Max. In our experiments, we only adopt images with a white background, because the other subsets suffer from the issues described in Section 1.
Parameter tuning To determine the optimal configuration of the two losses and their trade-off, we exhaustively apply the $\ell_2$ and CORAL losses to different conv layers and vary the trade-off weight over a range of values on a small validation dataset.
Results and Analysis A representative subset of rendered results with different settings is shown in Figure 3. The left plot shows the effect of different layer configurations of the two losses. The results demonstrate that when the $\ell_2$ loss is applied to lower conv layers, DGCAN generates a more distinct object contour from the CAD-synthetic data, and when the CORAL loss is applied to higher conv layers, DGCAN generates more structured texture. Empirical evidence shows that this effect mainly stems from two factors. The first is the increasing receptive field size: the receptive field sizes of VGG-16's conv1_2, conv2_2, conv3_2, conv4_2 and conv5_2 are 5, 14, 32, 76 and 164 pixels, respectively. The second is the increasing feature complexity along the network hierarchy.
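The receptive-field sizes quoted above can be verified with the standard recurrence rf ← rf + (k − 1)·j, where k is the kernel size and j the cumulative stride (a quick sketch based on the published VGG-16 architecture, not the paper's code):

```python
# VGG-16 layers up to conv5_2 as (name, kernel size, stride).
layers = [
    ("conv1_1", 3, 1), ("conv1_2", 3, 1), ("pool1", 2, 2),
    ("conv2_1", 3, 1), ("conv2_2", 3, 1), ("pool2", 2, 2),
    ("conv3_1", 3, 1), ("conv3_2", 3, 1), ("conv3_3", 3, 1), ("pool3", 2, 2),
    ("conv4_1", 3, 1), ("conv4_2", 3, 1), ("conv4_3", 3, 1), ("pool4", 2, 2),
    ("conv5_1", 3, 1), ("conv5_2", 3, 1),
]

rf, jump, rfs = 1, 1, {}
for name, k, s in layers:
    rf += (k - 1) * jump   # grow the receptive field
    jump *= s              # accumulate the stride
    rfs[name] = rf

print([rfs[n] for n in ("conv1_2", "conv2_2", "conv3_2", "conv4_2", "conv5_2")])
# [5, 14, 32, 76, 164]
```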
To find the optimal trade-off between the two losses, we synthesize images while varying the weight on the CORAL loss over a range of values. The right plot in Figure 3 reveals that when this weight is small, the object contour dominates the background texture cues. On the contrary, as the weight increases, the contour of the object gradually fades away and more structured textures from the real image emerge.
We randomly select some rendered results from three categories (“aeroplane”, “potted plant”, “sofa”) and show them in the left plot of Figure 4. The images are generated with the optimal configuration determined above. The results demonstrate that DGCAN-synthetic imagery preserves clear object contours from the CAD-synthetic images while synthesizing textures from the realistic domain.
We further leverage a DCNN visualization tool to show that DGCAN-synthetic images share more similarities with real images. The tool provides an effective way to reconstruct an image from its feature representation. We compare the reconstruction results of bird images from three domains: DGCAN-synthetic, CAD-synthetic and real. In the right subplot of Figure 4, the odd rows show the original bird images, and their corresponding reconstructions are located in the even rows. From the plots, we can observe recognizable bird shapes in the reconstructions of DGCAN-synthetic images, whereas the birds in the reconstructions from the CAD-synthetic domain are lost in noisy color patches. These visualization results demonstrate that the DCNN can better recover category information from DGCAN-synthetic images than from CAD-synthetic images.
4.2 Domain Adaptation Experiments
In this section, we evaluate our approach on CAD-to-real domain adaptation tasks, using object classification as the application. The goal is to generate adapted CAD images using our approach, then train deep object classifiers on the data, and test on real-image benchmarks. We compare the effectiveness of our model to previous methods [17, 38, 37, 11, 20, 22, 9, 7] on two benchmarks: PASCAL VOC 2007  and the Office  dataset.
4.2.1 Experiments on PASCAL VOC 2007
Train/Test Set Acquisition As a training set, we generate 1080 images with DGCAN from the W-UG subset of CAD-synthetic dataset . These images are equally distributed into 20 PASCAL categories. For evaluation, we crop 14976 patches from 4952 images in the test subset of PASCAL VOC 2007 dataset . The patches are cropped using annotated object bounding boxes and each patch contains only one object.
In the training process, the networks are initialized with parameters pre-trained on ImageNet. We replace the last output layer with a randomly initialized 20-way classifier. We use mini-batch stochastic gradient descent (SGD) with a momentum of 0.9 to finetune all the layers, with a fixed base learning rate and weight decay. Specifically, we set the dropout ratios for fc6 and fc7 of “AlexNet” to 0.5. We report the results when the training iteration reaches 40k.
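The SGD-with-momentum update used for finetuning can be sketched as follows (the generic heavy-ball rule with momentum 0.9; the learning rate and the toy quadratic objective are illustrative, not the paper's values):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update: v <- mu*v - lr*g; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Sanity check on f(w) = 0.5*||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(500):
    w, v = sgd_momentum_step(w, w, v, lr=0.1)
# w converges to the minimizer at the origin.
```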
To compare with state-of-the-art domain adaptation methods, we use the following baselines. CORAL [37] aligns the feature distribution of the source domain to the target domain. DCORAL (Deep CORAL) [38] incorporates CORAL as a loss layer in the DCNN. SA (Subspace Alignment) [7] learns a mapping function to align the subspace of the source domain with that of the target domain, where each subspace is described by the eigenvectors of the features. GFK (Geodesic Flow Kernel) [11] models domain discrepancy by integrating numerous subspaces which characterize changes in geometric and statistical properties; based on these subspaces, a geodesic curve is constructed, a geodesic flow kernel is computed and a kernel-based classifier is trained. DAN (Deep Adaptation Network) [20] and RTN (Residual Transfer Network) [22] train deep models with the Maximum Mean Discrepancy loss to align the feature distributions of the two domains.
For a fair comparison, we use the same 1080 W-UG images that we utilized to generate our DGCAN-synthetic dataset as the source domain for the domain adaptation algorithms. For the style-transfer method [9], we use the same CAD-synthetic (content) images and real (style) images to generate a new dataset. For SA [7] and GFK [11], we first extract deep features and then apply their models to get the baseline results. For all the baselines, we use the code and experimental settings provided by the authors.
Results and Analysis The per-category accuracies of the AlexNet classifier are presented in Table 1, demonstrating that our approach outperforms competing methods. After applying our approach to CAD-synthetic data, the overall accuracy rises from 18.48% to 27.46%. Additionally, Table 1 shows that our approach gains a clear advantage over the state-of-the-art domain adaptation algorithms [37, 38, 7, 11, 20, 22] and the style-transfer baseline [9]. The latter result reveals that aligning the covariance matrix works better than aligning the Gram matrix in the synthetic-to-real domain adaptation scenario. In Table 2, we further show that VGG and ResNet classifiers trained on the adapted dataset outperform training directly on the CAD-synthetic data [30] as well as the domain adaptation methods [37, 38, 7, 11]. With our model, the accuracies of the VGG and ResNet classifiers rise from 10.3% to 22.92% and from 13.13% to 20.59%, respectively. We notice that AlexNet achieves the best overall performance. Given that VGG and ResNet have more parameters than AlexNet, we conjecture that they overfit to the generated synthetic dataset, which causes poor generalization to the real-image domain.
We visualize how the inter-class confusion patterns and the feature embeddings change after applying our model, as shown in Figure 5. The confusion matrices on the left show that AlexNet trained on the CAD-synthetic dataset tends to mistake other categories for “boat” and “train”. This phenomenon disappears after applying our model to the CAD-synthetic dataset, as illustrated by the second subplot on the left of Figure 5. This effect is partially explained by the texture synthesizing ability of DGCAN, which adds discriminative cues to the CAD-synthetic images. At the feature level, we visualize layer fc7's feature embeddings with t-SNE before and after applying our model, as illustrated in the two right subplots of Figure 5. The t-SNE visualizations clearly show that the features of realistic and synthetic images are better aligned after applying our model to the CAD-synthetic images.
4.2.2 Experiments on the Office Dataset
We also evaluate our method on the Office benchmark , which was introduced specifically for studying the effect of domain shift on object classification. We evaluate the domain generalization ability of our approach by adapting the CAD domain to the real-image Amazon domain (images downloaded from amazon.com) in the Office dataset.
Table 4 (excerpt). In each block, the first row gives the ground-truth (GT) label, the second row the prediction of the classifier trained on CAD-synthetic images, and the third row the prediction of our model (red: incorrect, green: correct).

|GT||back pack||bike helmet||bookcase||bookcase||bookcase||calculator||calculator||computer||file cabinet|
|CAD-trained||printer (red)||printer (red)||bottle (red)||computer (red)||file cabinet (red)||keyboard (red)||phone (red)||printer (red)||bookcase (red)|
|Ours||back pack (green)||bike helmet (green)||bookcase (green)||bookcase (green)||bookcase (green)||calculator (green)||calculator (green)||computer (green)||file cabinet (green)|
|GT||keyboard||laptop||laptop||letter tray||mb phone||monitor||trash can||letter tray||desk lamp|
|CAD-trained||computer (red)||keyboard (red)||notebook (red)||monitor (red)||calculator (red)||monitor (green)||trash can (green)||printer (red)||bike (red)|
|Ours||keyboard (green)||laptop (green)||laptop (green)||letter tray (green)||mb phone (green)||keyboard (red)||ring binder (red)||punchers (red)||bike (red)|
Train/Test Set Acquisition We apply our model to the 775 CAD-synthetic images provided by [30] to generate the training dataset. These CAD-synthetic images were rendered to train object detectors for the Office dataset. To collect the natural style images, for each category we downloaded 45 images from Google by searching for the category's name. The test set comes from the Office dataset, which has the same 31 categories (backpack, cups, etc.) in three domains: Amazon, Webcam (collected with a webcam) and DSLR (collected with a DSLR camera). Specifically, we use the Amazon set (2817 images) as the test set in our experiments, as it is the most challenging setting and this domain differs significantly from PASCAL (see Table 4 for examples).
Baselines We compare our approach to two sets of baselines, with one set trained on another real image domain in Office (Webcam domain, 795 images) and the other trained on the CAD-synthetic domain (775 images). In both sets, we compare to the basic AlexNet  model (no adaptation) and domain adaptation algorithms [7, 11, 37, 20].
Results The results demonstrate that our approach performs strongly on this benchmark, as can be seen in Table 3. The overall classification accuracy of our model is 49.91%, versus 44.69% for a classifier trained on the CAD-synthetic domain directly. The table also shows that DGCAN beats the other baselines [7, 11, 37] and classifiers trained on real images (the Webcam domain), and is slightly better than the Deep Adaptation Network (DAN) [20]. We note here that this and the other unsupervised domain adaptation baselines make use of the test data to train their alignment models (transductive training). Our method, on the other hand, does not use the test images for training, but performs well nonetheless.
We further show that our model is complementary to transductive domain adaptation algorithms. We set the newly generated dataset as the new source domain and adapt it to the real Amazon domain with DAN [20]. As shown in Table 3, this boosts the performance from 49.63% (49.27%) for DAN trained on the real (CAD-synthetic) domain to 51.93%.
Table 4 shows some results for which the classifier trained on CAD-synthetic images fails to predict the correct labels while our model predicts the right ones, as well as some representative mistakes. The results show the potential to generate better training data for a large variety of object categories. An interesting example is the “desk lamp” with a toy bike in the middle, causing both models to mistake it for a bike.
Generating large-scale training examples from 3D CAD models is a promising alternative to expensive data annotation. However, the domain discrepancy between CAD-synthetic images and real images severely undermines the performance of deep learning models on real world applications.
In this work, we have proposed and implemented the Deep Generative Correlation Alignment Network to adapt the CAD-synthetic domain to realistic domains by generating images with more natural feature statistics. We demonstrated that leveraging the $\ell_2$ loss to preserve the content of the CAD models in feature space, while applying the second-order CORAL loss to diminish the domain discrepancy, is effective for synthesizing adapted training images. We empirically show that DGCAN-synthetic images are more suitable for training deep CNNs than CAD-synthetic ones. An extensive evaluation on standard benchmarks demonstrates the feasibility and effectiveness of the proposed approach against previous methods. We believe our model can be generalized to other generic tasks such as pose estimation, saliency detection and robotic grasping.
-  Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
-  K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57, 2006.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. arXiv preprint arXiv:1612.05424, 2016.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
-  A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
-  B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In Proc. ICCV, 2013.
-  L. A. Gatys, M. Bethge, A. Hertzmann, and E. Shechtman. Preserving color in neural artistic style transfer. arXiv preprint arXiv:1606.05897, 2016.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524, 2013.
-  B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2066–2073. IEEE, 2012.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
-  R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In Proc. ICCV, 2011.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
-  P. Isola, J. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. CoRR, abs/1611.07004, 2016.
-  Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  J. Liebelt and C. Schmid. Multi-view object class detection with a 3d geometric model. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 1688–1695. IEEE, 2010.
-  M. Liu and O. Tuzel. Coupled generative adversarial networks. CoRR, abs/1606.07536, 2016.
-  M. Long and J. Wang. Learning transferable features with deep adaptation networks. CoRR, abs/1502.02791, 1:2, 2015.
-  M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu. Transfer joint matching for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1410–1417, 2014.
-  M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems, pages 136–144, 2016.
-  L. v. d. Maaten and G. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
-  A. Mahendran and A. Vedaldi. Understanding Deep Image Representations by Inverting Them. ArXiv e-prints, Nov. 2014.
-  A. Mahendran and A. Vedaldi. Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision, pages 1–23, 2016.
-  F. Massa, B. C. Russell, and M. Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6024–6033, 2016.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
-  R. Nevatia and T. O. Binford. Description and recognition of curved objects. Artificial Intelligence, 8(1):77 – 98, 1977.
-  A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304, 2016.
-  X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3d models. In Proceedings of the IEEE International Conference on Computer Vision, pages 1278–1286, 2015.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102–118. Springer, 2016.
-  K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In Computer Vision–ECCV 2010, pages 213–226. Springer, 2010.
-  D. Sejdinovic, B. Sriperumbudur, A. Gretton, K. Fukumizu, et al. Equivalence of distance-based and rkhs-based statistics in hypothesis testing. The Annals of Statistics, 41(5):2263–2291, 2013.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  M. Stark, M. Goesele, and B. Schiele. Back to the future: Learning shape models from 3d cad data. In BMVC, volume 2, page 5, 2010.
-  H. Su, C. R. Qi, Y. Li, and L. J. Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pages 2686–2694, 2015.
-  B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. arXiv preprint arXiv:1511.05547, 2015.
-  B. Sun and K. Saenko. Deep CORAL: correlation alignment for deep domain adaptation. CoRR, abs/1607.01719, 2016.
-  M. Sun, H. Su, S. Savarese, and L. Fei-Fei. A multi-view probabilistic model for 3d object classes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1247–1254. IEEE, 2009.
-  E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
-  E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014.