The recent unprecedented advances in computer vision and machine learning are mainly due to: 1) deep (convolutional) neural architectures, and 2) the existence of abundant labeled data. Deep convolutional neural networks (CNNs) [15, 9, 11] trained on large numbers of labeled images (tens of thousands to millions) provide powerful image representations that can be used for a wide variety of tasks including recognition, detection, and segmentation. On the other hand, obtaining abundant annotated data remains a cumbersome and expensive process in the majority of applications. Hence, there is a need for transferring the learned knowledge from a source domain with abundant labeled data to a target domain where data is unlabeled or sparsely labeled. The major challenge for such knowledge transfer is a phenomenon known as domain shift, which refers to the different distribution of data in the target domain compared to the source domain.
To further motivate the problem, consider the emerging application of autonomous driving, where a semantic segmentation network must be trained to detect roads, cars, pedestrians, etc. Training such segmentation networks requires semantic, instance-wise, dense pixel annotations for each scene, which are excruciatingly expensive and time consuming to acquire. To avoid human annotation, a large body of work focuses on designing photo-realistic simulated scenarios in which the ground truth annotations are readily available. The SYNTHIA, Virtual KITTI, and GTA5 datasets are examples of such simulations, which include a large number of synthetically generated driving scenes together with ground truth pixel-level semantic annotations. Training a CNN on such synthetic data and applying it to real-world images (e.g. from a dashboard-mounted camera), such as the Cityscapes dataset, will give very poor performance due to the large differences in image characteristics, which give rise to the domain shift problem. Figure 1 demonstrates this scenario, where a network is trained for semantic segmentation on the GTA5 dataset, which is synthetic, and is tested on the Cityscapes dataset. It can be seen that with no adaptation the network struggles with segmentation (Figure 1, C), while our proposed framework ameliorates the domain shift problem and provides a more accurate semantic segmentation.
Domain adaptation techniques aim to address the domain shift problem by finding a mapping from the source data distribution to the target distribution. Alternatively, both domains could be mapped into a shared domain where the distributions are aligned. Generally, such mappings are not unique, and there exist many mappings that align the source and target distributions. Therefore, various constraints are needed to narrow down the space of feasible mappings. Recent domain adaptation techniques parameterize and learn these mappings via deep neural networks [27, 18, 28, 20].

In this paper, we propose a unifying, generic, and systematic framework for unsupervised domain adaptation, which is broadly applicable to many image understanding and sensing tasks where training labels are not available in the target domain. We further demonstrate that many existing methods for domain adaptation arise as special cases of our framework.
While there are significant differences between the recently developed domain adaptation methods, a common and unifying theme can be observed among them. We identify three main attributes needed to achieve successful unsupervised domain adaptation: 1) domain agnostic feature extraction, 2) domain specific reconstruction, and 3) cycle consistency. The first requires that the distributions of features extracted from both domains be indistinguishable (as judged by an adversarial discriminator network). This idea was utilized in many prior methods [10, 3, 4], but alone does not provide a strong enough constraint for domain adaptation knowledge transfer, as there exist many mappings that could match the source and target distributions in the shared space. The second requires that the features can be decoded back to the source and target domains. This idea was used by Ghifary et al. for unsupervised domain adaptation. Finally, cycle consistency is needed for unpaired source and target domains to ensure that the mappings are learned correctly and are well-behaved, in the sense that they do not collapse the distributions into single modes. Figure 2 provides a high-level overview of our framework.
The interplay between the ‘domain agnostic feature extraction’, ‘domain specific reconstruction with cycle consistency’, and ‘label prediction from agnostic features’ enables our framework to simultaneously learn from the source domain and adapt to the target domain. By combining these components into a single unified framework, we build a systematic approach to domain knowledge transfer that provides an elegant theoretical explanation as well as improved experimental results. We demonstrate the superior performance of our proposed framework for segmentation adaptation from synthetic images to real-world images (see Figure 1 for an example), as well as for classifier adaptation on three digit datasets. Furthermore, we show that many of the state-of-the-art (SOA) methods can be viewed as special cases of our proposed framework.
2 Related Work
There has been a plethora of recent work in the field of visual domain adaptation addressing the domain shift problem, otherwise known as the dataset bias problem. The majority of recent work uses deep convolutional architectures to map the source and target domains into a shared space where the domains are aligned [29, 27, 28, 10]. These methods differ widely in their architectures as well as the choices of loss functions used for training. Some have used the Maximum Mean Discrepancy (MMD) between the distributions of the source and target domains in the shared space, while others have used correlation maximization to align the second-order statistics of the domains. Another popular and effective choice is maximizing the confusion rate of an adversarial network that is required to distinguish the source and target domains in the shared space [27, 10, 3, 4, 5]. Other approaches include the work by Sener et al., where the domain transfer is formulated in a transductive setting, and the Residual Transfer Learning (RTL) approach, where the authors assume that the source and target classifiers differ only by a residual function and learn these residual functions.
Our work is primarily motivated by the work of Hoffman et al., Isola et al., Zhu et al., and Ghifary et al. Hoffman et al. utilized fully convolutional networks with domain adversarial training to obtain domain agnostic features (i.e. a shared space) for the source and target domains, while constraining the shared space to be discriminative for the source domain. Hence, by learning the mappings from the source and target domains to the shared space, and the mapping from the shared space to annotations (see Figure 2), their approach effectively enables the learned classifier to be applicable to both domains. The Deep Reconstruction Classification Network (DRCN) of Ghifary et al. utilizes a similar approach, but with the constraint that the embedding must be decodable, and learns a mapping from the embedding space to the target domain (see Figure 2). The image-to-image translation work by Isola et al. maps the source domain to the target domain by adversarially learning an encoder and a decoder and composing them. In their framework, the target and source images were assumed to be paired, in the sense that for each source image there exists a known corresponding target image. This assumption was lifted in the follow-up work of Zhu et al., where cycle consistency was used to learn the mappings based on unpaired source and target images. While the approaches of Isola et al. and Zhu et al. do not address the domain adaptation problem, they provide a baseline for learning high quality mappings from one visual domain to another.
The patterns that collectively emerge from the mentioned papers [29, 10, 13, 5, 31] are: a) the shared space must be a discriminative embedding for the source domain; b) the embedding must be domain agnostic, hence maximizing the similarity between the distributions of embedded source and target images; c) the information preserved in the embedding must be sufficient for reconstructing domain specific images; d) adversarial learning, as opposed to classic losses, can significantly enhance the quality of the learned mappings; e) cycle consistency is required to reduce the space of possible mappings and ensure their quality when learning the mappings from unpaired images in the source and target domains. Our proposed method for unsupervised domain adaptation unifies the above-mentioned pieces into a generic framework that simultaneously solves the domain adaptation and image-to-image translation problems.
There have been other recent efforts toward a unifying and general framework for deep domain adaptation. The Adversarial Discriminative Domain Adaptation (ADDA) work by Tzeng et al. is an instance of such frameworks. Tzeng et al. identify three design choices for a deep domain adaptation system, namely: a) whether to use a generative or discriminative base model, b) whether to share mapping parameters between the source and target encoders, and c) the choice of adversarial training loss. They observed that modeling image distributions might not be strictly necessary if the embedding is domain agnostic (i.e. domain invariant).
3 Method
Consider training images $x_i \in X$ and their corresponding annotations/labels $c_i$ from the source domain $X$. Note that $c_i$ may be image level, as in classification, or pixel level, as in semantic segmentation. Also consider training images $y_j \in Y$ in the target domain $Y$, where we do not have corresponding annotations for these images. Our goal is then to learn a classifier that maps the target images, $y_j$, to labels $c_j$. We note that the framework is readily extensible to a semi-supervised or few-shot learning scenario where we have annotations for a few images in the target domain. Given that the target domain lacks labels, the general approach is to learn a classifier on the source domain and adapt it in a way that its domain distribution matches that of the target domain.
The overarching idea here is to find a joint latent space, $Z$, for the source and target domains, $X$ and $Y$, where the representations are domain agnostic. To clarify this point, consider the scenario in which $X$ is the domain of driving scenes/images on a sunny day and $Y$ is the domain of driving scenes on a rainy day. While ‘sunny’ and ‘rainy’ are characteristics of the source and target domains, they are truly nuisance variations with respect to the annotation/classification task (e.g. semantic segmentation of the road), as they should not affect the annotations. Treating such characteristics as structured noise, we would like to find a latent space, $Z$, that is invariant to such variations. In other words, $Z$ should not contain domain specific characteristics, hence it should be domain agnostic. In what follows we describe the process that leads to finding such a domain agnostic latent space.
Let the mappings from the source and target domains to the latent space be denoted $f_X : X \to Z$ and $f_Y : Y \to Z$, respectively (see Figure 2). In our framework these mappings are parameterized by deep convolutional neural networks (CNNs). Note that the members of the latent space $Z$ are high dimensional vectors in the case of image level tasks, or feature maps in the case of pixel level tasks. Also, let $h : Z \to C$ be the classifier that maps the latent space to labels/annotations (i.e. the classifier module in Figure 3). Given that the annotations for the source domain are known, one can define a supervised loss function to enforce $h(f_X(x_i)) = c_i$:

$$\mathcal{L}_{c} = \mathbb{E}_{(x,c) \sim X}\big[\ell_{c}\big(h(f_X(x)),\, c\big)\big]$$
where $\ell_{c}$ is an appropriate loss (e.g. cross entropy for classification and segmentation). Minimizing the above loss function leads to the standard approach of supervised learning, which does not concern domain adaptation. While this approach yields a method that performs well on images from the source domain, $X$, it will more often than not perform poorly on images from the target domain, $Y$. The reason is that the latent space is biased to the distribution of the structured noise (‘sunny’) in domain $X$, and the structured noise in domain $Y$ (‘rainy’) confuses the classifier $h$. To avoid such confusion we require the latent space, $Z$, to be domain agnostic, so that it is not sensitive to the domain specific structured noise. To achieve such a latent space we systematically introduce a variety of auxiliary networks and losses to help regularize the latent space and consequently obtain a robust classifier. The auxiliary networks and loss pathways are depicted in Figure 3. In what follows we describe the individual components of the regularization losses.
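As a concrete illustration of the supervised term, here is a minimal numpy sketch of a cross-entropy loss on source-domain class scores. This is not the paper's implementation; the function names are our own, and in practice the logits would come from the classifier applied to encoded source images.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean cross-entropy between class scores and integer labels --
    # the form of supervised source-domain loss described above.
    p = softmax(logits)
    n = labels.shape[0]
    return float(-np.log(p[np.arange(n), labels] + 1e-12).mean())
```

For pixel level tasks the same loss is simply averaged over all pixels of the label map.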
First of all, $Z$ is required to preserve the core information of the target and source images and discard only the structured noise. To impose this constraint on the latent space, we first define decoders $g_X : Z \to X$ and $g_Y : Z \to Y$ that take features in the latent space to the source and target domains, respectively. We assume that if $Z$ retains the crucial/core information of the domains and discards only the structured noise, then the decoders should be able to add the structured noise back and reconstruct each image from its representation in the latent feature space. In other words, we require $g_X \circ f_X$ and $g_Y \circ f_Y$ to be close to identity functions/maps. This constraint leads to the following loss function:

$$\mathcal{L}_{id} = \mathbb{E}_{x \sim X}\big[\ell_{id}\big(g_X(f_X(x)),\, x\big)\big] + \mathbb{E}_{y \sim Y}\big[\ell_{id}\big(g_Y(f_Y(y)),\, y\big)\big]$$
where $\ell_{id}$ is a pixel-wise image loss such as the $\ell_1$ norm.
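The reconstruction constraint can be sketched as follows. The callables `f_x`, `g_x`, `f_y`, `g_y` are illustrative placeholders for the encoder/decoder networks, not the paper's actual models:

```python
import numpy as np

def l1_reconstruction(decoded, original):
    # Pixel-wise L1 distance; zero exactly when decode(encode(x)) == x.
    return float(np.abs(decoded - original).mean())

def identity_loss(x_batch, y_batch, f_x, g_x, f_y, g_y):
    # Each image, encoded to the latent space and then decoded back to
    # its own domain, should reconstruct itself (structured noise is
    # added back by the domain specific decoder).
    return (l1_reconstruction(g_x(f_x(x_batch)), x_batch)
            + l1_reconstruction(g_y(f_y(y_batch)), y_batch))
```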
We would like the latent space to be domain agnostic. This means that the feature representations of the source and target domains should not contain domain specific information. To achieve this, we use an adversarial setting in which a discriminator $d_Z$ tries to classify whether a feature in the latent space was generated from domain $X$ or $Y$, using binary domain labels (1 for domain $X$ and 0 for domain $Y$). The loss function can then be defined via the certainty of the discriminator (i.e. domain agnosticism is equivalent to fooling the discriminator), and therefore we can formulate it as:

$$\mathcal{L}_{z} = \mathbb{E}_{x \sim X}\big[\ell_{a}\big(d_Z(f_X(x)),\, 1\big)\big] + \mathbb{E}_{y \sim Y}\big[\ell_{a}\big(d_Z(f_Y(y)),\, 0\big)\big]$$
where $\ell_{a}$ is an appropriate adversarial loss (the cross entropy loss in traditional GANs and the mean square error in the least squares GAN). The discriminator $d_Z$ is trained to minimize this loss, while the encoders $f_X$ and $f_Y$ are trained to maximize it (i.e. to fool the discriminator).
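A minimal numpy sketch of the least-squares variant of this adversarial objective. The function names are ours; in practice the scores would be the outputs of the feature discriminator on encoded source ('real') and target ('fake') batches:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Discriminator objective: push scores for source features toward 1
    # and scores for target features toward 0.
    return float(((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean())

def lsgan_g_loss(d_fake):
    # Encoder ('generator') objective: make target features score like
    # source features, i.e. fool the discriminator.
    return float(((d_fake - 1.0) ** 2).mean())
```

Both losses are zero exactly when their respective player has achieved its goal, which is why the two networks are updated in alternation.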
To further ensure that the mappings $f_X$, $f_Y$, $g_X$, and $g_Y$ are consistent, we define translation adversarial losses. An image from the target (source) domain is first encoded to the latent space and then decoded to the source (target) domain to generate a ‘fake’ (translated) image. Next, we define discriminators $d_X$ and $d_Y$ to identify whether an image is ‘fake’ (generated from the other domain) or ‘real’ (belonging to the actual domain). The translation loss function can be written as:

$$\mathcal{L}_{tr} = \mathbb{E}_{y \sim Y}\big[\ell_{a}\big(d_X(g_X(f_Y(y))),\, 0\big)\big] + \mathbb{E}_{x \sim X}\big[\ell_{a}\big(d_X(x),\, 1\big)\big] + \mathbb{E}_{x \sim X}\big[\ell_{a}\big(d_Y(g_Y(f_X(x))),\, 0\big)\big] + \mathbb{E}_{y \sim Y}\big[\ell_{a}\big(d_Y(y),\, 1\big)\big]$$
Given that there are no correspondences between the images in the source and target domains, we need to ensure that semantically similar images in both domains are projected into close vicinity of one another in the latent space. To ensure this, we define the cycle consistency losses, where the ‘fake’ images generated for the translation loss, $g_X(f_Y(y))$ or $g_Y(f_X(x))$, are encoded back to the latent space and then decoded back to their original domain. The entire cycle should be equivalent to an identity mapping. We can formulate this loss as follows:

$$\mathcal{L}_{cyc} = \mathbb{E}_{x \sim X}\big[\ell_{id}\big(g_X(f_Y(g_Y(f_X(x)))),\, x\big)\big] + \mathbb{E}_{y \sim Y}\big[\ell_{id}\big(g_Y(f_X(g_X(f_Y(y)))),\, y\big)\big]$$
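The full cycle can be sketched as below; as before, the four callables are illustrative placeholders for the encoder/decoder networks:

```python
import numpy as np

def cycle_loss(x_batch, y_batch, f_x, g_x, f_y, g_y):
    # X -> Z -> Y -> Z -> X (and the symmetric Y cycle) should return
    # each image to itself when the four mappings are consistent.
    x_cycled = g_x(f_y(g_y(f_x(x_batch))))
    y_cycled = g_y(f_x(g_x(f_y(y_batch))))
    return float(np.abs(x_cycled - x_batch).mean()
                 + np.abs(y_cycled - y_batch).mean())
```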
To further constrain the translations to maintain the same semantics, and to allow the target encoder to be trained with supervision on target domain ‘like’ images, we also define a classification loss between the source-to-target translations and the original source labels:

$$\mathcal{L}_{trc} = \mathbb{E}_{(x,c) \sim X}\big[\ell_{c}\big(h(f_Y(g_Y(f_X(x)))),\, c\big)\big]$$
Finally, by combining these individual losses we define the general loss as

$$\mathcal{L} = \lambda_{c}\mathcal{L}_{c} + \lambda_{id}\mathcal{L}_{id} + \lambda_{z}\mathcal{L}_{z} + \lambda_{tr}\mathcal{L}_{tr} + \lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{trc}\mathcal{L}_{trc}$$

where the $\lambda$s are mixing coefficients that weight the individual terms.
The above general loss function is then optimized via the Stochastic Gradient Descent (SGD) method with an adaptive learning rate, in an end-to-end manner. Figure 3 shows the pathways for each loss function defined above. The discriminator networks $d_Z$, $d_X$, and $d_Y$ are trained in an alternating optimization alongside the encoders and decoders.
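Schematically, combining the terms reduces to a weighted sum; the dictionary keys in this sketch are hypothetical names for the loss terms, not identifiers from the paper:

```python
def total_loss(losses, weights):
    # Weighted sum of the individual loss terms. Setting a weight to
    # zero disables that term, which is how the special cases of the
    # framework discussed in the text are recovered.
    return sum(weights[name] * value for name, value in losses.items())
```

For example, `total_loss({'cls': 1.0, 'recon': 2.0}, {'cls': 0.5, 'recon': 0.0})` keeps only the classification term.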
To further constrain the features that are learned, we share the weights of the encoders, as well as the weights of the first few layers of the decoders. To stabilize the image domain discriminators we train them using the Improved Wasserstein method. We found that the Wasserstein GAN is not well suited for discriminating in the $Z$ domain, since both the ‘real’ and ‘fake’ distributions are changing. As such, we resort to using the least squares GAN for the $Z$ domain.
Here we show how various previous methods for domain adaptation are special cases of our method: each is recovered by setting a subset of the mixing coefficients $\lambda$ to zero (in one case after first training only on the source domain, then freezing the source encoder and untying the target encoder). Table 1 summarizes these results.
4 Experiments
4.1 MNIST, USPS, and SVHN digits datasets
First, we demonstrate our method on domain adaptation between three digit classification datasets, namely the MNIST, USPS, and Street View House Numbers (SVHN) datasets. The MNIST dataset consists of 60,000 training and 10,000 test binary images of handwritten digits of size 28x28. The USPS dataset contains 7,291 training and 2,007 test grayscale images of handwritten digits of size 16x16. The SVHN dataset, which is significantly more challenging, contains 73,257 training and 26,032 test RGB images of digits of size 32x32. We performed the same experiments as in [4, 27, 17, 28], where we treated one of the digit datasets as a labeled source domain and another dataset as the unlabeled target domain. We trained our framework for adaptation from MNIST to USPS, USPS to MNIST, and SVHN to MNIST. Figure 4 shows examples of MNIST to SVHN input and translated images.
For a fair comparison with previous methods, our feature extractor network (the source and target encoders) is a modified version of LeNet. Our decoders consist of three transposed convolutional layers with batch normalization and leaky ReLU nonlinearities. Our image discriminators consist of three convolutional layers, and our feature discriminator consists of three fully connected layers. We also experimented with a deeper DenseNet architecture for the encoder, which improved performance for all methods (in fact, DenseNet without any domain adaptation beat almost all prior methods that include domain adaptation). We compare our method to five prior works (see Table 2). Our method consistently outperforms prior work, and when combined with the DenseNet architecture, significantly outperforms the prior SOA.
Figures 4A, B, and C show t-SNE embeddings of the features extracted from the source and target domains when trained without adaptation, with the image-to-image loss only, and with our full model, respectively. It can be seen that without adaptation, the source and target images are clustered in the feature space, but the distributions do not overlap, which is why classification fails on the target domain. Image-to-image translation alone is not enough to force the distributions to overlap, as the networks learn to map the source and target distributions to different areas of the feature space. Our full model includes a feature distribution adversarial loss, forcing the source and target distributions to overlap, while image translation makes the features richer, yielding the best adaptation results.
| Method | MNIST→USPS | USPS→MNIST | SVHN→MNIST |
| --- | --- | --- | --- |
| Gradient reversal | 77.1 | 73.0 | 73.9 |
| Domain confusion | 79.1 | 66.5 | 68.1 |
| I2I Adapt (Ours) | 92.1 | 87.2 | 80.3 |
| Source only - DenseNet | 95.0 | 88.1 | 80.1 |
| I2I Adapt - DenseNet (Ours) | 95.1 | 92.2 | 92.1 |
4.2 Office dataset
The Office dataset consists of images from 31 classes of objects in three domains: Amazon (A), Webcam (W), and DSLR (D), with 2,817, 795, and 498 images, respectively (see Figure 5 for examples). Our method performs best in four of the six transfer tasks (see Table 3). The two tasks where ours is not best both involve bridging a large domain shift with very little training data in the source domain (795 and 498 images, respectively).
| Method | A→W | W→A | A→D | D→A | W→D | D→W |
| --- | --- | --- | --- | --- | --- | --- |
| Domain confusion | 61.8 | 52.2 | 64.4 | 21.1 | 98.5 | 95.0 |
| Transferable Features | 68.5 | 53.1 | 67.0 | 54.0 | 99.0 | 96.0 |
| Gradient reversal | 72.6 | 52.7 | 67.1 | 54.5 | 99.2 | 96.4 |
| I2I Adapt (Ours) | 75.3 | 52.1 | 71.1 | 50.1 | 99.6 | 96.5 |
4.3 GTA5 to Cityscapes
We also demonstrate our method for domain adaptation between the synthetic (photorealistic) driving dataset GTA5 and the real dataset Cityscapes. The GTA5 dataset consists of 24,966 densely labeled RGB images (video frames) of size 1914x1052, containing 19 classes that are compatible with the Cityscapes dataset (see Table 4). The Cityscapes dataset contains 5,000 densely labeled RGB images of size 2048x1024 from 27 different cities. Here the task is pixel level semantic segmentation. Following the experimental setup of prior work, we use the GTA5 images as the labeled source dataset and the Cityscapes images as the unlabeled target domain.
We point out that the convolutional networks in our model are interchangeable. We include results using a dilated ResNet encoder for fair comparison with previous work, but we found in our experiments that the best performance was achieved by using our new Dilated Densely-Connected Networks (i.e. Dilated DenseNets) for the encoders, which are derived by replacing strided convolutions with dilated convolutions in the DenseNet architecture. DenseNets have previously been used for image segmentation, but their encoder/decoder structure is more cumbersome than what we propose. We use a series of transposed convolutional layers for the decoders. For the discriminators, we follow the work by Zhu et al. and use a few convolutional layers.
Due to computational and memory constraints, we downsample all images by a factor of two prior to feeding them into the networks. Output segmentations are bilinearly upsampled to the original resolution. We train our network on 256x256 patches of the downsampled images, but test on the full images convolutionally. Furthermore, we did not include the cycle consistency constraint, as that would require an additional pass through the encoder and decoder for both source and target images. Although cycle consistency regularizes the mappings further, we found that the identity and translation losses alone are enough in this case, due to our shared latent space.
Our encoder architecture (dilated ResNet/DenseNet) is optimized for segmentation, and thus it is not surprising that our translations (see Figure 6) are not quite as good as those reported in prior image-to-image translation work. Qualitatively, it can be seen from Figure 6 that our segmentations are much cleaner compared to no adaptation. Quantitatively (see Table 4), our method outperforms the previous method on all but 3 categories, and is better overall. Furthermore, we show that using Dilated DenseNets in our framework further increases the SOA.
| Method | road | sidewalk | building | wall | fence | pole | t. light | t. sign | veget. | terrain | sky | person | rider | car | truck | bus | train | mbike | bike | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FCNs in the Wild | 67.4 | 29.2 | 64.9 | 15.6 | 8.4 | 12.4 | 9.8 | 2.7 | 74.1 | 12.8 | 66.8 | 38.1 | 2.3 | 63.0 | 9.4 | 5.1 | 0.0 | 3.5 | 0.0 | 27.1 |
| I2I Adapt (Ours) | 85.3 | 38.0 | 71.3 | 18.6 | 16.0 | 18.7 | 12.0 | 4.5 | 72.0 | 43.4 | 63.7 | 43.1 | 3.3 | 76.7 | 14.4 | 12.8 | 0.3 | 9.8 | 0.6 | 31.8 |
| Source only - DenseNet | 67.3 | 23.1 | 69.4 | 13.9 | 14.4 | 21.6 | 19.2 | 12.4 | 78.7 | 24.5 | 74.8 | 49.3 | 3.7 | 54.1 | 8.7 | 5.3 | 2.6 | 6.2 | 1.9 | 29.0 |
| I2I Adapt - DenseNet (Ours) | 85.8 | 37.5 | 80.2 | 23.3 | 16.1 | 23.0 | 14.5 | 9.8 | 79.2 | 36.5 | 76.4 | 53.4 | 7.4 | 82.8 | 19.1 | 15.7 | 2.8 | 13.4 | 1.7 | 35.7 |
5 Implementation Details
What follows are further details about our network architectures, hyperparameters, and training procedures. We also plan to release our code with the conference version of the paper.
5.1 MNIST, USPS, and SVHN digits datasets
All images from MNIST and USPS were bilinearly upsampled to 32x32. Images from SVHN were converted to grayscale. All images were normalized to the range [-1, 1].
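A minimal numpy sketch of this preprocessing, assuming uint8 inputs; the bilinear resizing step is omitted since it would require an image library, and the function name is ours:

```python
import numpy as np

def preprocess_digits(images):
    # images: uint8 array of shape (N, H, W) for grayscale digits, or
    # (N, H, W, 3) for RGB digits (e.g. SVHN).
    x = images.astype(np.float64)
    if x.ndim == 4:
        # RGB -> luminance with standard ITU-R BT.601 weights.
        x = x @ np.array([0.299, 0.587, 0.114])
    # Scale from [0, 255] to [-1, 1], matching the Tanh output range
    # of the decoders.
    return x / 127.5 - 1.0
```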
Our modified LeNet encoder consists of 4 stride-2 convolutional layers with 4x4 filters and 64, 64, 128, and 128 features, respectively. Each convolution is followed by batch normalization and a ReLU nonlinearity. Batch normalization was helpful for the image-to-image translation training. All weights are shared between the source and target encoders. Our DenseNet encoder follows the original architecture with the final fully connected layer removed.
Our decoder consists of 4 stride 2 transposed convolutional layers with 4x4 filters and 512, 256, 128, 1 features respectively. Each convolution is followed by batch normalization and a ReLU nonlinearity except the last layer which only has a Tanh nonlinearity. The weights of the first two layers are shared between the source and target decoders.
The feature discriminator consists of 3 linear layers with 500, 500, and 1 features, each followed by a leaky ReLU nonlinearity. The feature discriminator is trained with the least squares GAN loss. This loss is only backpropagated to the generator for target images (we want the encoder to learn to map the target images to the same distribution as the source images, not vice versa).
The image discriminators consist of 4 stride-2 convolutional layers with 4x4 filters and 64, 128, 256, and 1 features, respectively. Each convolution is followed by instance normalization and a leaky ReLU nonlinearity. The image discriminators are trained with the Improved Wasserstein loss with a gradient penalty.
For our hyperparameters we used: , , , , , . The networks are trained using the ADAM optimizer with learning rate and betas and .
The translation classification loss does not help with these simple digit datasets, because the decoders can easily learn a permutation of the digits (for example, a 2 may be translated to an 8 and then translated back to a 2).
5.2 Office dataset
Images are downsampled to 256x256 and then a random crop of size 224x224 is extracted.
For our encoder we use a ResNet34 pretrained on ImageNet. We do not use any dilation and thus have an output stride of 32. The final classification layer is applied after global average pooling.
Our decoders consist of five 4x4 stride-2 transposed convolutional layers with feature dimensions 512, 256, 128, 64, and 3. Each convolution is followed by batch normalization and a ReLU nonlinearity, except the last layer, which only has a Tanh nonlinearity. The weights of the first two layers are shared between the source and target decoders. We use the same image discriminator as in GTA5 to Cityscapes. Here we found that using the least squares GAN loss produced better results.
The feature discriminator consists of 3 1x1 convolution layers with 500, 500, and 1 features, each followed by a leaky ReLU nonlinearity.
Our hyperparameters were: , , , , , . The networks are trained using the ADAM optimizer with learning rate and betas and . However, the pretrained encoder is trained with a learning rate of , to keep the weights closer to their good initialization.
5.3 GTA5 to Cityscapes
During training we use 512x512 crops, which are downsampled by a factor of two prior to feeding them into the nets. Segmentation results are bilinearly upsampled to the full resolution. At test time we compute segmentations of the entire image convolutionally.
For our encoders we use a dilated ResNet34 and a dilated DenseNet121. We dilate the final two layers (blocks) so that the networks have an output stride of 8. We initialize the weights using a pretrained ImageNet classifier. Following prior work on dilated residual networks, we also add a dilation-2 followed by a dilation-1 convolution layer to the end of the network to remove checkerboard artifacts (their weights are randomly initialized).
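The effect of replacing stride with dilation can be illustrated with a toy 1-D convolution (a numpy sketch, not the actual network code):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    # 'Valid' 1-D correlation with dilated taps: gaps of (dilation - 1)
    # samples between kernel elements enlarge the receptive field
    # without reducing the output resolution -- the reason for swapping
    # strided convolutions for dilated ones in the encoder.
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field size
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

With `dilation=1` this is an ordinary valid convolution; with `dilation=2` each tap skips one sample, doubling the receptive field at the same output density.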
Our decoders consist of a 3x3 stride-1 convolutional layer followed by three 4x4 stride-2 transposed convolutional layers with feature dimensions 512, 256, 128, and 3. Each convolution is followed by batch normalization and a ReLU nonlinearity, except the last layer, which only has a Tanh nonlinearity. The weights of the first two layers are shared between the source and target decoders.
The image discriminators consist of 4 stride-2 convolutional layers with 4x4 filters and 64, 128, 256, and 1 features, respectively. Each convolution is followed by instance normalization and a leaky ReLU nonlinearity. The image discriminators are trained with the Improved Wasserstein loss with a gradient penalty. We did not use the feature discriminator for this experiment.
Although cycle consistency provides further regularization, it is computationally expensive for large images. We found that the identity and translation losses were enough to constrain the feature space.
Our hyperparameters were: , , , , , . The networks are trained using the ADAM optimizer with learning rate and betas and .
The translated classification loss is only backpropagated through the second encoding step. This prevents the encoder and decoder from cheating by hiding information in the translated images to help the classifier.
6 Conclusion
We have proposed a general framework for unsupervised domain adaptation which encompasses many recent works as special cases. Our proposed method simultaneously achieves image to image translation, source discrimination, and domain adaptation.
Our implementation outperforms state of the art on adaptation for digit classification and semantic segmentation of driving scenes. When combined with the DenseNet architecture our method significantly outperforms the current state of the art.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used for this research.
References
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  A. Gaidon, Q. Wang, Y. Cabon, and E. Vig. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4340–4349, 2016.
-  Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189, 2015.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
-  M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pages 597–613. Springer, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  A. Gretton, A. J. Smola, J. Huang, M. Schmittfull, K. M. Borgwardt, and B. Schölkopf. Covariate shift by kernel mean matching. 2009.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  J. Hoffman, D. Wang, F. Yu, and T. Darrell. Fcns in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649, 2016.
-  G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
-  J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550–554, 1994.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
-  S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pages 1175–1183. IEEE, 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, pages 469–477, 2016.
-  M. Long, Y. Cao, J. Wang, and M. Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105, 2015.
-  M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems, pages 136–144, 2016.
-  Z. Luo, Y. Zou, J. Hoffman, and L. Fei-Fei. Label efficient learning of transferable representations across domains and tasks. In Conference on Neural Information Processing Systems (NIPS), 2017.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Multi-class generative adversarial networks with the l2 loss function. arXiv preprint arXiv:1611.04076, 2016.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 5, 2011.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102–118. Springer, 2016.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. Lopez. The SYNTHIA Dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016.
-  K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. Computer Vision–ECCV 2010, pages 213–226, 2010.
-  O. Sener, H. O. Song, A. Saxena, and S. Savarese. Learning transferrable representations for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, pages 2110–2118, 2016.
-  E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, pages 4068–4076, 2015.
-  E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. arXiv preprint arXiv:1702.05464, 2017.
-  E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
-  F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. arXiv preprint arXiv:1705.09914, 2017.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.