Deep Visual Domain Adaptation

12/28/2020 ∙ by Gabriela Csurka, et al. ∙ NAVER LABS Corp.

Domain adaptation (DA) aims at improving the performance of a model on target domains by transferring the knowledge contained in different but related source domains. With recent advances in deep learning models, which are extremely data hungry, the interest in visual DA has significantly increased in the last decade and the amount of related work in the field has exploded. The aim of this paper, therefore, is to give a comprehensive overview of deep domain adaptation methods for computer vision applications. First, we detail and compare different possible ways of exploiting deep architectures for domain adaptation. Then, we propose an overview of recent trends in deep visual DA. Finally, we mention a few improvement strategies, orthogonal to these methods, that can be applied to these models. While we mainly focus on image classification, we give pointers to papers that extend these ideas to other applications such as semantic segmentation, object detection, person re-identification, and others.


I Introduction

While recent advances in deep learning have yielded a significant boost in performance on most computer vision tasks, this success depends heavily on the availability of a large amount of well-annotated training data. As the cost of acquiring data labels remains high, domain adaptation has been proposed amongst the alternative solutions; the main idea is to exploit unlabeled data from the domain of interest together with annotated data from a different yet related domain. Because learning in the new domain may suffer from the distribution mismatch between the two domains, it is necessary to adapt the model learned on the labeled source to the actual target domain, as pictured in Fig. 1.

With the recent progress on deep learning, a significant performance boost over the previous state of the art of visual categorization systems was observed. In parallel, it was shown that features extracted from the activation layers of these deep networks can be re-purposed for novel tasks or domains [1], even when the new task/domain differs from the task/domain originally used to train the model. This is because deep neural networks learn more abstract and more robust representations: they encode category-level information and remove, to a certain measure, the domain bias [2, 3]. Hence, these representations are more transferable to new tasks/domains because they disentangle the factors of variation in the underlying data samples while grouping them hierarchically according to their relatedness with invariant factors.

These image representations, generally obtained by training the model in a fully supervised manner on large-scale annotated datasets, in particular ImageNet [4], can therefore be used directly to build stronger baselines for domain adaptation methods. Indeed, simply training a linear classifier on such representations extracted from activation layers [1], with no further adaptation to the target set, yields in general significantly better results than most shallow DA models trained with the previously used handcrafted representations, typically bag of visual words (BOV) [5]. In Fig. 2 we illustrate this using the AlexNet architecture [6]; representations obtained with deeper models [7, 8, 9] provide even better performance and generalization capacity [10].
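This "no further adaptation" baseline amounts to a linear probe on pre-extracted activations. Below is a minimal sketch with scikit-learn, where synthetic Gaussian features stand in for the deep features (in practice they would come from an activation layer of a pre-trained backbone such as AlexNet's fc7); the dimensions and class separation are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for activation-layer features of two classes;
# real features would be extracted from a pre-trained CNN.
n, d = 200, 256
feats = np.concatenate([
    rng.normal(loc=0.0, size=(n, d)),
    rng.normal(loc=0.5, size=(n, d)),
])
labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

# The baseline: a plain linear classifier trained on source features,
# later applied as-is to target features, with no adaptation step.
clf = LogisticRegression(max_iter=1000).fit(feats, labels)
train_acc = clf.score(feats, labels)
```

The strength of this baseline comes entirely from the features: with representations this separable, even a linear model suffices.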

Fig. 1: Domain adaptation is a machine learning technique in which knowledge from a labeled source domain is leveraged to learn a model for an unlabeled target domain. It is assumed that there is a distribution mismatch between the domains, but that the task (e.g. the set of class labels) is shared between them.

While directly using these models trained on the source already provides relatively good results on the target datasets, especially when the domain shift is moderate, for more challenging problems, e.g. adaptation between photographs and paintings, drawings, clip art or sketches [11, 12, 10], even a classifier trained with such deep features has difficulty handling the domain differences. Alternative solutions that directly address the domain shift therefore remain necessary.

In what follows, we first discuss and compare different strategies for exploiting deep architectures for domain adaptation. Then, we provide an overview of recent trends in deep visual domain adaptation. Finally, we mention a few strategies, orthogonal to the deep DA architecture design, that can be applied to improve these models.

Fig. 2: Left: Nearest neighbor (NN) classification results with AlexNet features [6], without any adaptation, on the Office+Caltech [15] dataset outperform by a large margin classical shallow DA methods using the SURF-BOV features originally provided with these datasets. Right: Amazon (A) and Webcam (W) data from the Office 31 [14] benchmark clustered with SURF-BOV and AlexNet features. We can see that the two domains are much better clustered with deep features than with SURF-BOV.

II Deep learning strategies

There are several ways to exploit deep models to handle the domain mismatch between the source and the target set; they can be grouped into four main categories: 1) shallow methods using deep features, 2) fine-tuned deep architectures, 3) shallow methods using fine-tuned deep features, and 4) deep domain adaptation models.

Shallow DA methods using deep features. We mentioned above that using a pre-trained deep model as a feature extractor to represent the images and training a classifier on the source already provides a strong baseline. However, we can go a step further by incorporating these representations into traditional DA methods such as [15, 16, 18, 17, 19, 20]. As shown in [1, 21, 22, 10], to cite a few examples, using such DA methods with deep features yields further performance improvement on the target data. Nevertheless, it was observed that the contribution of using deep features is much more significant than the contribution of the various DA methods. Indeed, as Fig. 2 illustrates, the gain obtained with any DA method over the BOV baseline is small compared to the gain of deep features over BOV, both for the baseline and for any DA method.
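To make the shallow-DA step concrete, here is a minimal subspace-alignment-style sketch in the spirit of such feature-transformation methods: each domain is projected onto its top-d PCA subspace and the source basis is rotated towards the target one. The function name, dimensionality, and synthetic features are illustrative choices, not taken from any specific paper.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=3):
    """Project source and target onto their top-d PCA subspaces and
    align the source basis to the target basis (subspace-alignment style)."""
    def pca_basis(X, d):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T                      # (n_features, d)
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                            # alignment matrix
    return Xs @ Ps @ M, Xt @ Pt              # comparable d-dim coordinates

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 10))              # stand-in deep features (source)
Xt = rng.normal(loc=1.0, size=(120, 10))     # shifted target features
Zs, Zt = subspace_alignment(Xs, Xt, d=3)
```

A source classifier is then trained on `Zs` and applied to `Zt`; when the two domains coincide, the alignment matrix reduces to the identity.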

Training deep architectures on the source. The second solution is to train or fine-tune a deep network on the source domain and use the model directly to predict the class labels of the target instances. While in this case there is no adaptation to the target, as also illustrated in Fig. 3, we observe better performance not only compared with the baseline (a classifier trained with features from a backbone pre-trained on ImageNet; the two coincide when ImageNet itself is the source), but also compared with the previous strategy (a shallow DA method applied to the corresponding image representations). The explanation is that the deep model, by focusing on high-level semantics, disregards the appearance variation to a certain measure and is therefore able to partially overcome the domain gap. However, if the domain difference between the source and target is large, fine-tuning on the source can also overfit the model to the source [23, 22], and it is therefore important to correctly select the layers to be fine-tuned [24, 10].
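Selecting which layers to fine-tune reduces, in code, to disabling gradients for the frozen part of the backbone. A minimal PyTorch sketch with a toy stand-in model (in practice the backbone would be an ImageNet-pretrained network and the frozen/tuned split would be chosen per the cited best practices):

```python
import torch
import torch.nn as nn

# Toy stand-in backbone; indices mimic early layers / late layers / head.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # "early" layers: kept frozen
    nn.Linear(64, 64), nn.ReLU(),   # "late" layers: fine-tuned
    nn.Linear(64, 10),              # source classifier head
)

# Freeze the early layers so only later ones adapt to the source set.
for p in model[0].parameters():
    p.requires_grad = False

opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.1)

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
frozen_before = model[0].weight.clone()
tuned_before = model[2].weight.clone()

loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

After the step, the frozen weights are untouched while the fine-tuned ones have moved.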

Fig. 3: We compare several strategies on the LandMarkDA dataset [10] using shallow (SDAN) and deep (DDAN) discrepancy-based networks [10] built with GoogleNet [9] as backbone. No adaptation (NA) means that only the classifier layer was trained, in contrast to fine-tuning the model on the source (FT). SDAN is trained with deep features from the ImageNet pre-trained network (SDAN) or from the fine-tuned network (FT+SDAN). We can see that FT+SDAN yields results close to DDAN, which performs the best.
Fig. 4: Left: classical DA methods where the image representations are fixed and the domain alignment and source classifier are learned in this feature space. Right: deep DA architecture where image representations, source classifier and domain alignment are all learned jointly in an end-to-end manner. The parameters of the source and target models can be partially or fully shared.

Shallow methods using fine-tuned deep features. Note that the two strategies mentioned above are orthogonal and can be combined to take advantage of both. This is done by first fine-tuning the model on the source set; the features extracted with this model are then used by the shallow DA method to decrease the discrepancy between the source and target distributions. In addition to further boosting the performance (see Fig. 3), this strategy has the advantages that it does not require tailoring the network architecture for DA and that the fine-tuning on the source can be done in advance, even before seeing the target set.

In Fig. 3 we compare these strategies with a corresponding shallow model (a single-layer perceptron on top of the pre-extracted features) and a deep end-to-end architecture, both using the same discrepancy measure (kernelized MMD [25, 26]) and cross-entropy loss. We can see that using a shallow method with deep features extracted from the fine-tuned model indeed combines the advantages of fine-tuning and domain adaptation, and yields results close to the deep Siamese discriminative network designed for domain adaptation. Similar behaviour was observed when comparing DeepCORAL [27] with CORAL [22] using features extracted from the pre-trained and fine-tuned networks. Note nevertheless that in both cases a relatively simple deep DA method was considered; as will be discussed in the next sections, these deep models can be further improved in various ways.
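For reference, a (biased) empirical estimate of the squared MMD under an RBF kernel can be sketched in a few lines of NumPy; the bandwidth `sigma` and the sample dimensions are illustrative choices, and practical implementations use unbiased estimators and multiple kernels.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y
    under a Gaussian (RBF) kernel with bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 2)),
                rng.normal(size=(200, 2)))             # same distribution
shifted = mmd2_rbf(rng.normal(size=(200, 2)),
                   rng.normal(loc=2.0, size=(200, 2)))  # shifted distribution
```

The estimate is near zero for identically distributed samples and clearly positive under a mean shift, which is what the discrepancy-based networks minimize over features.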

III Deep DA Models

Historical shallow DA methods include data re-weighting, metric learning, subspace representations and distribution matching (see the surveys [28, 29] for more details). As discussed above, these methods assume that the image representations are fixed (handcrafted or pre-extracted from a deep model) and the adaptation model uses these features as input (see the left image in Fig. 4). Amongst the most popular shallow DA approaches, a set of methods focuses on aligning the marginal distributions of the source and the target sets. These methods learn either a linear projection or a more complex feature transformation such that, in the new space, the discrepancy between the domains is significantly decreased. The classifier trained on the labeled source set in the projected space can then, thanks to the domain alignment, be applied directly to the target set.

It is therefore not surprising that amongst the first deep DA models we find the generalization of this pipeline, as illustrated in Fig. 4 (right), where the deep representation is learned jointly with the source classifier and the domain alignment in an end-to-end manner. These first solutions were followed by a large number of different deep DA methods and architectures that can be grouped according to different criteria (see also [30]). In what follows, we recall some of the main trends.

Discriminative models. These models, inspired by classical DA methods, have a Siamese architecture [31] with two streams, one for the source set and one for the target set. The two streams can share their weights entirely, partially or not at all, and in general both branches are initialized by the corresponding backbone (e.g. VGG [7], ResNet [8] or GoogleNet [9]) trained on the source set, most often using the cross-entropy classification loss. The Siamese network is then trained with the same cross-entropy loss, applied only to the source stream, together with a domain alignment loss defined over both source and target features. This loss uses either the last activation layer before the soft-max prediction [32] or can be applied to several activation layers [26].

Fig. 5: The Domain Adaptive Faster R-CNN model [45] aims to adapt a detector trained on the source to a new domain. The domain shift is tackled in an adversarial training manner with GRL [43] layers on two levels, the image level and the instance level. A consistency regularizer is incorporated between these two classifiers to learn a domain-invariant region proposal network (RPN). (Image courtesy of Yuhua Chen.)

The domain alignment can be achieved by minimizing the feature distribution discrepancy or by using an adversarial loss to increase domain confusion. To minimize the distribution discrepancy, the kernelized MMD loss is most often used [32, 26], but amongst the alternative losses proposed we can mention the Central Moment Discrepancy [33], the CORAL loss [27], and the Wasserstein distance [34, 35]. Note that the Wasserstein distance is also used to minimize the global transportation cost in optimal-transport-based DA methods [20, 36, 37]; however, these are asymmetric models that transport the source data towards the target samples instead of projecting both sets into a common latent space.
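Of the discrepancy losses above, CORAL is particularly simple to state: it penalizes the squared Frobenius distance between the source and target feature covariances, normalized by 4d² where d is the feature dimension. A NumPy sketch (synthetic features are stand-ins):

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL: squared Frobenius distance between the source and target
    feature covariance matrices, normalized by 4 d^2."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.sum((Cs - Ct) ** 2) / (4 * d * d))

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 4))
aligned = coral_loss(Xs, Xs)          # identical second-order statistics
shifted = coral_loss(Xs, 3.0 * Xs)    # target covariance scaled by 9
```

In the deep variant (DeepCORAL [27]) the same quantity is computed on activation layers and backpropagated through the network.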

On the other hand, domain confusion can be achieved either with adversarial losses such as the GAN loss [38, 39, 40] and the domain confusion loss [41, 42], or by using a domain classifier with a gradient reversal layer (GRL) [43, 44]. Note that the latter can also be formulated as a min-max loss: a simple binary domain classifier and a GRL layer are integrated into a standard deep architecture, which is left unchanged during the forward pass while the gradient is reversed for the domain branch during backpropagation. This simple but quite powerful solution became extremely popular when DA is applied to problems beyond image classification, in particular object detection [45, 46, 47, 48, 49] (see also Fig. 5), semantic image segmentation [50, 51] and video action recognition [52, 53].
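The GRL itself is a one-liner in autograd frameworks: identity in the forward pass, negated (and optionally scaled) gradient in the backward pass. A PyTorch sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient multiplied by -lambda
    in the backward pass (the gradient reversal layer of [43])."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # no gradient w.r.t. lam

feats = torch.randn(4, 3, requires_grad=True)
reversed_feats = GradReverse.apply(feats, 0.5)
reversed_feats.sum().backward()   # gradients flow back through the GRL
```

In a full model, `reversed_feats` would feed the binary domain classifier, so that the feature extractor is pushed to confuse the domains while the domain classifier tries to separate them.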

Class-conditional distribution alignment. To overcome the drawback that aligning marginal distributions without explicitly taking the task into account might lead to sub-optimal solutions, several approaches were proposed. Amongst them are methods that align class-conditional distributions by jointly minimizing the discrepancy between the marginals of the features and the class predictions [54], or that exploit discriminative information conveyed in the classifier predictions to assist adversarial adaptation [55]. Instead, [56] focuses on the Margin Disparity Discrepancy loss defined on the scoring function and uses adversarial learning to optimize it. [58, 57] propose to minimize the disagreement of task-specific decision boundaries on target examples while aligning features across domains. [59] explicitly models the intra-class and inter-class domain discrepancies, where the intra-class domain discrepancy is minimized to avoid misalignment and the inter-class domain discrepancy is maximized to enhance the model's generalization ability. Assuming access to at least a small set of labeled target samples, [60] proposed to align higher-order scatter statistics between domain-specific and class-specific representations.

Network parameter adaptation. The above methods in general keep the same architecture with the same weights for both the source and target streams, which essentially aims at learning domain-invariant features. In contrast, several approaches were proposed whose goal is to specialize the streams for their respective domains by adapting the parameters of the target stream. As such, [61, 62] explicitly model the domain shift by learning meta-parameters that transform the weights and biases of each layer of the network from the source stream to the target one. Instead, [63] considers a multi-stream architecture with non-shared parameters, where learnable gates at multiple levels allow the network to find, for each domain, a corresponding weighted aggregation of these parallel streams.

Fig. 6: Left: Paired image style transfer [77], where the model takes the content of the source images (first column) and the style of the target image (second column) to generate a target-like source image (third column). Note that these images inherit the label from the source while they look more like the target images. Right: Unpaired image-to-image (I2I) transfer, where the model learns to directly synthesize target-like images (night, rainy, etc.) for a source input and/or source-like images (day, sunny, etc.) for a target image, without the need for an explicit style image.

Domain specific batch normalization. [64, 65, 66] have shown that domain-specific batch normalization is equivalent to projecting the source and target feature distributions onto a reference distribution through feature standardization. Hence this yields a simple yet efficient solution for minimizing the gap between domains.

[67] proposes batch nuclear-norm maximization to simultaneously enhance the discriminability and diversity of the predicted scores. [68] applies domain-specific batch normalization layers in the context of graph-based predictive DA. [69] proposes the DDLSTM architecture for action recognition, which performs cross-contaminated recurrent batch normalisation for both single-layer and multi-layer LSTM architectures.
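A minimal sketch of the idea in PyTorch: one BatchNorm layer per domain, dispatched on a domain index, so each domain is standardized with its own statistics. The class name and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DomainSpecificBN1d(nn.Module):
    """Keeps separate BatchNorm statistics (and affine parameters)
    per domain, in the spirit of domain-specific batch normalization."""
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_domains))

    def forward(self, x, domain):
        return self.bns[domain](x)

bn = DomainSpecificBN1d(16)
src = torch.randn(32, 16) * 3 + 5     # source features: shifted and scaled
tgt = torch.randn(32, 16)             # target features
out_s = bn(src, domain=0)
out_t = bn(tgt, domain=1)
```

Despite the shift between the raw source and target features, both outputs are standardized towards the same reference distribution, while each domain accumulates its own running statistics.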

Encoder–decoder reconstruction. Early deep auto-encoder frameworks proposed for DA in NLP [70] rely on feedforward stacked denoising autoencoders [71], where a multi-layer neural network reconstructs the input data from partial random corruptions with backpropagation. [72] has shown that such a model can be trained efficiently by marginalizing out the noise, which leads to a closed-form solution for the transformations between layers. [73] extended this unsupervised network to a supervised one by jointly learning the domain invariance with the cross-domain classifier while keeping the network solvable in a single forward pass.

In contrast to these models, which act on pre-extracted features, more recent reconstruction models train the encoders/decoders end-to-end. As such, [74] combines a standard CNN for source label prediction with a deconvolutional network [75] for target data reconstruction, alternating between unsupervised and supervised training. [76] integrates both domain-specific and shared encoders, and the model includes a reconstruction loss for a shared decoder that relies on both the domain-specific and the shared representations.

Transfer domain style. In many cases the domain shift is strongly related to changes in image appearance, such as day to night, seasonal changes, or synthetic to real. Even stronger domain shift can be observed when the adaptation is between images that exhibit different artistic styles, such as paintings, cartoons and sketches [11, 12, 10]. To explicitly account for such stylistic domain shifts, a set of papers proposed to use image-to-image (I2I) style transfer methods [77, 78, 79] to generate a set of target-like source images. They have shown that this new set is suitable to train a model for the target set [10, 80]. The main reason this works is that the synthesized images inherit the semantic content of the source, and hence its labels, while their appearance is more similar to the target style (see examples in Figure 6 (Left)). Training a model with this set not only outperforms the model trained with the original source set, but it is also easier to further adapt it to the target set [10].

Another set of methods seeks to learn how to translate between domains without using paired input-output examples, instead assuming there is some underlying appearance shift between the domains (e.g. day to night, sunny to rainy, synthetic to real). For example, [81, 82, 83] train the network to synthesize target-like and/or source-like images (see Figure 6 (Right)), in general by relying on Generative Adversarial Networks (GANs) [38], where an adversarial loss forces the generated fake (target-like) images to be indistinguishable from real (target) photos. A pair of GANs, each corresponding to one of the domains, is considered in [84], where the model maps the input noise vector to a pair of images, one from each distribution, that share the labels. This work was extended in [85] with Variational Auto-Encoders (VAE), where the image reconstruction, image translation, and cycle-reconstruction are jointly optimized. [86] proposes to learn a mapping between the source and target domains using an adversarial GAN loss while imposing a cycle-consistency loss, i.e. the target-like source image mapped back to the source style should match the original source image. [87] combined cycle consistency between input and stylized images with task-specific semantic consistency, and extended the method to semantic segmentation (see Figure 7). Transferring the target image style to generate synthetic source images is at the core of many DA methods for semantic segmentation [88, 89, 90, 91, 92]. GAN-like DA models combined with similarity-preserving constraints have often been used for adapting cross-domain person re-identification models [93, 94, 95].
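Of the losses above, the cycle-consistency term is the easiest to isolate. Below is a toy PyTorch sketch where linear maps stand in for the two translators (a real I2I model uses convolutional encoder-decoders and also carries the adversarial and reconstruction losses, which are omitted here):

```python
import torch
import torch.nn as nn

# Toy stand-ins: g maps source->target style, f maps target->source.
g = nn.Linear(8, 8)
f = nn.Linear(8, 8)
opt = torch.optim.Adam(list(g.parameters()) + list(f.parameters()), lr=0.05)

x_src = torch.randn(64, 8)
losses = []
for _ in range(100):
    # Cycle consistency: translating to the target style and back
    # should recover the original source image, ||F(G(x)) - x||_1.
    cycle = (f(g(x_src)) - x_src).abs().mean()
    opt.zero_grad()
    cycle.backward()
    opt.step()
    losses.append(cycle.item())
```

Minimizing only this term drives the pair of translators towards mutually inverse mappings; the adversarial losses are what additionally push `g`'s outputs towards the target appearance.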

Fig. 7: CyCADA [87] combines pixel-level and feature-level adaptation, enforcing both structural and semantic consistency. The former is ensured by an L1 penalty on the reconstruction error between the source image and the image reconstructed from the target-like source. To ensure the latter, a semantic consistency loss forces the segmentation of the target-like source image to match the source predictions. (Image courtesy of Judy Hoffman.)

IV Orthogonal improvement strategies

In addition to the specifically tailored deep DA architectures, several machine learning strategies can be used with the above models to further improve their performance. While in some cases such methods were used as the main DA solution, we discuss them here separately, as in general these ideas can easily be combined with most of the above-mentioned DA models.

Pseudo-labeling the target data.

One of the most used such techniques is self-supervised learning with pseudo-labeled target data, sometimes referred to as self-labeling or self-training. The underlying assumption is that the labeling is correct for at least a subset of the target samples, and the model can hence rely on them to improve itself. In this way the model acts as a semi-supervised DA model, except that instead of ground-truth target labels, the labels come from a pseudo-labeling process. As not all predictions are correct, pseudo-labeling confidence scores are often computed and used to select which pseudo-labeled samples should be retained for training. Typical approaches to obtain pseudo-labels are: using the softmax predictions [96, 97], using the distance to class prototypes [98, 99], clustering [59, 100], label propagation on the joint source-target nearest-neighbour graph [102, 101], augmented anchors [103], or even a teacher classifier built as an implicit ensemble of source classifiers [104].
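The softmax-confidence variant of this selection can be sketched in a few lines; the threshold value is an illustrative choice.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only target samples whose softmax confidence exceeds the
    threshold; return their indices and pseudo-labels."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = conf >= threshold
    return np.where(keep)[0], labels[keep]

# Toy softmax outputs for three target samples over two classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
idx, pl = select_pseudo_labels(probs, threshold=0.9)
```

Only the first and third samples pass the threshold; the retained pairs are then added to the training set as if they were labeled.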

Self-supervising deep DA models with pseudo-labeled target samples is also a popular strategy for adapting tasks beyond image classification. For example, [100] proposed several strategies to pseudo-label fashion products across datasets and used them to bridge the meta-domain gap between consumer and shop fashion images. [105] proposed a DA framework with online relation regularization for person re-identification that uses target pseudo-labels to improve the target-domain encoder trained via a joint cross-domain labeling system. [106] used predicted labels with high confidence in a bidirectional learning framework for semantic segmentation, where the image translation model and the segmentation adaptation model are learned alternately. [107] combines the self-supervised learning strategy with a framework in which the model is disentangled into a "things" and a "stuff" segmentation network.

Curriculum learning. To minimise the impact of noisy pseudo-labels during alignment, curriculum-learning-based [108] approaches have been explored. The simplest and most used curriculum-learning scenario in DA is to first consider the most confident target samples for the alignment and to include the less confident ones at later stages of training. Pseudo-labeling confidence scores are typically determined using the image classifiers [109, 110], the similarity to neighbours [101, 102] or to class prototypes [111, 98]. After each epoch, [110] increases the training set with new target samples that are both highly confident and domain-uninformative. To improve the confidence of the pseudo-labels, [109] relies on the consensus of image transformations, whereas [96] considers the agreement between multiple classifiers. [112] proposes a weakly-supervised DA framework that alternates between quantifying the transferability of source examples based on their contributions to the target task and progressively integrating examples from easy to hard. [59] considers target clusters initialized by the source cluster centers and assigns target samples to them; at each epoch, target elements that are far from their affiliated cluster are discarded first, then clusters with too few assigned target samples are also discarded.
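The easy-to-hard schedule can be sketched as a confidence-ranked subset that grows over epochs; the linear schedule below is an illustrative choice (real methods use the confidence criteria cited above).

```python
import numpy as np

def curriculum_subset(conf, epoch, num_epochs):
    """Keep the top fraction of target samples ranked by confidence;
    the fraction grows linearly so harder samples enter later."""
    frac = (epoch + 1) / num_epochs
    n_keep = max(1, int(frac * len(conf)))
    return np.argsort(-conf)[:n_keep]

conf = np.array([0.99, 0.55, 0.80, 0.70, 0.95])   # toy confidence scores
early = curriculum_subset(conf, epoch=0, num_epochs=5)  # most confident only
late = curriculum_subset(conf, epoch=4, num_epochs=5)   # the whole target set
```

At the first epoch only the single most confident sample participates in the alignment; by the last epoch every target sample has been included.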

Curriculum-learning-based DA methods that progressively include harder and harder pseudo-labeled target data have also been used for cross-domain person re-identification [113, 114, 115] and image segmentation [116, 117, 118].

Conditional entropy minimization.

Widely used to improve the performance of semi-supervised learning, conditional entropy minimization in the target domain is another way to improve the decision boundaries of the model [64, 96, 120, 55]. The Minimax Entropy loss [121] is a variant where adversarial learning maximizes the conditional entropy of unlabeled target data with respect to the classifier and minimizes it with respect to the feature encoder. Similarly, [122] proposes an adversarial loss for entropy minimization used to bridge the domain gap in synthetic-to-real semantic segmentation adaptation. [109] proposes the Min-Entropy Consensus, which merges both the entropy and the consistency loss into a single unified function.
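The conditional entropy term itself is straightforward; a PyTorch sketch (the small epsilon guards the logarithm and is an implementation choice):

```python
import torch

def entropy_loss(logits):
    """Mean conditional entropy of the model's predictions; minimizing it
    on target data pushes decision boundaries away from dense regions."""
    p = torch.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

confident = entropy_loss(torch.tensor([[10.0, -10.0]]))  # near one-hot
uncertain = entropy_loss(torch.tensor([[0.0, 0.0]]))     # uniform: ln 2
```

Confident predictions incur almost no penalty while uniform ones incur the maximal entropy, so minimizing this loss over unlabeled target samples sharpens the classifier's target predictions.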

Self-ensemble learning. The main idea of self-ensemble learning is to train the neural network under small perturbations, such as different augmentations, dropout and various noise, while forcing the network to make consistent predictions for the target samples. In this spirit, [119] proposed a Monte Carlo dropout-based ensemble discriminator, gradually increasing the variance of the sample-based distribution. [123] extended the idea of learning with a mean teacher network [124] to domain adaptation, considering separate paths for the source and target sets and sampling independent batches, which makes the batch normalization domain-specific during the training process. [104] builds a teacher classifier to provide pseudo-labels used by a class-conditional clustering loss that forces features from the same class to concentrate together, and a conditional feature matching loss to align the clusters from different domains.
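The mean-teacher update at the heart of these methods is an exponential moving average (EMA) of the student's weights; a PyTorch sketch with toy linear models (the decay `alpha` and the artificial "gradient step" are illustrative):

```python
import torch
import torch.nn as nn

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the
    student's weights, as in mean-teacher self-ensembling."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

student = nn.Linear(4, 2)
teacher = nn.Linear(4, 2)
teacher.load_state_dict(student.state_dict())  # start from the same weights

# Simulate a student update (in practice: a gradient step on the
# classification + consistency losses), then move the teacher.
with torch.no_grad():
    student.weight.add_(1.0)
ema_update(teacher, student, alpha=0.9)
```

The teacher lags behind the student, averaging its trajectory over time; its predictions then serve as the consistency targets for the perturbed student.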

References

  • [1] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: a Deep Convolutional Activation Feature for Generic Visual Recognition. In ICML, 2014.
  • [2] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: a Review and New Perspectives. PAMI, 35(8), 2013.
  • [3] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How Transferable are Features in Deep Neural Networks? In NeurIPS, 2014.
  • [4] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3), 2015.
  • [5] Gabriela Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, and Cédric Bray. Visual Categorization with Bags of Keypoints. In ECCV Workshop (SLCV), 2004.
  • [6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NeurIPS, 2012.
  • [7] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-scale Image Recognition. arXiv:1409.1556, 2014.
  • [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
  • [9] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In CVPR, 2015.
  • [10] Gabriela Csurka, Fabien Baradel, Boris Chidlovskii, and Stéphane Clinchant. Discrepancy-Based Networks for Unsupervised Domain Adaptation: A Comparative Study. In ICCV Workshop (TASK-CV), 2017.
  • [11] Lluís Castrejón, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning Aligned Cross-Modal Representations from Weakly Aligned Data. In CVPR, 2016.
  • [12] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Deeper, Broader and Artier Domain Generalization. In CVPR, 2017.
  • [13] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic Flow Kernel for Unsupervised Domain Adaptation. In CVPR, 2012.
  • [14] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting Visual Category Models to New Domains. In ECCV, 2010.
  • [15] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic Flow Kernel for Unsupervised Domain Adaptation. In CVPR, 2012.
  • [16] Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S. Yu. Transfer Joint Matching for Unsupervised Domain Adaptation. In CVPR, 2014.
  • [17] Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. Joint Cross-domain Classification and Subspace Learning for Unsupervised Adaptation. PRL, 65(1), 2015.
  • [18] Nazli FarajiDavar, Teófilo de Campos, and Josef Kittler. Adaptive Transductive Transfer Machines. In BMVC, 2014.
  • [19] Mahsa Baktashmotlagh, Mehrtash Harandi, and Mathieu Salzmann. Learning Domain Invariant Embeddings by Matching Distributions. In Gabriela Csurka, editor, Domain Adaptation in Computer Vision Applications, Advances in Computer Vision and Pattern Recognition, pages 95–114. Springer, 2017.

  • [20] Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal Transport for Domain Adaptation. PAMI, 39(9), 2017.
  • [21] Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A Deeper Look at Dataset Bias. In Gabriela Csurka, editor, Domain Adaptation in Computer Vision Applications, Advances in Computer Vision and Pattern Recognition, pages 95–114. Springer, 2017.
  • [22] Baochen Sun, Jiashi Feng, and Kate Saenko. Return of Frustratingly Easy Domain Adaptation. In AAAI, 2016.
  • [23] Sumit Chopra, Suhrid Balakrishnan, and Raghuraman Gopalan. DLID: Deep Learning for Domain Adaptation by Interpolating Between Domains. In ICML Workshop, 2013.
  • [24] Brian Chu, Vashisht Madhavan, Oscar Beijbom, Judy Hoffman, and Trevor Darrell. Best Practices for Fine-tuning Visual Classifiers to New Domains. In ECCV Workshop (TASK-CV), 2016.
  • [25] Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J. Smola. Integrating Structured Biological Data by Kernel Maximum Mean Discrepancy. Bioinformatics, 22, 2006.
  • [26] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning Transferable Features with Deep Adaptation Networks. In ICML, 2015.
  • [27] Baochen Sun and Kate Saenko. Deep CORAL: Correlation Alignment for Deep Domain Adaptation. In ECCV Workshop (TASK-CV), 2016.
  • [28] Raghuraman Gopalan, Ruonan Li, and Vishal M. Patel. Domain Adaptation for Visual Recognition. Foundations and Trends in Computer Graphics and Vision. Now Publishers Inc., 2015.
  • [29] Gabriela Csurka. A Comprehensive Survey on Domain Adaptation for Visual Applications. In Gabriela Csurka, editor, Domain Adaptation in Computer Vision Applications, Advances in Computer Vision and Pattern Recognition, pages 1–35. Springer, 2017.
  • [30] Mei Wang and Weihong Deng. Deep Visual Domain Adaptation: A Survey. Neurocomputing, 312, 2018.
  • [31] Jane Bromley, James W. Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature Verification Using a "Siamese" Time Delay Neural Network. IJPRAI, 7(4), 1993.
  • [32] Muhammad Ghifary, W. Bastiaan Kleijn, and Mengjie Zhang. Domain Adaptive Neural Networks for Object Recognition. In PRICAI, 2014.
  • [33] Werner Zellinger, Edwin Lughofer, Susanne Saminger-Platz, Thomas Grubinger, and Thomas Natschläger. Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning. In ICLR, 2017.
  • [34] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein Distance Guided Representation Learning for Domain Adaptation. In AAAI, 2018.
  • [35] Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation. In ICCV, 2019.
  • [36] Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty. DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation. In ECCV, 2018.
  • [37] Renjun Xu, Pelen Liu, Liyan Wang, Chao Chen, and Jindong Wang. Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation. In CVPR, 2020.
  • [38] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In NeurIPS, 2014.
  • [39] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial Discriminative Domain Adaptation. In CVPR, 2017.
  • [40] Riccardo Volpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial Feature Augmentation for Unsupervised Domain Adaptation. In CVPR, 2018.
  • [41] Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous Deep Transfer Across Domains and Tasks. In ICCV, 2015.
  • [42] Timnit Gebru, Judy Hoffman, and Li Fei-Fei. Fine-grained Recognition in the Wild: A Multi-Task Domain Adaptation Approach. In ICCV, 2017.
  • [43] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-Adversarial Training of Neural Networks. JMLR, 2016.
  • [44] Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial Domain Adaptation. In AAAI, 2018.
  • [45] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain Adaptive Faster R-CNN for Object Detection in the Wild. In CVPR, 2018.
  • [46] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate Saenko. Strong-weak Distribution Alignment for Adaptive Object Detection. In CVPR, 2019.
  • [47] Xinge Zhu, Jiangmiao Pang, Ceyuan Yang, and Jianping Shi. Adapting Object Detectors via Selective Cross-Domain Alignment. In CVPR, 2019.
  • [48] Zhenwei He and Lei Zhang. Multi-Adversarial Faster-RCNN for Unrestricted Object Detection. In ICCV, 2019.
  • [49] Chang-Dong Xu, Xing-Ran Zhao, Xin Jin, and Xiu-Shen Wei. Exploring Categorical Regularization for Domain Adaptive Object Detection. In CVPR, 2020.
  • [50] Judy Hoffman, Dequan Wang, Fisher Yu, and Trevor Darrell. FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation. arXiv:1612.02649, 2016.
  • [51] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to Adapt Structured Output Space for Semantic Segmentation. In CVPR, 2018.
  • [52] Junnan Li, Yongkang Wong, Qi Zhao, and Mohan Kankanhalli. Unsupervised Learning of View-invariant Action Representations. In NeurIPS, 2018.
  • [53] Jonathan Munro and Dima Damen. Multi-Modal Domain Adaptation for Fine-Grained Action Recognition. In ICCV Workshops, 2019.
  • [54] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. Deep Transfer Learning with Joint Adaptation Networks. In ICML, 2017.
  • [55] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional Adversarial Domain Adaptation. In NeurIPS, 2018.
  • [56] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael I. Jordan. Bridging Theory and Algorithm for Domain Adaptation. In ICML, 2019.
  • [57] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate Saenko. Adversarial Dropout Regularization. In ICLR, 2018.
  • [58] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation. In CVPR, 2018.
  • [59] Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G. Hauptmann. Contrastive Adaptation Network for Unsupervised Domain Adaptation. In CVPR, 2019.
  • [60] Piotr Koniusz, Yusuf Tas, and Fatih Porikli. Domain Adaptation by Mixture of Alignments of Second- or Higher-Order Scatter Tensors. In CVPR, 2017.
  • [61] Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. Beyond Sharing Weights for Deep Domain Adaptation. PAMI, 41(4), 2018.
  • [62] Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. Residual Parameter Transfer for Deep Domain Adaptation. In CVPR, 2018.
  • [63] Róger Bermúdez-Chacón, Mathieu Salzmann, and Pascal Fua. Domain Adaptive Multibranch Networks. In ICLR, 2020.
  • [64] Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, and Samuel Rota Bulò. AutoDIAL: Automatic DomaIn Alignment Layers. In ICCV, 2017.
  • [65] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Adaptive Batch Normalization for Practical Domain Adaptation. PR, 80(8), 2018.
  • [66] Woong-Gi Chang, Tackgeun You, Seonguk Seo, Suha Kwak, and Bohyung Han. Domain-Specific Batch Normalization for Unsupervised Domain Adaptation. In CVPR, 2019.
  • [67] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards Discriminability and Diversity: Batch Nuclear-Norm Maximization Under Label Insufficient Situations. In CVPR, 2020.
  • [68] Massimiliano Mancini, Samuel Rota Bulò, Barbara Caputo, and Elisa Ricci. Unifying Predictive and Continuous Domain Adaptation through Graphs. In CVPR, 2019.
  • [69] Toby Perrett and Dima Damen. DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition. In CVPR, 2019.
  • [70] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain Adaptation for Large-scale Sentiment Classification: a Deep Learning Approach. In ICML, 2011.
  • [71] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. In ICML, 2008.
  • [72] Minmin Chen, Zhixiang Xu, Kilian Q. Weinberger, and Fei Sha. Marginalized Denoising Autoencoders for Domain Adaptation. In ICML, 2012.
  • [73] Gabriela Csurka, Boris Chidlovskii, Stéphane Clinchant, and Sophia Michel. Unsupervised Domain Adaptation with Regularized Domain Instance Denoising. In ECCV Workshop (TASK-CV), 2016.
  • [74] Muhammad Ghifary, W. Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Deep Reconstruction-classification Networks for Unsupervised Domain Adaptation. In ECCV, 2016.
  • [75] Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Rob Fergus. Deconvolutional Networks. In CVPR, 2010.
  • [76] Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dumitru Erhan, and Dilip Krishnan. Domain Separation Networks. In NeurIPS, 2016.
  • [77] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture Synthesis Using Convolutional Neural Networks. In NeurIPS, 2015.
  • [78] Xun Huang and Serge Belongie. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. In ICCV, 2017.
  • [79] Yijun Li, Ming-Yu Liu, Xueting Li, Ming-Hsuan Yang, and Jan Kautz. A Closed-form Solution to Photorealistic Image Stylization. In ECCV, 2018.
  • [80] Christopher Thomas and Adriana Kovashka. Artistic Object Recognition by Unsupervised Style Adaptation. In ACCV, 2019.
  • [81] Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S. Paek, and In So Kweon. Pixel-Level Domain Transfer. In ECCV, 2016.
  • [82] Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. In CVPR, 2017.
  • [83] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised Cross-domain Image Generation. In ICLR, 2017.
  • [84] Ming-Yu Liu and Oncel Tuzel. Coupled Generative Adversarial Networks. In NeurIPS, 2016.
  • [85] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised Image-to-Image Translation Networks. In NeurIPS, 2017.
  • [86] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In ICCV, 2017.
  • [87] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, and Trevor Darrell. CyCADA: Cycle-Consistent Adversarial Domain Adaptation. In ICML, 2018.
  • [88] Zak Murez, Soheil Kolouri, David Kriegman, Ravi Ramamoorthi, and Kyungnam Kim. Image to Image Translation for Domain Adaptation. In CVPR, 2018.
  • [89] Swami Sankaranarayanan, Yogesh Balaji, Arpit Jain, Ser Nam Lim, and Rama Chellappa. Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation. In CVPR, 2018.
  • [90] Zuxuan Wu, Xintong Han, Yen-Liang Lin, Mustafa Gokhan Uzunbas, Tom Goldstein, Ser Nam Lim, and Larry S. Davis. DCAN: Dual Channel-wise Alignment Networks for Unsupervised Scene Adaptation. In ECCV, 2018.
  • [91] Wei-Lun Chang, Hui-Po Wang, Wen-Hsiao Peng, and Wei-Chen Chiu. All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation. In CVPR, 2019.
  • [92] Yanchao Wang, Dong Lao, Ganesh Sundaramoorthi, and Stefano Soatto. Phase Consistent Ecological Domain Adaptation. In CVPR, 2020.
  • [93] Sławomir Bak, Peter Carr, and Jean-François Lalonde. Domain Adaptation through Synthesis for Unsupervised Person Re-identification. In ECCV, 2018.
  • [94] Weijian Deng, Liang Zheng, Qixiang Ye, Guoliang Kang, Yi Yang, and Jianbin Jiao. Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification. In CVPR, 2018.
  • [95] Yanbei Chen, Xiatian Zhu, and Shaogang Gong. Instance-Guided Context Rendering for Cross-Domain Person Re-Identification. In ICCV, 2019.
  • [96] Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. Asymmetric Tri-training for Unsupervised Domain Adaptation. In ICML, 2017.
  • [97] Weijian Deng, Liang Zheng, Yifan Sun, and Jianbin Jiao. Rethinking Triplet Loss for Domain Adaptation. TCSVT, Early access, 2020.
  • [98] Gabriela Csurka, Boris Chidlovskii, and Florent Perronnin. Domain Adaptation with a Domain Specific Class Means Classifier. In ECCV Workshop (TASK-CV), 2014.
  • [99] Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, and Tao Mei. Transferrable Prototypical Networks for Unsupervised Domain Adaptation. In CVPR, 2019.
  • [100] Vivek Sharma, Naila Murray, Diane Larlus, M. Saquib Sarfraz, Rainer Stiefelhagen, and Gabriela Csurka. Unsupervised Meta-Domain Adaptation for Fashion Retrieval. In WACV, 2020.
  • [101] Ozan Sener, Hyun Oh Song, Ashutosh Saxena, and Silvio Savarese. Learning Transferrable Representations for Unsupervised Domain Adaptation. In NeurIPS, 2016.
  • [102] Tatiana Tommasi and Barbara Caputo. Frustratingly Easy NBNN Domain Adaptation. In ICCV, 2013.
  • [103] Yabin Zhang, Bin Deng, Kui Jia, and Lei Zhang. Label Propagation with Augmented Anchors: A Simple Semi-Supervised Learning Baseline for Unsupervised Domain Adaptation. In ECCV, 2020.
  • [104] Zhijie Deng, Yucen Luo, and Jun Zhu. Cluster Alignment with a Teacher for Unsupervised Domain Adaptation. In ICCV, 2019.
  • [105] Yixiao Ge, Feng Zhu, Rui Zhao, and Hongsheng Li. Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID. arXiv:2003.06650, 2020.
  • [106] Yunsheng Li, Lu Yuan, and Nuno Vasconcelos. Bidirectional Learning for Domain Adaptation of Semantic Segmentation. In CVPR, 2019.
  • [107] Zhonghao Wang, Mo You, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-mei Hwu, Thomas S. Huang, and Humphrey Shi. Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation. In CVPR, 2020.
  • [108] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum Learning. In ICML, 2009.
  • [109] Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulò, Nicu Sebe, and Elisa Ricci. Unsupervised Domain Adaptation using Feature-Whitening and Consensus Loss. In CVPR, 2019.
  • [110] Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and Adversarial Network for Unsupervised Domain Adaptation. In CVPR, 2018.
  • [111] Chaoqi Chen, Weiping Xie, Wenbing Huang, Yu Rong, Xinghao Ding, Yue Huang, Tingyang Xu, and Junzhou Huang. Progressive Feature Alignment for Unsupervised Domain Adaptation. In CVPR, 2019.
  • [112] Yang Shu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Transferable Curriculum for Weakly-Supervised Domain Adaptation. In AAAI, 2019.
  • [113] Hehe Fan, Liang Zheng, and Yi Yang. Unsupervised Person Re-identification: Clustering and Fine-tuning. arXiv:1705.10444, 2017.
  • [114] Xinyu Zhang, Jiewei Cao, Chunhua Shen, and Mingyu You. Self-Training with Progressive Augmentation for Unsupervised Cross-Domain Person Re-Identification. In ICCV, 2019.
  • [115] Yang Fu, Yunchao Wei, Guanshuo Wang, Yuqian Zhou, Honghui Shi, and Thomas S. Huang. Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification. In ICCV, 2019.
  • [116] Yang Zou, Zhiding Yu, B.V.K. Vijaya Kumar, and Jinsong Wang. Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training. In ECCV, 2018.
  • [117] Liang Du, Jingang Tan, Hongye Yang, Jianfeng Feng, Xiangyang Xue, Qibao Zheng, Xiaoqing Ye, and Xiaolin Zhang. SSF-DAN: Separated Semantic Feature Based Domain Adaptation Network for Semantic Segmentation. In ICCV, 2019.
  • [118] Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, and In So Kweon. Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision. In CVPR, 2020.
  • [119] Vinod Kumar Kurmi, Vipul Bajaj, Venkatesh K. Subramanian, and Vinay P. Namboodiri. Curriculum based Dropout Discriminator for Domain Adaptation. In BMVC, 2019.
  • [120] Rui Shu, Hung H. Bui, Hirokazu Narui, and Stefano Ermon. A DIRT-T Approach to Unsupervised Domain Adaptation. In ICLR, 2018.
  • [121] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-Supervised Domain Adaptation via Minimax Entropy. In ICCV, 2019.
  • [122] Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Mathieu Cord, and Patrick Pérez. ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation. In CVPR, 2019.
  • [123] Geoff French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for Visual Domain Adaptation. In ICLR, 2018.
  • [124] Antti Tarvainen and Harri Valpola. Mean Teachers are Better Role Models: Weight-averaged Consistency Targets Improve Semi-supervised Deep Learning Results. In NeurIPS, 2017.