Despite remarkable progress in classification tasks over the past decades, deep neural network models still generalize poorly to new domains (e.g., classifying real-world object images using a classification model trained on simulated object images Peng et al. (2017)), due to the well-known dataset shift Quionero-Candela et al. (2009) or domain shift Tommasi et al. (2016) problem. Hence, much research effort has been devoted to developing domain adaptation (DA) methods Gong et al. (2012); Ganin et al. (2016); Hoffman et al. (2016); Tsai et al. (2018) that make a source model more adaptable to new target domains.
In this paper, we mainly focus on unsupervised domain adaptation (UDA) for object recognition, where no labeled data are available in the target domain. Recently, deep domain adaptation approaches have almost dominated this field with promising results Long et al. (2015); Ganin et al. (2016); Long et al. (2018); Lee et al. (2019a); Kang et al. (2019); Cicek and Soatto (2019). They try to learn domain-invariant feature representations that achieve a small error on the source domain, expecting that the learned representations, together with the classifier learned on the source domain, generalize to the target domain. Since the marginal distribution alignment in Ganin and Lempitsky (2015); Long et al. (2015) is not sufficient to guarantee successful domain adaptation Zhao et al. (2019), pseudo labels on the target domain, which provide conditional information, are employed to align class-conditional distributions Long et al. (2018); Cicek and Soatto (2019). However, as shown in Fig. 1, the learned classifier is inevitably biased toward the labeled source data, making the generated pseudo labels on the target domain inaccurate and unreliable.
To tackle this issue, we propose a new approach called Self-Taught Labeling (SeTL) that discovers a target-specific classifier to produce reliable predictions, rather than simply relying on biased ones from the source classifier. Intuitively, with unbiased and accurate pseudo labels for unlabeled target data, one can implicitly and semantically align the data features from different domains through a standard classification loss, thereby getting rid of tedious feature-level domain alignment. Different from the most favored feature-level alignment and pixel-level transfer Hoffman et al. (2018); Sankaranarayanan et al. (2018), this provides a new perspective on DA problems. Since no labeled data are available in the target domain, SeTL introduces a memory module to store the historical information (i.e., features and classifier predictions) of unlabeled target samples as self-supervision. Through the memory module, SeTL performs neighborhood aggregation to obtain both pseudo labels and their corresponding confidences, which directly promotes message-passing within the neighborhood in the target domain without introducing any extra parameters.
Specifically, for each target sample, SeTL retrieves a few nearest neighbors based on feature similarity and aggregates their associated classifier predictions into the pseudo label for that sample. SeTL uses the pseudo labels and the confidence weights derived from the aggregated predictions as self-teaching supervision over the unlabeled data, which regularizes the source classification loss and aids feature adaptation. This aggregation strategy works well since it can leverage the high-confidence target samples (i.e., source-like samples) in the memory bank to help learn a reliable classifier. SeTL is general and can be applied to various DA tasks.
Despite its simplicity, we find that SeTL achieves competitive or better results than state-of-the-art on multiple domain adaptation benchmarks. Besides, SeTL can also be seamlessly integrated into existing domain adaptation methods and further boost their transferability. Furthermore, SeTL also works well for semi-supervised learning (SSL) where only a small amount of labeled data is available for model training.
To sum up, we make the following contributions. We present SeTL, a novel approach to combat domain shift that provides an alternative to the most favored feature-level alignment and pixel-level transfer methods. Though simple, SeTL fully promotes self-teaching within the target domain through an auxiliary memory module. SeTL performs outstandingly well on multiple benchmarks for UDA, semi-supervised DA, and SSL with few annotated data points. We hope SeTL can inspire further work on domain adaptation.
2 Related Work
Since this paper mainly focuses on the UDA problem, we first introduce related deep domain adaptation approaches; more comprehensive overviews are provided in Csurka (2017); Kouw and Loog (2019); Wilson and Cook (2020). From another viewpoint, without direct domain alignment, our method can also be considered a regularization approach for transductive learning, so we also discuss related studies on this topic. Finally, several works involving a memory mechanism are analyzed.
2.1 Deep Domain Adaptation
Deep domain adaptation methods leverage deep neural networks to learn more transferable representations by embedding domain adaptation in the deep learning pipeline. Generally, the weights of the deep architecture, containing a feature encoder and a classifier layer, are shared between both domains, and various distribution discrepancy measures Tzeng et al. (2014); Long et al. (2015); Ganin and Lempitsky (2015) are developed to promote domain confusion in the feature space. Maximum mean discrepancy (MMD) Gretton et al. (2007) and the $\mathcal{A}$-distance Ben-David et al. (2010) are the two most favored measures among them. To circumvent the problem that marginal distribution alignment cannot guarantee that different domains are semantically aligned, subsequent works Long et al. (2018); Cicek and Soatto (2019) exploit pseudo labels on the target domain to perform conditional distribution alignment. Even so, the learned classifier still fails to generalize well to the target domain, as it is mainly built on the labeled source data.
Another line of research Long et al. (2016); Rozantsev et al. (2018); Liang et al. (2020) exploits the individual characteristics of each domain by fully or partially dropping the weight-sharing assumption. Shu et al. (2018) propose non-conservative domain adaptation and incrementally refine the previously learned classification boundary to fit the target domain only. With the classifier shared, Tzeng et al. (2017) learn the source feature encoder and then the target feature encoder sequentially, while Bousmalis et al. (2016) jointly learn a domain-shared encoder and domain-specific private encoders. Besides, Chang et al. (2019) share all other model parameters but specialize the batch normalization layers within the feature encoder. Liang et al. (2020) learn a target-specific feature extractor while operating only on the hypotheses induced from the source data. Compared with these methods, SeTL does not introduce any new layers and aims to learn one shared classifier for both domains with a virtual target-specific classifier.
2.2 Regularization for Transductive Learning
Besides the classification objective for labeled data, SSL methods Zhu (2005) generally resort to the cluster assumption or the low-density separation assumption to fully exploit unlabeled data, e.g., via Shannon entropy minimization Grandvalet and Bengio (2005). An alternative termed 'Pseudo-Label' is developed in Lee (2013) to progressively treat high-confidence predictions on unlabeled data as true labels and employ a standard cross-entropy loss. Subsequent works Shi et al. (2018); Deng et al. (2019) incorporate pseudo labels to perform discriminative clustering on features of unlabeled data. Besides, Miyato et al. (2018) propose the VAT loss to measure the local smoothness of the conditional label distribution around each input data point under local perturbation. In fact, both UDA and SSL belong to transductive learning; the only difference is that in UDA the labeled and unlabeled data are sampled from different distributions. Recent studies Chen et al. (2019a); Cui et al. (2020); Jin et al. (2020) show that regularization terms on unlabeled data, without explicit feature-level domain alignment, achieve promising adaptation results. In particular, the MaxSquare loss is elegantly designed in Chen et al. (2019a) to prevent the training process from being dominated by easy-to-transfer samples in the target domain, while the diversity of conditional predictions is considered through batch nuclear-norm maximization Cui et al. (2020) and class confusion minimization Jin et al. (2020), respectively.
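The two instance-wise regularizers recalled above can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the cited implementations; in particular, the confidence threshold in the pseudo-label variant is our own choice for demonstration.

```python
import torch
import torch.nn.functional as F

def entropy_min_loss(logits):
    """Shannon entropy of the softmax prediction, averaged over the batch
    (the regularizer of Grandvalet & Bengio, 2005)."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def pseudo_label_loss(logits, threshold=0.95):
    """'Pseudo-Label' (Lee, 2013): treat confident argmax predictions as
    labels; the 0.95 threshold is an illustrative assumption."""
    p = F.softmax(logits, dim=1)
    conf, labels = p.max(dim=1)
    mask = (conf > threshold).float()
    return (F.cross_entropy(logits, labels, reduction='none') * mask).mean()
```

Both losses act on each sample's prediction in isolation, which is precisely the limitation the output-matrix methods discussed next try to address.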
2.3 Transductive Learning with Memory Mechanism
A memory module can be read and written to remember past facts, making information across different mini-batches interactive and enabling more powerful learning for challenging tasks like question answering Sukhbaatar et al. (2015). A recent study Chen et al. (2018) first exploits the memory mechanism in network training for SSL and computes a memory prediction for each training sample via key addressing and value reading. Inspired by instance discrimination Wu et al. (2018), Saito et al. (2020) employ a memory bank and propose an entropy minimization loss to encourage neighborhood clustering in the target domain. Besides, Zhong et al. (2019) leverage an exemplar memory module that saves up-to-date features of target data and computes an invariance learning loss for unlabeled target data. Among them, Chen et al. (2018) is the most closely related work to ours, but it is proposed for SSL, utilizes only the labeled data for memory updates, and ignores self-learning on the unlabeled data.
In the UDA task, we are given a labeled source domain $\mathcal{D}_s$ with $C$ categories and an unlabeled target domain $\mathcal{D}_u$, while in semi-supervised domain adaptation (SSDA), we are given an additional labeled subset $\mathcal{D}_l$ of the target domain. To be clear, $\mathcal{D}_t = \mathcal{D}_l \cup \mathcal{D}_u$ denotes the entire target domain, and UDA has an empty $\mathcal{D}_l$. This paper focuses on the vanilla closed-set setting, i.e., the two domains share the same $C$ categories. The ultimate goal of both UDA and SSDA is to label the target samples in $\mathcal{D}_u$ via training the model on $\mathcal{D}_s \cup \mathcal{D}_t$.
As shown in Fig. 1, we employ the widely-used architecture Ganin and Lempitsky (2015) which consists of two basic modules, a feature extractor and a classifier . Based on where to align, UDA approaches can be roughly categorized into three main cases, i.e., pixel-level Hoffman et al. (2018); Sankaranarayanan et al. (2018), feature-level Ganin and Lempitsky (2015); Tzeng et al. (2017); Long et al. (2018); Li et al. (2020a) and output-level Chen et al. (2019a); Cui et al. (2020); Jin et al. (2020). Pixel-level transfer is time-consuming and output-level regularization is sensitive to inaccurate model prediction, thus much DA research has been devoted to feature-level domain alignment. Prior studies Long et al. (2018); Cicek and Soatto (2019); Li et al. (2020a) further show better feature alignment can be achieved with the aid of noisy output-level predictions.
Pseudo-Label Lee (2013) treats the class with the maximum predicted probability as the true label each time the weights are updated. Since the pseudo labels are not equally confident, in this work we readily take the maximum predicted probabilities as weights and incorporate them into the standard cross-entropy loss, forming a confidence-weighted objective to adapt the model with unlabeled data.
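A minimal PyTorch sketch of this weighted objective follows; the function name is ours, and detaching the weight so it acts purely as a per-sample confidence (rather than receiving gradients) is an assumption.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ce(logits):
    """Weighted self-training loss sketched from the text: the maximum
    predicted probability both picks the pseudo label and serves as its
    confidence weight in the cross-entropy loss."""
    p = F.softmax(logits, dim=1)
    weight, pseudo = p.max(dim=1)
    ce = F.cross_entropy(logits, pseudo, reduction='none')
    return (weight.detach() * ce).mean()
```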
In entropy minimization Grandvalet and Bengio (2005), the Shannon entropy is employed to measure the class overlap. However, both regularization approaches Lee (2013); Grandvalet and Bengio (2005), as well as the more recent MaxSquare regularization Chen et al. (2019a), ignore the structure of the unlabeled data and only focus on each instance-wise prediction itself.
Considering the prediction diversity among unlabeled data, Jin et al. (2020) propose to minimize the pair-wise class confusion within a mini-batch of training data; in that way, both the overlap between any two classes and the classification ambiguity are reduced. Besides, Cui et al. (2020) pursue a larger nuclear norm of the output matrix within a mini-batch to ensure both discriminability and diversity. Both approaches have been shown to achieve much better results than vanilla entropy minimization, implying that the structure of the classification output matrix is essential for unlabeled data. Though these output-level regularization methods Chen et al. (2019a); Cui et al. (2020) were originally proposed to make full use of unlabeled data without any assumption of domain shift, they achieve performance competitive with feature-level alignment methods for domain adaptation.
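For intuition, the batch nuclear-norm maximization objective of Cui et al. (2020) can be sketched as a one-line PyTorch loss; normalizing by the batch size follows common practice and is an assumption here, not a detail taken from this paper.

```python
import torch
import torch.nn.functional as F

def bnm_loss(logits):
    """Batch nuclear-norm maximization, sketched: a larger nuclear norm of
    the batch prediction matrix encourages both discriminability and
    diversity, so we minimize its negative."""
    p = F.softmax(logits, dim=1)
    return -torch.linalg.matrix_norm(p, ord='nuc') / p.shape[0]
```

For a batch of near one-hot predictions spread across classes the loss approaches -1 (its minimum for a square batch), whereas a batch collapsed onto a single class yields a much smaller nuclear norm.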
3.2 Self-Taught Labeling
In this paper, we propose a new regularization approach called self-taught labeling (SeTL) that fully exploits the structure of unlabeled data to obtain reliable pseudo labels under domain shift. Different from Liang et al. (2019), which employs a nearest-centroid classifier under the assumption of centroid shift, SeTL aims to learn an extra classifier specific to the target domain. However, it is quite challenging to learn such a classifier without labeled target data. Fortunately, according to a prior study Long et al. (2018), there exist some source-like samples whose output predictions are reliable; these can be used to help build the proposed classifier and then teach the remaining samples sequentially. To avoid tedious sample selection and alternate training, SeTL employs a memory module that stores both the features and the output predictions of all target samples, so that more accurate pseudo labels can be obtained along the way. We describe the three main steps of SeTL as follows.
As the temperature approaches zero, the sharpened probability collapses to a one-hot point mass, as in Pseudo-Label Lee (2013). Then, the sharpened prediction, along with its L2-normalized feature vector, is written into the memory module at the corresponding sample index. Here we do not adopt any moving-average strategy for updating.
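The memory update step can be sketched as below. The class, its sizes, and the temperature value are illustrative assumptions; the only behaviors taken from the text are the L2-normalization of features, the temperature sharpening of predictions, and the in-place overwrite without a moving average.

```python
import torch
import torch.nn.functional as F

class Memory:
    """Minimal memory module sketched from the text: one L2-normalized
    feature and one sharpened prediction per target sample."""
    def __init__(self, n_samples, feat_dim, n_classes):
        self.features = torch.zeros(n_samples, feat_dim)
        # Initialize predictions uniformly before the first update.
        self.predictions = torch.full((n_samples, n_classes), 1.0 / n_classes)

    def update(self, idx, feats, logits, T=0.5):
        # Overwrite entries in place; no moving average is used.
        self.features[idx] = F.normalize(feats, dim=1)
        self.predictions[idx] = F.softmax(logits / T, dim=1)
```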
Neighborhood aggregation. With the memory module consisting of features and predictions, we could easily train a classifier that maps features to predictions. However, the memory module keeps updating every mini-batch, and a training procedure involving extra parameters would be time-consuming. To address this, we present a non-parametric neighborhood aggregation strategy to approximate such a target-specific classifier. We first retrieve the $m$ nearest neighbors from the memory module for each sample in the current mini-batch, based on the cosine similarity between their features. Then, we aggregate the corresponding predictions of these nearest neighbors by taking their average.
Formally, $\hat{p}_i = \frac{1}{m}\sum_{j \in \mathcal{N}_i} q_j$, where $\mathcal{N}_i$ denotes the index set of the $m$ retrieved neighbors in the memory module for the data point $x_i$ and $q_j$ is the stored prediction at index $j$. In this manner, we obtain a new probability prediction $\hat{p}_i$ via learning on the entire target data. Note that our strategy indeed considers the global structure, beyond the within-mini-batch regularization of Cui et al. (2020); Jin et al. (2020).
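The retrieval-and-average step can be sketched in PyTorch as follows; the function name and the default neighborhood size of 5 are illustrative assumptions, and the memory features are assumed to be stored L2-normalized as described above.

```python
import torch
import torch.nn.functional as F

def aggregate_neighbors(feat, memory_feats, memory_preds, m=5):
    """Non-parametric neighborhood aggregation sketched from the text:
    retrieve the m nearest memory entries by cosine similarity and
    average their stored predictions."""
    # Memory features are assumed L2-normalized, so the dot product
    # below equals the cosine similarity.
    sim = F.normalize(feat, dim=0) @ memory_feats.t()
    _, knn = sim.topk(m)
    return memory_preds[knn].mean(dim=0)
```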
Pseudo-labeling. For each unlabeled datum $x_i$, we obtain the pseudo label by choosing the category index with the maximum aggregated probability, i.e., $\hat{y}_i = \arg\max_c \hat{p}_i(c)$. Considering that different neighborhoods lie in regions of different densities, it is desirable to assign a larger weight to target data lying in a denser neighborhood. Intuitively, the larger the maximum value $\hat{p}_i(\hat{y}_i)$ is, the denser the region the datum lies in. Thus, we directly utilize $\hat{p}_i(\hat{y}_i)$ as the confidence (weight) of the pseudo label $\hat{y}_i$. Finally, a weighted cross-entropy loss is imposed on the unlabeled target data:
$$\mathcal{L}_u = -\frac{1}{n_u} \sum_{i=1}^{n_u} \hat{p}_i(\hat{y}_i)\, \log p_i(\hat{y}_i),$$
where $p_i$ denotes the current network prediction for $x_i$ and $n_u$ is the number of unlabeled target samples.
Concerning the labeled data in $\mathcal{D}_s$ and $\mathcal{D}_l$, we employ the standard cross-entropy loss with label-smoothing regularization Szegedy et al. (2016), denoted as $\mathcal{L}_s$ and $\mathcal{L}_l$, respectively. Integrating these losses together, we obtain the final objective for UDA and SSDA:
$$\min \; \mathcal{L}_s + \mathcal{L}_l + \lambda\, \mathcal{L}_u,$$
where $\lambda$ is a trade-off parameter and $\mathcal{L}_u$ is the weighted cross-entropy loss on the unlabeled target data above. Actually, we can readily incorporate $\mathcal{L}_u$ into other domain alignment methods like CDAN Long et al. (2018) as an additional loss. Besides, for SSL methods like MixMatch Berthelot et al. (2019), we simply replace the guessed label with the one-hot encoding of our pseudo label $\hat{y}_i$ in the label guessing step.
Datasets. We use four benchmark datasets in our experiments, introduced as follows.
Office-31 Saenko et al. (2010) is the most widely-used benchmark in the DA field, which consists of three different domains in 31 categories: Amazon (A) with 2,817 images, Webcam (W) with 795 images, and DSLR (D) with 498 images. There are six transfer tasks for evaluation in total.
Office-Home Venkateswara et al. (2017) is another popular benchmark that consists of images from four different domains: Artistic (A) images, Clip Art (C), Product (P) images, and Real-World (R) images, with around 15,500 images in total from 65 different categories. All twelve transfer tasks are selected for evaluation.
VisDA-C Peng et al. (2017) is a large-scale benchmark used for the Visual Domain Adaptation Challenge 2017 that consists of two very distinct kinds of images from twelve common object classes, i.e., 152,397 synthetic images and 55,388 real images. We focus on the challenging synthetic-to-real transfer task.
DomainNet-126 is a subset of DomainNet Peng et al. (2019), by far the largest UDA dataset with six distinct domains and approximately 0.6 million images distributed among 345 categories. Following Saito et al. (2019), we pick four domains (Real (R), Clipart (C), Painting (P), Sketch (S)), and 126 classes for evaluation.
Implementation Details. Code will be available at https://github.com/tim-learn/SeTL.
We utilize all the source and target samples and report the average classification accuracy and standard deviation over 3 random trials. All the methods, including domain alignment methods Long et al. (2018); Chen et al. (2019b), semi-supervised methods Berthelot et al. (2019), and regularization approaches Lee (2013); Grandvalet and Bengio (2005); Chen et al. (2019a); Jin et al. (2020); Cui et al. (2020), are implemented in PyTorch. Note that MixMatch Berthelot et al. (2019) can be considered a strong domain adaptation baseline Rukhovich and Galeev (2019). Besides, we select other state-of-the-art UDA approaches Xu et al. (2019); Zou et al. (2019); Kurmi et al. (2019); Li et al. (2020a); Lee et al. (2019b); Li et al. (2020b) and SSDA approaches Saito et al. (2019) for further comparison. For the trade-off parameter, we adopt a linear ramp-up scheduler from 0 to its final value for all methods, including ours. We adopt mini-batch SGD to learn the feature encoder by fine-tuning the ImageNet pre-trained model with learning rate 0.001, and the new layers (bottleneck layer and classification layer) from scratch with learning rate 0.01. We use the training settings suggested in Long et al. (2018), including the learning rate scheduler, momentum (0.9), weight decay (1e), bottleneck size (256), and batch size (36).
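The two-learning-rate setup described above can be sketched as a small optimizer factory. The module names are illustrative, and the weight-decay value is our own assumption since the exponent is truncated in the text.

```python
import torch

def make_optimizer(backbone, bottleneck, classifier):
    """Mini-batch SGD with per-group learning rates as described in the
    text: the ImageNet-pretrained backbone is fine-tuned with a 10x
    smaller learning rate than the newly added layers."""
    return torch.optim.SGD(
        [{'params': backbone.parameters(), 'lr': 1e-3},
         {'params': bottleneck.parameters(), 'lr': 1e-2},
         {'params': classifier.parameters(), 'lr': 1e-2}],
        momentum=0.9, weight_decay=1e-3)  # weight-decay value assumed
```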
Results of UDA. We use the four datasets introduced above for vanilla UDA tasks, with results shown in Tables 1–4. On the small-sized Office-31 dataset, we first study different regularization approaches when integrated with the source classification loss only. Both MCC Jin et al. (2020) and BNM Cui et al. (2020) consistently perform better than instance-wise regularization methods like MinEnt Grandvalet and Bengio (2005), which verifies the importance of local diversity. SeTL outperforms MCC and BNM in 5 out of 6 tasks, obtaining the best average accuracy. To save space, we select the best-performing counterpart, BNM, for later comparison. When combined with state-of-the-art UDA methods Long et al. (2018); Chen et al. (2019b), the average accuracy of both methods increases accordingly, and SeTL still performs the best. Since Office-31 is relatively small, MixMatch Berthelot et al. (2019) performs worse than CDAN; using pseudo labels provided by SeTL, MixMatch obtains boosted performance. Besides, SeTL achieves performance competitive with state-of-the-art UDA methods like ATM Li et al. (2020a) without any explicit feature-level alignment, and SeTL incorporated into the UDA method of Chen et al. (2019b) achieves the best performance on the Office-31 dataset.
For VisDA-C and Office-Home, we compare BNM and SeTL with and without domain alignment, respectively. As shown in Table 2, SeTL clearly performs better than BNM w.r.t. mean accuracy in both situations. Notably, SeTL combined with MixMatch obtains a state-of-the-art mean accuracy of 86.5% on VisDA-C, outperforming recent UDA methods Xu et al. (2019); Zou et al. (2019); Lee et al. (2019b). Taking a closer look at Table 3, we observe similar results on Office-Home, where SeTL beats BNM in terms of mean accuracy. Since VisDA-C only contains 12 classes in total, it is necessary to introduce DomainNet-126 as a new large-scale UDA testbed. Table 4 again validates the effectiveness of the proposed SeTL. Compared with the medium-sized Office-Home, SeTL shows even larger advantages over BNM on large-scale datasets like VisDA-C and DomainNet-126.
Results of SSDA. We follow the settings in MME Saito et al. (2019) and evaluate SSDA methods on two benchmark datasets: Office-Home and DomainNet-126. For each dataset, there exist two SSDA settings, i.e., 1-shot and 3-shot, where each class in the target domain has one or three labeled data points, respectively. As shown in Table 5, SeTL outperforms both BNM and MCC for both settings, and MixMatch also benefits from the incorporation of SeTL. Comparing the results of SeTL under 1-shot and 3-shot, we find the difference between them is relatively small, implying that SeTL can fully exploit the unlabeled data to compensate for the scarcity of labeled data. We can draw similar conclusions on the Office-Home dataset from Table 6. Moreover, compared with prior state-of-the-art SSDA results in Saito et al. (2019), both SeTL and its combination with MixMatch achieve better performance for both datasets under both settings.
Results of SSL. We also evaluate SeTL in the case without domain shift. Here we focus on a special case of SSL where annotated samples are very scarce. For simplicity, we adopt the same 3-shot setting as in SSDA. Specifically, we take the labeled target data as the labeled set and the unlabeled target data as the unlabeled set, forming the scarce-labeled SSL task.
As shown in Table 7, SeTL performs the best on both Office-Home and DomainNet-126. For such a scarce-labeled SSL task, MixMatch performs badly; the reason may be that labeled data are so scarce that the guessed labels are of low quality, bringing much noise into the subsequent mixup step. Taking full advantage of the unlabeled data, SeTL improves the quality of the pseudo labels and significantly boosts the performance of MixMatch when its label guessing process is replaced with SeTL. Benefiting from a large amount of unlabeled data, SeTL outperforms BNM and MCC on the SSL tasks of DomainNet-126 by a larger margin than on Office-Home.
Figure 2: (a) Convergence. (b) t-SNE visualizations of features learned by ResNet-50 He et al. (2016), Pseudo-Label Lee (2013), and SeTL (ours).
4.3 Model Analysis
We study the convergence of SeTL and the ramp-up of the trade-off parameter, and make comparisons with BNM in Fig. 2(a). Comparing both methods with and without the ramp-up, it is easy to verify the effectiveness of the linear ramp-up: since the pseudo labels or the original classifier outputs are not reliable enough in the early stage, progressively increasing the regularization weight is desirable for both SeTL and BNM. Besides, as the iteration number increases, the accuracy of SeTL grows and eventually converges. Furthermore, we employ the t-SNE visualization Maaten and Hinton (2008) in Fig. 2(b) to examine whether features from different domains are well aligned even without explicit domain alignment. Compared with ResNet-50 and Pseudo-Label, the features from both domains learned by SeTL are semantically aligned and more favorable.
| Variant | Office-31 | VisDA-C |
| --- | --- | --- |
| SeTL (default) | 89.8 | 80.5 |
| SeTL w/o weight | 89.5 | 79.9 |
| SeTL w/ temperature | 89.4 | 80.0 |
| SeTL w/ neighborhood size | 84.8 | 79.8 |
| SeTL w/ neighborhood size | 88.0 | 80.5 |
| SeTL w/ parameter | 90.0 | 78.8 |
| SeTL w/ parameter | 89.3 | 81.0 |
We further conduct an ablation study on Office-31 and VisDA-C for UDA and report the average accuracy in Table 8. Comparing the first three rows, we find that both the weighting and sharpening strategies are effective. Besides, we study the neighborhood size of SeTL and find that a larger value can bring better performance, although on the small Office-31 dataset an unsuitable neighborhood size is quite risky and yields clearly worse results. Regarding the trade-off parameter, we discover that a moderate value is a suitable choice for both datasets; for the large-scale VisDA-C dataset, the learned pseudo labels are more reliable, so a larger value is beneficial.
We presented SeTL, a new regularization approach that addresses dataset shift in domain adaptation tasks. Despite its simplicity, extensive experiments demonstrated that SeTL outperforms both domain alignment methods and other regularization methods by consistent margins on UDA, SSDA, and even scarce-labeled SSL tasks. In the future, we would like to extend SeTL to other challenging transfer settings like universal DA Saito et al. (2020); You et al. (2019) and dense labeling tasks like semantic segmentation Tsai et al. (2018); Chen et al. (2019c).
-  (2010) A theory of learning from different domains. Mach. Learn. 79 (1-2), pp. 151–175.
-  (2019) MixMatch: a holistic approach to semi-supervised learning. In Proc. NeurIPS.
-  (2016) Domain separation networks. In Proc. NeurIPS.
-  (2019) Domain-specific batch normalization for unsupervised domain adaptation. In Proc. CVPR.
-  (2019) Domain adaptation for semantic segmentation with maximum squares loss. In Proc. ICCV.
-  (2019) Transferability vs. discriminability: batch spectral penalization for adversarial domain adaptation. In Proc. ICML.
-  (2018) Semi-supervised deep learning with memory. In Proc. ECCV.
-  (2019) CrDoCo: pixel-level domain transfer with cross-domain consistency. In Proc. CVPR.
-  (2019) Unsupervised domain adaptation via regularized conditional alignment. In Proc. ICCV.
-  (2017) A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pp. 1–35.
-  (2020) Towards discriminability and diversity: batch nuclear-norm maximization under label insufficient situations. In Proc. CVPR.
-  (2019) Cluster alignment with a teacher for unsupervised domain adaptation. In Proc. ICCV.
-  (2015) Unsupervised domain adaptation by backpropagation. In Proc. ICML.
-  (2016) Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17 (1), pp. 2096–2030.
-  (2012) Geodesic flow kernel for unsupervised domain adaptation. In Proc. CVPR.
-  (2005) Semi-supervised learning by entropy minimization. In Proc. NeurIPS.
-  (2007) A kernel method for the two-sample-problem. In Proc. NeurIPS.
-  (2017) On calibration of modern neural networks. In Proc. ICML.
-  (2016) Deep residual learning for image recognition. In Proc. CVPR.
-  (2018) CyCADA: cycle-consistent adversarial domain adaptation. In Proc. ICML.
-  (2016) FCNs in the wild: pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
-  (2020) Minimum class confusion for versatile domain adaptation. In Proc. ECCV.
-  (2019) Contrastive adaptation network for unsupervised domain adaptation. In Proc. CVPR.
-  (2019) A review of domain adaptation without target labels. IEEE Trans. Pattern Anal. Mach. Intell., pp. 1–1.
-  (2019) Attending to discriminative certainty for domain adaptation. In Proc. CVPR.
-  (2019) Sliced Wasserstein discrepancy for unsupervised domain adaptation. In Proc. CVPR.
-  (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML.
-  (2019) Drop to adapt: learning discriminative features for unsupervised domain adaptation. In Proc. ICCV.
-  (2020) Maximum density divergence for domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell., pp. 1–1.
-  (2020) Domain conditioned adaptation network. In Proc. AAAI.
-  (2019) Distant supervised centroid shift: a simple and efficient approach to visual domain adaptation. In Proc. CVPR.
-  (2020) Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proc. ICML.
-  (2015) Learning transferable features with deep adaptation networks. In Proc. ICML.
-  (2018) Conditional adversarial domain adaptation. In Proc. NeurIPS.
-  (2016) Unsupervised domain adaptation with residual transfer networks. In Proc. NeurIPS.
-  (2008) Visualizing data using t-SNE. J. Mach. Learn. Res. 9 (Nov), pp. 2579–2605.
-  (2018) Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 41 (8), pp. 1979–1993.
-  (2019) Moment matching for multi-source domain adaptation. In Proc. ICCV.
-  (2017) VisDA: the visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.
-  (2009) Dataset shift in machine learning. MIT Press.
-  (2018) Beyond sharing weights for deep domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell. 41 (4), pp. 801–814.
-  (2019) MixMatch domain adaptation: prize-winning solution for both tracks of VisDA 2019 challenge. arXiv preprint arXiv:1910.03903.
-  (2010) Adapting visual category models to new domains. In Proc. ECCV.
-  (2019) Semi-supervised domain adaptation via minimax entropy. In Proc. ICCV.
-  (2020) Universal domain adaptation through self supervision. arXiv preprint arXiv:2002.07953.
-  (2018) Generate to adapt: aligning domains using generative adversarial networks. In Proc. CVPR.
-  (2018) Transductive semi-supervised deep learning using min-max features. In Proc. ECCV.
-  (2018) A DIRT-T approach to unsupervised domain adaptation. In Proc. ICLR.
-  (2015) Very deep convolutional networks for large-scale image recognition. In Proc. ICLR.
-  (2015) End-to-end memory networks. In Proc. NeurIPS.
-  (2016) Rethinking the inception architecture for computer vision. In Proc. CVPR.
-  (2016) Learning the roots of visual domain shift. In Proc. ECCV.
-  (2018) Learning to adapt structured output space for semantic segmentation. In Proc. CVPR.
-  (2017) Adversarial discriminative domain adaptation. In Proc. CVPR.
-  (2014) Deep domain confusion: maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
-  (2017) Deep hashing network for unsupervised domain adaptation. In Proc. CVPR.
-  (2020) A survey of unsupervised deep domain adaptation. ACM Trans. Intell. Syst. Technol. 11 (5).
-  (2018) Unsupervised feature learning via non-parametric instance discrimination. In Proc. CVPR.
-  (2019) Larger norm more transferable: an adaptive feature norm approach for unsupervised domain adaptation. In Proc. ICCV.
-  (2019) Universal domain adaptation. In Proc. CVPR.
-  (2019) On learning invariant representations for domain adaptation. In Proc. ICML.
-  (2019) Invariance matters: exemplar memory for domain adaptive person re-identification. In Proc. CVPR.
-  (2005) Semi-supervised learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
-  (2019) Confidence regularized self-training. In Proc. ICCV.