Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer

12/14/2020 · by Jian Liang, et al. · National University of Singapore

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different, well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns. This paper aims to tackle a realistic setting in which only a classification model trained on the source data, rather than the data themselves, is available. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns a feature extraction module for the target domain by fitting the target data features to the frozen source classification module (representing the classification hypothesis). Specifically, SHOT exploits both information maximization and self-supervised learning for feature extraction module learning, to ensure the target features are implicitly aligned with the features of the unseen source data via the same hypothesis. Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information), and then employs semi-supervised learning to improve the accuracy of less confident predictions in the target domain. We denote labeling transfer as SHOT++ if the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approaches for various visual domain adaptation problems.

1 Introduction

Deep neural networks have achieved remarkable success in a variety of applications across different fields, but at the expense of laborious large-scale training data annotation. To avoid expensive data labeling, transfer learning [68, 19, 90] has been developed to extract knowledge from one or more source tasks and apply it to a target task. As a typical example, unsupervised domain adaptation (UDA) tackles the problem setting where the learning task in the source domain is sufficiently similar or identical to that in the target domain, but labeled data are only available in the source domain during training. Recently, UDA methods have been widely applied to boost the performance of many tasks like object recognition [58, 92, 83, 19], semantic segmentation [112, 37, 114, 90], sentiment classification [29, 71], object detection [14, 52], and person re-identification [22, 96].

Existing UDA methods mainly follow two paradigms to mitigate the gap between source and target domains. The first paradigm matches the statistical moments of different feature distributions at different orders to minimize the distributional divergence between domains [88, 106, 72]. For example, the widely used Maximum Mean Discrepancy (MMD) [34] measure minimizes the distance between weighted sums of all moments from the source and target domains. The second paradigm applies adversarial learning [31] with an additional domain classifier to minimize the Proxy $\mathcal{A}$-distance [1] between the domains. All these methods require access to the source data during learning to adapt the model to the target domain.

However, nowadays data often involve private user information, e.g., data on personal phones or from hospital records. Recently, several data protection regulations have been enacted by the European Union (EU) and other governments, among which the General Data Protection Regulation (GDPR), as a typical example, highlights the safety issue of data transfer. Accordingly, accessing the source data during adaptation, as previous UDA methods do, may violate data privacy policies. To alleviate this issue in the transfer learning field, Hypothesis Transfer Learning (HTL) [44] explores retaining prior knowledge inherited from previous tasks in the form of hypotheses instead of training data. Likewise, in this paper, we introduce a realistic but challenging source data-absent UDA setting [55] with only a well-trained source model provided as supervision. Different from HTL, we do not have any labeled data in the target domain for the UDA problem. Our introduced setting also differs from vanilla UDA in that the source model, instead of the source data, is provided to the target domain for adaptation, making cross-domain feature-level distribution matching challenging.

To address this UDA setting, we propose a novel approach called Source HypOthesis Transfer (SHOT). SHOT follows common deep UDA methods [26, 58] in utilizing an identical network architecture for different domains, consisting of a feature encoding module and a classification module (hypothesis). Like [92, 11], SHOT aims to learn a target-specific feature encoding module to generate target data representations that are well aligned with source data representations, but without accessing the source data or the target data labels. Intuitively, if the learned target data representations are aligned with the source ones, their classification results from the fixed source classifier (hypothesis) would be highly confident for a certain class, i.e., the classification outputs would be close to one-hot vectors. We are then motivated to make SHOT adapt the feature encoding module by fine-tuning the source feature encoding module while freezing the source hypothesis, to maximize the mutual information between intermediate feature representations and the outputs of the classifier, since information maximization [85, 38] can encourage the classifier to assign disparate one-hot outputs to different target feature representations.

Though target feature representations are encouraged to fit the source hypothesis via information maximization, some semantically wrong matching between target feature representations and the source hypothesis may still occur, leading to wrong labels assigned to the target data. To alleviate this, we propose to fully exploit the knowledge in the unlabeled target domain by developing two new self-supervised learning schemes. First, considering that pseudo labels generated by the source classifier for the target data may be noisy, we propose to attain per-class prototype representations for the target domain itself and apply the nearest prototype classifier to obtain more accurate pseudo labels as direct supervision. Second, inspired by RotNet [28], which predicts the absolute rotation of a rotated image, we come up with a relative rotation prediction task to capture image-specific self-supervision more precisely, i.e., requiring the model to estimate the relative rotation between an original image and its rotated version. The two self-supervisions help discard irrelevant semantic information by exploiting the data distribution of the target domain, thus helping learn feature representations that better fit the source hypothesis. In this way, we obtain a target-specific feature encoding module with the source hypothesis as the shared classifier module across domains.

Since some low-confidence predictions generated with the proposed hypothesis transfer strategy are possibly inaccurate, we further put forward a labeling transfer strategy as a subsequent step, forming a complete two-stage framework called SHOT++ for UDA problems. Particularly, we sort the confidence of the adapted predictions after SHOT and discover an adaptive threshold to automatically divide the whole target dataset into two splits, i.e., an 'easy' split with high confidence and a 'hard' split with low confidence. Empirically, the predictions of samples in the 'easy' split are reliable. Thus, we employ a popular semi-supervised learning algorithm, MixMatch [2], to enable the reliable labeling information to flow from the 'easy' split to the 'hard' split within the target domain itself. It is worth noting that such a labeling transfer strategy can also be applied to the original source model, or even a black-box predictor whose network architecture is unknown.

Experimental results on multiple benchmark datasets clearly demonstrate that the proposed SHOT and SHOT++ obtain results competitive with or better than the state of the art for three different UDA cases, i.e., closed-set [78], partial-set [4], and multi-source [72] problems. The superior results over prior arts in a semi-supervised domain adaptation (SSDA) scenario [79] further verify the versatility of the proposed methods. The main contributions of this work are summarized as follows.

  • We propose a novel framework, Source HypOthesis Transfer (SHOT), for unsupervised domain adaptation with only the source model provided, which is appealing for privacy protection without access to the source data.

  • SHOT exploits information maximization to learn a target-specific feature encoding module, which provides an implicit perspective on feature alignment.

  • SHOT further exploits the knowledge in the unlabeled target domain by developing two new kinds of self-supervisions as auxiliary tasks, which further improves the adaptation performance.

  • We further propose a new labeling transfer strategy by exploiting the confidence of predictions and enforcing the labeling information to flow from ‘easy’ samples to ‘hard’ samples, even allowing adaptation with a black-box source model.

  • Experiments on several benchmarks demonstrate our methods yield results comparable to or outperforming the state-of-the-arts for three unsupervised domain adaptation scenarios and even semi-supervised domain adaptation.

This paper extends our earlier work [55] in the following aspects. Within the hypothesis transfer framework developed in [55], we additionally propose one more self-supervision objective that predicts the relative rotation, which facilitates representation learning in the target domain. We also propose a new strategy named labeling transfer that only requires the label predictions in the target domain. Different from [55], it even allows adaptation with a black-box source model. Besides, it can be incorporated into the hypothesis transfer framework, yielding better adaptation results. We also expand the experimental evaluation by adding one more dataset for each UDA scenario (e.g., PACS [49] for multi-source UDA) and extending our methods to semi-supervised domain adaptation. Finally, we provide a more detailed model analysis of the proposed approaches, including training stability, parameter sensitivity, and qualitative study.

2 Related Work

2.1 Unsupervised Domain Adaptation

As a typical example of transfer learning [68], unsupervised domain adaptation (UDA) aims to exploit the knowledge in a different but related labeled dataset to help learn a discriminative model for an unlabeled dataset. Early UDA methods [105, 87] assume covariate shift, i.e., identical conditional distributions across domains, and approximate the target empirical risk by estimating the weight of each source instance and re-weighting the source empirical risk. Later, most UDA methods resort to domain-invariant feature transformation [67, 60, 53] or feature space alignment [32, 25, 88] to pursue distribution alignment. However, the transferability of these shallow methods is restricted by task-specific structures [57].

Recently, deep neural networks have been extensively explored to learn transferable representations for domain adaptation, in various visual applications like object recognition [32, 19, 59] and semantic segmentation [37, 114, 111, 90]. Based on the relationship between the label spaces of the source and target domains, UDA scenarios can be categorized into four cases, i.e., closed-set [78], partial-set [4], open-set [69], and universal [104]. Among them, closed-set UDA has received the most research attention; there, the source and target label spaces are assumed to be identical. Existing deep closed-set UDA methods can be roughly divided into three distinct categories: discrepancy-based, reconstruction-based, and adversarial-based. Discrepancy-based approaches minimize a divergence criterion that measures the distance between the source and target data distributions; popular choices include maximum mean discrepancy (MMD) [58], high-order central moment discrepancy [106], contrastive domain discrepancy [41], and the Wasserstein metric [18]. Reconstruction-based approaches like [27] utilize reconstruction as an auxiliary task to pursue shared representations for both domains. In addition, some other reconstruction-based methods [3, 65] further seek domain-specific reconstruction and cycle consistency to improve the adaptation performance. Inspired by generative adversarial nets [31], adversarial-based approaches determine the distance between different data distributions based on binary classification performance, which in effect corresponds to the Proxy $\mathcal{A}$-distance or $\mathcal{H}\Delta\mathcal{H}$-divergence in the seminal theoretical framework [1]. Different from the marginal distribution alignment using one binary domain classifier in [26], subsequent methods encourage joint distribution alignment by considering multiple class-wise domain classifiers [70], a semantic multi-output classifier [17, 43], or a feature-conditional domain discriminator [59]. There are also some other studies investigating batch normalization [7, 95] and adversarial dropout [81, 48] within the network architecture to ensure feature invariance. Despite their efficacy, all these methods assume the target user's access to the source domain, which is often impractical since the source data may be private and confidential.

2.2 Hypothesis Transfer Learning

The concept of hypothesis transfer learning (HTL) was first presented by Kuzborskij and Orabona [44], together with a formal theory. Before that, a number of transfer learning works [101, 63, 91] assumed no explicit access to the source data and were empirically successful. Generally, HTL is an attractive and efficient framework that assumes access to a given number of source hypotheses and a small set of training samples from the target domain. However, like the famous fine-tuning strategy [103], HTL always requires at least a small set of labeled data in the target domain, limiting its applicability to the semi-supervised DA scenario. Inspired by HTL, several recent works [16, 54] assume absence of the source data and utilize the encoded information as source supervision for the UDA problem. In particular, besides target features, [16] requires predictions of the target data, and [54] requires the per-class mean and variance calculated on source features. Both methods adopt a shallow framework like HTL and are thus restricted to the original feature structure. By contrast, our work fully exploits end-to-end feature learning, allowing more flexibility during adaptation. There are also two concurrent deep UDA methods [50, 73] that attempt not to access the source data during the adaptation process. Our approach differs from [50] in that we do not need any additional components like a data generator or classifier within the training algorithm; [73] introduces the first federated DA setting, where knowledge is transferred from decentralized nodes to a new node without any supervision itself, and proposes an adversarial-based solution to protect user privacy, but it may fail to tackle the vanilla UDA setting with only one source domain available.

2.3 Self-supervised Learning

Self-supervised learning [40] offers great feasibility for effectively utilizing unlabeled data by generating and predicting labels from these data. The self-supervised task is also known as a pretext task. A typical workflow (see https://cutt.ly/DfN3rFU) is to train a model on one or multiple pretext tasks with unlabeled images and then fine-tune the trained model on a variety of practical downstream tasks. In addition, pretext tasks can also be jointly trained with supervised learning tasks on labeled data with shared weights, as in [8, 107]. Generally, self-supervised methods involve two aspects: the pretext task and the loss function. Popular image-specific self-supervision tasks include image colorization [109], relative position prediction [24], rotation prediction [28], and solving jigsaw puzzles [66]; on the other hand, contrastive losses [35, 12] and clustering losses [9, 10] focus on the similarity of sample pairs in the representation space, which often provide better performance. Some recent studies [98, 89, 80] explore self-supervision for UDA problems and find it beneficial to accomplishing domain alignment. By contrast, this paper elegantly designs two different kinds of self-supervisions for UDA problems.

2.4 Semi-supervised Learning

When domain shift does not exist, the UDA problem naturally becomes a well-studied semi-supervised learning problem. Many ideas originally proposed for semi-supervised learning can thus also be employed to achieve or complement domain alignment within UDA methods. Pseudo-labeling [47] is a simple heuristic widely used in practice, which produces 'pseudo-labels' for unlabeled data using the prediction function itself during the course of training. Among UDA methods, [110] directly incorporates pseudo-labeling as a regularization term, and [59] leverages pseudo labels in the adaptation module to achieve multi-modal distribution alignment. Entropy minimization [33] is a popular strategy that encourages the network to make 'confident' (low-entropy) predictions for all unlabeled data, which has been exploited in many previous UDA methods [57, 100]. Other favored semi-supervised techniques like tri-training and virtual adversarial training have been used in the frameworks of [82, 86], respectively. Recently, [77] directly employs MixMatch [2] and obtains promising results in the VisDA-2019 challenge. Different from prior works that treat the whole target domain as an unlabeled dataset, we focus on intra-domain semi-supervised learning, where the labeled dataset consists of confident target samples and the unlabeled dataset consists of the remaining samples.

3 Method

We aim to address the UDA problem with only a pre-trained source model, not requiring access to the source data. In particular, we consider the $K$-way visual classification task. For a vanilla UDA task, we are given $n_s$ labeled samples $\{x_s^i, y_s^i\}_{i=1}^{n_s}$ from the source domain, where $x_s^i \in \mathcal{X}_s$ and $y_s^i \in \mathcal{Y}_s$, and also $n_t$ unlabeled samples $\{x_t^i\}_{i=1}^{n_t}$ from the target domain, where $x_t^i \in \mathcal{X}_t$. The goal of UDA is to predict the labels $\{y_t^i\}_{i=1}^{n_t}$ in the target domain, where $y_t^i \in \mathcal{Y}_t$, and the source task $\mathcal{X}_s \to \mathcal{Y}_s$ is assumed to be the same as the target task $\mathcal{X}_t \to \mathcal{Y}_t$. In this work, we aim to learn a target function $f_t: \mathcal{X}_t \to \mathcal{Y}_t$ and infer $\{y_t^i\}_{i=1}^{n_t}$, with only $\{x_t^i\}_{i=1}^{n_t}$ and the source function $f_s: \mathcal{X}_s \to \mathcal{Y}_s$ available.

We address the above source data-absent UDA problem through the following steps. First, we train a classification model, consisting of a feature encoding module and a hypothesis module, on the source data and then transfer the source model to the target domain without accessing the source data. Then, we present a novel framework, Source HypOthesis Transfer (SHOT), to learn the target-specific feature encoding module using information maximization and self-supervised learning, with the source hypothesis fixed. Finally, using the predictions for the target domain, we further employ a semi-supervised learning algorithm to enforce labeling information propagation from confidently labeled target samples to the remaining target samples with low confidence. Applying this labeling transfer strategy to SHOT yields SHOT++. Likewise, applying the labeling transfer strategy to 'Source-model-only' yields 'Source-model-only++', which can even deal with a black-box source model. In the following, we elaborate on each step in detail.

3.1 Source Model Generation

We consider learning a deep source classification model $f_s: \mathcal{X}_s \to \mathcal{Y}_s$ by minimizing the following cross-entropy loss,

$$\mathcal{L}_{src}(f_s; \mathcal{X}_s, \mathcal{Y}_s) = -\mathbb{E}_{(x_s, y_s) \in \mathcal{X}_s \times \mathcal{Y}_s} \sum_{k=1}^{K} q_k \log \delta_k(f_s(x_s)), \quad (1)$$

where $\delta_k(a) = \frac{\exp(a_k)}{\sum_i \exp(a_i)}$ denotes the $k$-th element in the soft-max output of a $K$-dimensional vector $a$, and $q = [q_1, \dots, q_K]$ denotes a one-hot encoding of $y_s$, where $q_k$ is '1' for the correct class and '0' for the rest. To further lift the discriminability of the source model and facilitate the following target data alignment, we adopt the label smoothing technique for model training, as it encourages learned feature representations to form tight and evenly separated clusters [64], which is useful for adaptation. Therefore, the source objective function is changed to

$$\mathcal{L}_{src}^{ls}(f_s; \mathcal{X}_s, \mathcal{Y}_s) = -\mathbb{E}_{(x_s, y_s) \in \mathcal{X}_s \times \mathcal{Y}_s} \sum_{k=1}^{K} q_k^{ls} \log \delta_k(f_s(x_s)), \quad (2)$$

where $q_k^{ls} = (1 - \epsilon)\, q_k + \epsilon / K$ is the smoothed label and $\epsilon$ is the smoothing parameter, which is empirically set to 0.1.
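To make the source objective concrete, below is a minimal PyTorch sketch of the smoothed cross-entropy in Eqs. (1)-(2); the function name and structure are ours, not from the authors' released code.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor, labels: torch.Tensor,
                           eps: float = 0.1) -> torch.Tensor:
    """Cross-entropy with label smoothing: q_ls = (1 - eps) * q + eps / K."""
    K = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)            # log delta_k(f_s(x))
    one_hot = F.one_hot(labels, num_classes=K).float()  # one-hot encoding q
    q_ls = (1.0 - eps) * one_hot + eps / K              # smoothed label q_ls
    return -(q_ls * log_probs).sum(dim=1).mean()
```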

3.2 Hypothesis Transfer with Information Maximization

As shown in Fig. 1, the source model parameterized by a deep neural network consists of two modules: the feature encoding module $g_s: \mathcal{X}_s \to \mathbb{R}^d$ and the classifier module $h_s: \mathbb{R}^d \to \mathbb{R}^K$, i.e., $f_s(x) = h_s(g_s(x))$, where $d$ is the dimension of the feature. Most previous UDA methods align different domains by matching the data distributions in the feature space using MMD [58] or domain adversarial alignment [26]. However, both strategies assume the source and target domains share the same feature encoder and need to access the source data during adaptation, which is not applicable in the UDA setting tackled here. By contrast, Adversarial Discriminative Domain Adaptation (ADDA) [92] relaxes the parameter-sharing constraint within an adversarial framework and learns different mapping functions for the two domains. Also, Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) [86] first trains a parameter-sharing UDA framework as initialization and then fine-tunes the whole network by minimizing the cluster assumption violation via entropy minimization and virtual adversarial training. Both methods suggest that learning a domain-specific feature encoding module for the target domain is practicable and even works better than the parameter-sharing mechanism, which has also been proven effective in Domain-Specific Batch Normalization (DSBN) [11].

Fig. 1: The pipeline of hypothesis transfer with information maximization. The source model consists of a feature encoding module and a classifier module (hypothesis). SHOT keeps the hypothesis frozen and utilizes the feature encoding module as initialization for target domain learning.

We therefore develop a new framework termed Source HypOthesis Transfer (SHOT) by learning the domain-specific feature encoding module $g_t: \mathcal{X}_t \to \mathbb{R}^d$ for the target data while fixing the source classifier module (hypothesis), as the source hypothesis encodes the distribution information of the unseen source data. Namely, SHOT utilizes the same classifier module for different domain-specific feature encoding modules, i.e., $h_t = h_s$. It aims to learn the optimal target feature encoding module $g_t$ such that the output target features fit the source feature distribution well and can be accurately classified by the source hypothesis directly. Note that SHOT utilizes the source data only once, to generate the source hypothesis, and does not need to access the source data any more, unlike prior methods (e.g., ADDA, DIRT-T, and DSBN).

Essentially, we expect to learn the optimal target feature encoder $g_t$ so that the target data distribution matches the source data distribution well. However, feature-level alignment cannot be applied here, since it is impossible to estimate the distribution of the source features without access to the source data. We thus view this challenging problem from another perspective: if there were no domain gap, what kind of outputs should be generated over the unlabeled target data? We argue that the ideal outputs of target features should be similar to those of source features, with the classifier shared across both domains. Since we train the source feature encoding module and classifier module via a supervised learning loss, the output of each source feature is fairly similar to one of the one-hot encodings. Therefore, we expect that the output of each target feature through the same hypothesis is also similar to one of the one-hot encodings. Such an output alignment requirement is a necessary condition for feature alignment.

For this purpose, we adopt the information maximization (IM) loss [42, 85, 38] to make the classification outputs of target features individually certain and globally diverse. In practice, we minimize the following $\mathcal{L}_{ent}$ and $\mathcal{L}_{div}$ that together constitute the IM loss ($\mathcal{L}_{im} = \mathcal{L}_{ent} + \alpha \mathcal{L}_{div}$):

$$\mathcal{L}_{ent}(f_t; \mathcal{X}_t) = -\mathbb{E}_{x_t \in \mathcal{X}_t} \sum_{k=1}^{K} \delta_k(f_t(x_t)) \log \delta_k(f_t(x_t)),$$
$$\mathcal{L}_{div}(f_t; \mathcal{X}_t) = \sum_{k=1}^{K} \hat{p}_k \log \hat{p}_k = D_{KL}\big(\hat{p}, \tfrac{1}{K}\mathbf{1}_K\big) - \log K, \quad (3)$$

where $f_t(x_t) = h_t(g_t(x_t))$ is the $K$-dimensional output of each target sample, $\mathbf{1}_K$ is a $K$-dimensional vector with all ones, and $\hat{p} = \mathbb{E}_{x_t \in \mathcal{X}_t}[\delta(f_t(x_t))]$ is the mean output embedding of the whole target domain. The IM loss would work better than conditional entropy minimization [33] widely used in prior UDA methods [94, 79], since IM can circumvent the trivial solution where all unlabeled data have the same one-hot encoding via the fair diversity-promoting objective $\mathcal{L}_{div}$. For convenience, we denote SHOT with the information maximization loss as SHOT-IM.
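A minimal sketch of the IM loss in Eq. (3) over a batch of target logits follows; treating the batch mean as an estimate of $\hat{p}$ and the weight name `alpha` are our assumptions.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits: torch.Tensor,
                                  alpha: float = 1.0) -> torch.Tensor:
    """L_ent + alpha * L_div, estimated on a mini-batch of target logits."""
    probs = F.softmax(logits, dim=1)                      # delta(f_t(x))
    # L_ent: per-sample conditional entropy -> individually certain outputs.
    l_ent = -(probs * torch.log(probs + 1e-5)).sum(dim=1).mean()
    # L_div: negative entropy of the mean output -> globally diverse outputs.
    p_hat = probs.mean(dim=0)                             # batch estimate of p-hat
    l_div = (p_hat * torch.log(p_hat + 1e-5)).sum()
    return l_ent + alpha * l_div
```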

(a) Source model only (b) SHOT-IM
Fig. 2: The t-SNE visualizations for a 5-way classification task. Markers in dark colors denote unseen source data and markers in light colors denote target data. Different colors represent different classes. Best viewed in color.

3.3 Hypothesis Transfer with Self-supervised Learning

Fig. 2 shows the t-SNE visualizations of features for a 5-way classification task learned by SHOT-IM and the 'source model only' method. Intuitively, the target feature representations of the 'source model only' method are poorly clustered in Fig. 2(a), and using the IM loss indeed helps align the target data with the unseen source data. However, the target data may be matched to the wrong source hypothesis to some extent in Fig. 2(b).

We argue that these harmful effects result from inaccurate original network outputs. For instance, a target sample from the second class with the normalized network output [0.4, 0.3, 0.1, 0.1, 0.1] may be forced to have the expected output [1.0, 0.0, 0.0, 0.0, 0.0]. To alleviate such effects, we resort to self-supervised learning to exploit the knowledge in the unlabeled target domain and learn structure-aware representations. Specifically, we develop two new self-supervisions as auxiliary tasks to be jointly trained with the main unsupervised task in Eq. (3), in a similar manner to prior methods [107, 89]. We first exploit self-supervision from the perspective of the loss function and design a novel self-supervised pseudo-labeling strategy. Different from conventional pseudo-labeling [47], where pseudo labels generated by the source hypothesis are still noisy due to domain shift, our self-supervised version considers the structure of the target domain and is able to provide accurate pseudo labels. The detailed learning procedure is provided in the following.

Fig. 3: The pipeline of hypothesis transfer with self-supervised learning. Besides the common target model, we impose a rotation classifier $r_t$ after the feature encoding module $g_t$. $r_t$ is parameterized by a linear classifier, which aims to predict the relative rotation of a target sample.
  • We first attain the prototype representation (centroid) for each class in the target domain, similar to weighted k-means clustering,

    $$c_k^{(0)} = \frac{\sum_{x_t \in \mathcal{X}_t} \delta_k(\hat{f}_t(x_t))\, \hat{g}_t(x_t)}{\sum_{x_t \in \mathcal{X}_t} \delta_k(\hat{f}_t(x_t))}, \quad (4)$$

    where $\delta_k$ denotes the $k$-th element in the soft-max output and $\hat{f}_t = \hat{h}_t \circ \hat{g}_t$ denotes the previously learned target hypothesis. These centroids can robustly and more reliably characterize the distribution of different categories within the target domain.

  • We then obtain new pseudo labels via the nearest centroid classifier:

    $$\hat{y}_t = \arg\min_k D_f\big(\hat{g}_t(x_t), c_k^{(0)}\big), \quad (5)$$

    where $D_f(a, b)$ measures the distance between $a$ and $b$. We use the cosine distance by default.

  • Finally, we compute the target centroids and update the pseudo labels based on the new pseudo labels:

    $$c_k^{(1)} = \frac{\sum_{x_t \in \mathcal{X}_t} \mathbb{1}(\hat{y}_t = k)\, \hat{g}_t(x_t)}{\sum_{x_t \in \mathcal{X}_t} \mathbb{1}(\hat{y}_t = k)}, \qquad \hat{y}_t = \arg\min_k D_f\big(\hat{g}_t(x_t), c_k^{(1)}\big). \quad (6)$$

We term $\hat{y}_t$ self-supervised pseudo labels since they are generated by centroids obtained in an unsupervised manner. Actually, this solution to pseudo labels behaves like that in Minimum Centroid Shift (MCS) [54], where target-specific centroids and pseudo labels are alternately updated via optimizing the intra-class divergence minimization loss. In contrast, we employ the cross-entropy loss and just update the centroids and labels in Eq. (6) for one round, since we have experimentally verified that updating once gives sufficiently good pseudo labels. We provide the cross-entropy loss of self-supervised pseudo-labeling below,

$$\mathcal{L}_{pl}(g_t; \mathcal{X}_t, \hat{\mathcal{Y}}_t) = -\mathbb{E}_{(x_t, \hat{y}_t) \in \mathcal{X}_t \times \hat{\mathcal{Y}}_t} \sum_{k=1}^{K} \mathbb{1}(\hat{y}_t = k) \log \delta_k(f_t(x_t)), \quad (7)$$

where $\beta$ is a regularization parameter for the trade-off between $\mathcal{L}_{pl}$ and the main task in Eq. (3).
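The pseudo-labeling steps in Eqs. (4)-(6) can be sketched as below, assuming pre-computed target features and logits for the whole domain; the cosine distance and the single extra update round follow the text, while the helper name is hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def self_supervised_pseudo_labels(features: torch.Tensor,
                                  logits: torch.Tensor) -> torch.Tensor:
    """Nearest-centroid pseudo labels via Eqs. (4)-(6).

    features: (N, d) outputs of g_t; logits: (N, K) outputs of f_t.
    """
    probs = F.softmax(logits, dim=1)                                  # (N, K)
    # Eq. (4): prediction-weighted class centroids c_k^(0).
    centroids = probs.t() @ features / probs.sum(dim=0).unsqueeze(1)  # (K, d)
    # Eq. (5): assign each sample to the nearest centroid (cosine distance).
    dist = 1 - F.normalize(features) @ F.normalize(centroids).t()
    pseudo = dist.argmin(dim=1)
    # Eq. (6): recompute centroids c_k^(1) from hard labels, then re-assign.
    one_hot = F.one_hot(pseudo, num_classes=probs.size(1)).float()
    centroids = one_hot.t() @ features / (one_hot.sum(dim=0).unsqueeze(1) + 1e-8)
    dist = 1 - F.normalize(features) @ F.normalize(centroids).t()
    return dist.argmin(dim=1)                                         # new pseudo labels
```

The returned labels then feed the cross-entropy term $\mathcal{L}_{pl}$ in Eq. (7), e.g., via `F.cross_entropy(logits, pseudo_labels)`.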

Also, we investigate image-specific self-supervision in the unlabeled target domain. Rotation prediction in RotNet [28] is a popular criterion in the self-supervised learning field, which aims to recognize which of four different 2d rotations (i.e., 0°, 90°, 180°, and 270°) has been applied to the input image. However, absolute rotation prediction is ill-posed for some classification tasks. For example, in a main task aiming to distinguish digit '6' from digit '9', it is hard to determine which rotation category '9' belongs to, since '9' could be either a '6' rotated by 180 degrees or a '9' rotated by 0 degrees. To resolve this dilemma, we propose a new self-supervised learning task that predicts the relative rotation of each image pair. As shown in Fig. 3, the relative rotation predictor is represented by $r_t$, which takes the concatenated features of an image pair as input and maps them to one of four different rotation degrees.

For an image $x_t$ in the target domain $\mathcal{X}_t$, we first randomly sample an integer $z_t$ from $\{1, 2, 3, 4\}$, which corresponds to the rotation degree pool [0°, 90°, 180°, 270°]. Then we obtain the transformed image $\tilde{x}_t$ by rotating $x_t$ with the associated degree. Finally, the probability score of the $r$-th relative rotation degree predicted by $r_t$ is given by

$$p_r = \delta_r\big(r_t([\,\hat{g}_t(x_t);\ \hat{g}_t(\tilde{x}_t)\,])\big), \quad (8)$$

where $\delta_r$ denotes the $r$-th element in the soft-max output vector and $[\cdot\,;\cdot]$ denotes feature concatenation. Therefore, the self-supervised rotation prediction loss is defined as

$$\mathcal{L}_{rot}(g_t, r_t; \mathcal{X}_t) = -\mathbb{E}_{x_t \in \mathcal{X}_t} \sum_{r=1}^{4} \mathbb{1}(z_t = r) \log p_r, \quad (9)$$

where $\gamma$ is a regularization parameter for the trade-off between $\mathcal{L}_{rot}$ and the main loss, i.e., Eq. (3).
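A hedged sketch of the relative rotation task follows; rotating by multiples of 90° with `torch.rot90` and a linear head over concatenated features mirror Fig. 3 and Eq. (8), while the names are ours.

```python
import torch
import torch.nn as nn

def make_rotation_pair(images: torch.Tensor):
    """Rotate each (C, H, W) image by a random multiple of 90 degrees."""
    z = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, z)])
    return rotated, z                       # rotated images and rotation labels

class RelativeRotationHead(nn.Module):
    """Linear classifier r_t over concatenated pair features, cf. Eq. (8)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.fc = nn.Linear(2 * feat_dim, 4)

    def forward(self, feat_orig, feat_rot):
        return self.fc(torch.cat([feat_orig, feat_rot], dim=1))
```

$\mathcal{L}_{rot}$ in Eq. (9) is then a standard cross-entropy between the head's output and the sampled rotation labels.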

We provide an illustrative example of the complete hypothesis transfer framework in Fig. 3. To summarize, given the source model, the pseudo labels generated in Eq. (6), and the randomly generated rotation labels as above, SHOT freezes the hypothesis from the source via $h_t = h_s$ and learns the feature encoding module $g_t$ with the full optimization objective

$$\mathcal{L}(g_t) = \mathcal{L}_{ent} + \alpha \mathcal{L}_{div} + \beta \mathcal{L}_{pl} + \gamma \mathcal{L}_{rot}. \quad (10)$$
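Putting the pieces together, one SHOT loss evaluation could look like the sketch below, reusing the hypothetical helpers defined above; $(\alpha, \beta, \gamma)$ are the trade-off parameters of Eq. (10), whose values are set per benchmark.

```python
import torch.nn.functional as F

def shot_objective(g_t, h_s, rot_head, images, pseudo_labels,
                   alpha, beta, gamma):
    """Full SHOT loss of Eq. (10); h_s stays frozen (requires_grad=False)."""
    feats = g_t(images)
    logits = h_s(feats)                                   # source hypothesis
    loss = information_maximization_loss(logits, alpha)   # L_ent + alpha*L_div
    loss = loss + beta * F.cross_entropy(logits, pseudo_labels)   # Eq. (7)
    rotated, z = make_rotation_pair(images)
    rot_logits = rot_head(feats, g_t(rotated))
    loss = loss + gamma * F.cross_entropy(rot_logits, z)          # Eq. (9)
    return loss
```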

3.4 Labeling Transfer with Semi-supervised Learning

After we obtain the predictions for all the samples in the target domain via SHOT in Eq. (10), we can measure the confidence scores of these predictions via the entropy function $H(p) = -\sum_{k=1}^{K} p_k \log p_k$, where $p$ is a probability prediction vector. Observing the distribution of confidence scores, we find that there always exist some less confident (high-entropy) predictions that are possibly inaccurate. Fortunately, we can utilize the reliable labeling information from highly confident predictions to improve the accuracy of the less confident ones. To this end, we propose a two-step method to enforce information propagation from low-entropy predictions to high-entropy ones. In the first step, we divide the target domain into two splits according to the confidence scores and treat the two splits as a labeled subset and an unlabeled subset, respectively. In the second step, we readily employ a semi-supervised learning algorithm to learn enhanced predictions for the unlabeled subset.

Fig. 4: The pipeline of the labeling transfer strategy with semi-supervised learning. Both the feature encoding module and the classification module are learned via the MixMatch [2] algorithm.

Regarding the choice of a semi-supervised learning algorithm in the second step, we simply adopt a popular and well-performing approach, MixMatch [2]. The key point lies in how to divide the target domain into two splits. Using the average entropy as an adaptive threshold, we first obtain the proportion $\rho$ of the labeled subset in the entire target domain by automatically computing

$$\rho = \frac{1}{n_t} \sum_{i=1}^{n_t} \mathbb{1}\Big[e_i < \frac{1}{n_t}\sum\nolimits_{j=1}^{n_t} e_j\Big], \quad (11)$$

where $\{e_i\}_{i=1}^{n_t}$ denotes the entropy values of all the predictions in the target domain, $e_i = H(\delta(f_t(x_t^i)))$. Then for each class $c$, we put the indices with entropy values among the top-$\lfloor \rho \cdot n_c \rfloor$ smallest into the index pool of the labeled split, where

$$n_c = \sum\nolimits_{i=1}^{n_t} \mathbb{1}(\hat{y}_t^i = c), \quad (12)$$

and $\hat{y}_t^i$ is the predicted label by SHOT in Eq. (10). In this manner, we get the labeled split, and the remaining samples constitute the unlabeled split. We call the strategy in Fig. 4 labeling transfer, since in this stage we only need the labeling information (predictions), while the feature encoding module is initialized with that learned in Eq. (10). Besides, the classification module is newly initialized from scratch and not frozen any more. So far, we have developed a two-stage approach, called SHOT++, in which the first stage is SHOT in Eq. (10) and the second stage is the proposed labeling transfer strategy in Fig. 4.
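The adaptive 'easy'/'hard' split of Eqs. (11)-(12) can be sketched as follows, assuming the mean entropy serves as the adaptive threshold as described above.

```python
import torch

@torch.no_grad()
def adaptive_split(probs: torch.Tensor):
    """Return ('easy', 'hard') index tensors from (N, K) SHOT predictions."""
    ent = -(probs * torch.log(probs + 1e-5)).sum(dim=1)    # entropies e_i
    rho = (ent < ent.mean()).float().mean().item()         # Eq. (11)
    preds = probs.argmax(dim=1)                            # predicted labels
    easy = []
    for c in preds.unique():                               # per class c
        idx = (preds == c).nonzero(as_tuple=True)[0]
        n_keep = max(1, int(rho * idx.numel()))            # Eq. (12): rho * n_c
        easy.append(idx[ent[idx].argsort()[:n_keep]])      # lowest entropy first
    easy = torch.cat(easy)
    mask = torch.ones(probs.size(0), dtype=torch.bool, device=probs.device)
    mask[easy] = False
    return easy, mask.nonzero(as_tuple=True)[0]
```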

3.5 Extension to Multi-source Domain Adaptation

We also provide an extension of the proposed SHOT approach to multi-source domain adaptation (MSDA) [72]. For simplicity, we run SHOT and SHOT-IM on each source-target pair and then sum up the probabilistic scores obtained from each pair. Finally, we obtain the predictions of samples in the target domain via the argmax operation. As for labeling transfer, we split the target domain into two splits for each pair and learn the prediction scores independently.
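A minimal sketch of the MSDA fusion described above; `models` is assumed to hold one adapted target network per source-target pair.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_source_predict(models, images):
    """Sum per-pair probabilistic scores, then take the argmax (Sec. 3.5)."""
    scores = sum(F.softmax(m(images), dim=1) for m in models)
    return scores.argmax(dim=1)
```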

3.6 Extension to Partial-set Domain Adaptation

We also provide an extension of the proposed SHOT approach to partial-set domain adaptation (PDA) [5]. Looking at the diversity-promoting term $\mathcal{L}_{div}$ in Eq. (10), it encourages the target domain to have a uniform label distribution. Though seemingly reasonable for solving closed-set UDA, this is not suitable for PDA. In reality, the target domain only contains a subset of all the classes in the source domain, making the label distribution sparse. Hence, we drop the second term for PDA by letting $\alpha = 0$.

Besides, within the self-supervised pseudo-labeling strategy, we usually need to obtain $K$ centroids in the target domain. However, for the PDA task, some centroids have tiny support and should be considered empty, as in k-means clustering. Therefore, SHOT discards tiny centroids whose sizes are smaller than a small threshold in Eq. (6) for PDA problems.
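For illustration, a small sketch of discarding tiny centroids; the threshold `min_size` is a hypothetical placeholder, as the text only states that tiny centroids are treated as empty.

```python
import torch

def non_tiny_classes(pseudo_labels: torch.Tensor, num_classes: int,
                     min_size: int) -> torch.Tensor:
    """Classes whose centroid support reaches `min_size`; only these
    classes keep centroids in Eq. (6) under PDA."""
    counts = torch.bincount(pseudo_labels, minlength=num_classes)
    return (counts >= min_size).nonzero(as_tuple=True)[0]
```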

3.7 Extension to Semi-supervised Domain Adaptation

We further extend the proposed SHOT approach to semi-supervised domain adaptation (SSDA) [79]. SSDA differs from UDA in that some labeled data exist in the target domain. Therefore, we adopt the supervised training loss in Eq. (2) for labeled target data and the complete loss in Eq. (10) for unlabeled target data. Besides, we also consider the labeled target data when computing the target-specific centroids. As for labeling transfer, we split the unlabeled target domain into two splits and then add the labeled data to the labeled split.

3.8 Network Architecture

Here we discuss some architecture choices for the neural network model used to parameterize both the feature encoding module and the hypothesis. First, we need to look back at the expected network outputs for the cross-entropy loss in Eq. (1). If $y_s = k$, then maximizing $\delta_k(f_s(x_s))$ means minimizing the distance between $g_s(x_s)$ and $w_k$, where $w_k$ is the $k$-th weight vector in the last FC layer. Ideally, all the samples from the $k$-th class would have a feature embedding near to $w_k$. If unlabeled target samples are given the correct pseudo labels, it is easy to see that source feature embeddings become similar to target ones via the pseudo-labeling term in Eq. (7). The intuition behind this is quite similar to previous studies [60, 97], where a simplified MMD is exploited for multi-modal domain confusion. Since the weight norm matters in the inner-product distance within the soft-max output, we adopt weight normalization (WN) [84] to keep the norm of each weight vector the same in the FC classifier layer. Besides, as indicated in prior studies, batch normalization (BN) [39] can reduce the internal dataset shift since different domains share the same mean (zero) and variance, which can be considered as the first-order and second-order moments. Based on these considerations, we form the frameworks of SHOT and SHOT++ as shown in Figs. 1-4.
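The architecture choices above can be sketched as below, assuming a recent torchvision; the 256-unit FC-BN bottleneck and the weight-normalized FC classifier follow Sec. 3.8 and Fig. 1, while the class names are ours.

```python
import torch.nn as nn
from torch.nn.utils import weight_norm
from torchvision import models

class BottleneckEncoder(nn.Module):
    """Backbone + bottleneck (FC followed by BN), i.e., the module g."""
    def __init__(self, bottleneck_dim: int = 256):
        super().__init__()
        resnet = models.resnet50(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.bottleneck = nn.Sequential(
            nn.Linear(resnet.fc.in_features, bottleneck_dim),
            nn.BatchNorm1d(bottleneck_dim))

    def forward(self, x):
        return self.bottleneck(self.backbone(x).flatten(1))

def make_hypothesis(bottleneck_dim: int = 256, num_classes: int = 65):
    """Weight-normalized task-specific FC classifier, i.e., the module h."""
    return weight_norm(nn.Linear(bottleneck_dim, num_classes))
```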

4 Experiments

4.1 Setup

To verify their versatility, we evaluate our methods in three unsupervised DA scenarios (i.e., closed-set, partial-set, and multi-source) and one semi-supervised DA scenario over several popular visual benchmarks, as introduced below.

Digits is a widely used DA benchmark focusing on digit recognition. We follow the protocol of [59] and utilize three representative subsets: SVHN (S), MNIST (M), and USPS (U). We train our model using the training sets of each domain and report the recognition results on the standard test set of the target domain.

Office [78] is a standard DA benchmark which contains three domains, i.e., Amazon (A), DSLR (D), and Webcam (W), and each domain includes 31 object classes in the office environment. Gong et al. [30] further extract 10 shared categories between Office and Caltech-256 (C) to form a new benchmark named Office-Caltech. Both Office and Office-Caltech are considered small-sized.

Office-Home [93] is a challenging medium-sized benchmark, which consists of four distinct domains, i.e., Artistic images (Ar), Clip Art (Cl), Product images (Pr), and Real-World images (Rw). There are totally 65 everyday object categories in each domain.

VisDA-C [74] is a challenging large-scale benchmark that mainly focuses on the 12-class synthesis-to-real object recognition task. The source domain contains 152 thousand synthetic (S) images generated by rendering 3D models, while the target domain has 55 thousand real (R) object images sampled from Microsoft COCO.

PACS [49] is a popular benchmark for multi-source domain adaptation. It contains four different domains, i.e., Art painting (A), Cartoon (C), Photo (P), and Sketch (S). There are totally 7 common categories in each domain.

Baseline methods. For vanilla unsupervised DA in digit recognition, we compare SHOT with ADDA [92], ADR [81], CDAN [59], CyCADA [37], CAT [23], SWD [46], and STAR [61]; for object recognition, we compare ours with DANN [26], DAN [58], SAFN [100], BSP [13], MDD [113], TransNorm [95], DSBN [11], BNM [20], and GVB-GD [21]. For partial-set DA tasks, we compare ours with IWAN [108], SAN [5], ETN [6], DRCN [51], RTNet [15], BAUS [56], and TSCDA [76]. For multi-source UDA, we compare ours with DCTN [99], MCD [83], WBN [62], M³SDA-β [72], and CMSS [102]. For SSDA, we mainly compare our methods with MME [79] and UODA [75]. Note that results are directly cited from published papers if we follow the same setting. 'Source-model-only' (also called 'src-only') denotes using the entire model learned from the source domain for target label prediction. 'Labeled-data-only' denotes using only labeled target data when learning the feature extractor. SHOT-IM is a special case of SHOT, where both self-supervised losses are ignored by letting $\beta = \gamma = 0$ in Eq. (10).

4.2 Implementation Details

Network architecture. For the digit recognition task, we use the same architectures as CDAN [59], namely, the classical LeNet-5 [45] network for USPS↔MNIST and a variant of LeNet for SVHN→MNIST. More network details can be found in Appendix A of [55]. For the object recognition task, we employ pre-trained ResNet-50 or ResNet-101 [36] models as the backbone, like [59, 23, 100, 72]. Following [26], we replace the original FC layer with a bottleneck layer (256 units) and a task-specific FC classifier layer, as in Fig. 1. Precisely, a BN layer is put after the FC layer inside the bottleneck, and weight normalization is utilized in the task-specific FC layer.

Network hyper-parameters. We train the whole network through back-propagation, and the newly added layers are trained with a learning rate 10 times that of the pre-trained layers (the backbone shown in Fig. 1). Concretely, we adopt mini-batch SGD with momentum 0.9, weight decay 1e-3, and learning rate 1e-2 for the new layers and the layers learned from scratch, for all experiments except for VisDA-C. We further adopt the same learning rate scheduler $\eta = \eta_0 \cdot (1 + 10p)^{-0.75}$ as [26, 59], where $p$ is the training progress changing from 0 to 1. Besides, we set the batch size to 64 for all tasks. We utilize the same trade-off parameters $(\alpha, \beta, \gamma)$ for all experiments except for Digits in Table I and SSDA in Table VIII. Concerning the labeling transfer strategy, only 'source-model-only++' for object recognition does not use the learned source model as initialization.
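The annealing schedule above can be sketched as follows; storing a per-group base rate `lr0` (10x for the new layers) is our convention, not the authors' code.

```python
def adjust_learning_rate(optimizer, p: float):
    """Set lr = lr0 * (1 + 10 p)^(-0.75), with p in [0, 1], cf. [26, 59]."""
    decay = (1 + 10 * p) ** (-0.75)
    for group in optimizer.param_groups:
        group["lr"] = group["lr0"] * decay
```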

For Digits, we train the best source hypothesis using the test set of the source dataset as validation. For other datasets without train-validation splits, we randomly specify a 0.9/0.1 split in the source dataset and generate the best source hypothesis based on the validation split. The maximum number of epochs for Digits, Office, Office-Home, VisDA-C, and Office-Caltech is empirically set to 30, 100, 50, 10, and 100, respectively. For learning in the target domain, we update the pseudo-labels epoch by epoch, and the maximum number of epochs is empirically set to 15. Regarding the second step in Section 3.4, we adopt the same learning setting as that of training SHOT and the default parameters within MixMatch [2]. We adopt a different trade-off setting only for Digits. Besides, we run our methods three times with different random seeds via PyTorch and report the mean accuracy. Note that we do not use any target augmentation such as the ten-crop ensemble [59] for evaluation.

Method (Source→Target) | S→M | U→M | M→U | Avg.
Source only [37] | 67.1±0.6 | 69.6±3.8 | 82.2±0.8 | 73.0
ADDA [92] | 76.0±1.8 | 90.1±0.8 | 89.4±0.2 | 85.2
ADR [81] | 95.0±1.9 | 93.1±1.3 | 93.2±2.5 | 93.8
CyCADA [37] | 90.4±0.4 | 96.5±0.1 | 95.6±0.4 | 94.2
CDAN [59] | 89.2 | 98.0 | 95.6 | 94.3
rRevGrad+CAT [23] | 98.8±0.0 | 96.0±0.9 | 94.0±0.7 | 96.3
SWD [46] | 98.9±0.1 | 97.1±0.1 | 98.1±0.1 | 98.0
STAR [61] | 98.8±0.1 | 97.7±0.1 | 97.8±0.1 | 98.1
Source-model-only | 67.1±0.9 | 87.8±2.3 | 89.6±0.4 | 81.5
SHOT-IM | 91.5±7.2 | 97.0±0.5 | 92.8±2.9 | 93.7
SHOT | 98.9±0.0 | 98.4±0.6 | 98.1±0.2 | 98.4
Source-model-only++ | 87.3±2.9 | 95.3±1.9 | 98.1±0.1 | 93.6
SHOT-IM++ | 93.8±6.4 | 93.0±0.8 | 97.6±0.2 | 96.1
SHOT++ | 98.9±0.1 | 98.5±0.8 | 98.5±0.1 | 98.6
Target-supervised (Oracle) | 99.4±0.0 | 99.4±0.0 | 98.0±0.1 | 98.9

TABLE I: Classification accuracies (%) on the Digits datasets for vanilla closed-set UDA. S: SVHN, M: MNIST, U: USPS.

4.3 Results of Digit Recognition (Vanilla Closed-set)

For digit recognition, we evaluate our methods on three popular closed-set unsupervised domain adaptation tasks, i.e., SVHN→MNIST, USPS→MNIST, and MNIST→USPS. The classification accuracies of our methods and prior work are reported in Table I. Obviously, SHOT obtains the best mean accuracy on each task and also outperforms prior work in terms of the average accuracy. Compared with the baseline method source-model-only, SHOT-IM always achieves better results, and SHOT performs better than SHOT-IM due to the contribution of self-supervised learning in the target domain. Taking the labeling transfer strategy into consideration, all three methods obtain enhanced classification results, indicating the effectiveness of intra-domain semi-supervised learning. It is also worth noting that SHOT++ even offers performance superior to the target-supervised result on MNIST→USPS. This may be because MNIST is much larger than USPS, which alleviates the domain shift well.

Method (Source→Target) | A→D | A→W | D→A | D→W | W→A | W→D | Avg.
ResNet-50 [36] | 68.9 | 68.4 | 62.5 | 96.7 | 60.7 | 99.3 | 76.1
DAN [58] | 78.6 | 80.5 | 63.6 | 97.1 | 62.8 | 99.6 | 80.4
DANN [26] | 79.7 | 82.0 | 68.2 | 96.9 | 67.4 | 99.1 | 82.2
SAFN+ENT [100] | 90.7 | 90.1 | 73.0 | 98.6 | 70.2 | 99.8 | 87.1
rRevGrad+CAT [23] | 90.8 | 94.4 | 72.2 | 98.0 | 70.2 | 100. | 87.6
CDAN [59] | 92.9 | 94.1 | 71.0 | 98.6 | 69.3 | 100. | 87.7
DSBN+MSTN [11] | 92.2 | 92.7 | 71.7 | 99.0 | 74.4 | 100. | 88.3
CDAN+BSP [13] | 93.0 | 93.3 | 73.6 | 98.2 | 72.6 | 100. | 88.5
CDAN+BNM [20] | 92.9 | 92.8 | 73.5 | 98.8 | 73.8 | 100. | 88.6
MDD [113] | 93.5 | 94.5 | 74.6 | 98.4 | 72.2 | 100. | 88.9
CDAN+TransNorm [95] | 94.0 | 95.7 | 73.4 | 98.7 | 74.2 | 100. | 89.3
GVB-GD [21] | 95.0 | 94.8 | 73.4 | 98.7 | 73.7 | 100. | 89.3
Source-model-only | 80.8 | 76.9 | 60.3 | 95.3 | 63.6 | 98.7 | 79.3
SHOT-IM | 90.6 | 91.2 | 72.4 | 98.3 | 71.3 | 99.9 | 87.3
SHOT | 93.5 | 90.0 | 75.5 | 98.7 | 75.1 | 99.9 | 88.8
Source-model-only++ | 89.2 | 86.0 | 68.8 | 96.4 | 70.3 | 96.7 | 84.6
SHOT-IM++ | 91.4 | 92.0 | 73.6 | 98.6 | 72.0 | 99.9 | 87.9
SHOT++ | 94.5 | 90.9 | 76.3 | 98.6 | 75.8 | 99.9 | 89.3

TABLE II: Classification accuracies (%) on the small-sized Office dataset for vanilla closed-set UDA (ResNet-50).

Method (Source→Target) | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg.
ResNet-50 [36] | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1
DANN [26] | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6
DAN [58] | 43.6 | 57.0 | 67.9 | 45.8 | 56.5 | 60.4 | 44.0 | 43.6 | 67.7 | 63.1 | 51.5 | 74.3 | 56.3
CDAN [59] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8
CDAN+BSP [13] | 52.0 | 68.6 | 76.1 | 58.0 | 70.3 | 70.2 | 58.6 | 50.2 | 77.6 | 72.2 | 59.3 | 81.9 | 66.3
SAFN [100] | 52.0 | 71.7 | 76.3 | 64.2 | 69.9 | 71.9 | 63.7 | 51.4 | 77.1 | 70.9 | 57.1 | 81.5 | 67.3
CDAN+TransNorm [95] | 50.2 | 71.4 | 77.4 | 59.3 | 72.7 | 73.1 | 61.0 | 53.1 | 79.5 | 71.9 | 59.0 | 82.9 | 67.6
MDD [113] | 54.9 | 73.7 | 77.8 | 60.0 | 71.4 | 71.8 | 61.2 | 53.6 | 78.1 | 72.5 | 60.2 | 82.3 | 68.1
CDAN+BNM [20] | 56.2 | 73.7 | 79.0 | 63.1 | 73.6 | 74.0 | 62.4 | 54.8 | 80.7 | 72.4 | 58.9 | 83.5 | 69.4
GVB-GD [21] | 57.0 | 74.7 | 79.8 | 64.6 | 74.1 | 74.6 | 65.2 | 55.1 | 81.0 | 74.6 | 59.7 | 84.3 | 70.4
Source-model-only | 44.6 | 67.3 | 74.8 | 52.7 | 62.7 | 64.8 | 53.0 | 40.6 | 73.2 | 65.3 | 45.4 | 78.0 | 60.2
SHOT-IM | 55.2 | 76.7 | 80.4 | 66.9 | 74.4 | 75.4 | 65.5 | 54.8 | 80.8 | 73.7 | 58.4 | 83.4 | 70.5
SHOT | 57.3 | 78.5 | 81.4 | 67.9 | 78.5 | 78.0 | 68.1 | 56.1 | 82.1 | 73.4 | 59.6 | 84.4 | 72.1
Source-model-only++ | 48.6 | 75.4 | 79.9 | 62.4 | 74.8 | 75.0 | 58.4 | 43.4 | 80.1 | 67.4 | 44.9 | 82.6 | 66.1
SHOT-IM++ | 56.3 | 77.6 | 81.3 | 67.7 | 75.5 | 76.7 | 66.4 | 55.7 | 81.7 | 74.1 | 59.0 | 84.5 | 71.4
SHOT++ | 58.1 | 79.5 | 82.4 | 68.6 | 79.9 | 79.3 | 68.6 | 57.2 | 83.0 | 74.3 | 60.4 | 85.1 | 73.0

TABLE III: Classification accuracies (%) on the medium-sized Office-Home dataset for vanilla closed-set UDA (ResNet-50).

Method (Synthesis→Real) | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Per-class
ResNet-101 [36] | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4
DANN [26] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4
DAN [58] | 87.1 | 63.0 | 76.5 | 42.0 | 90.3 | 42.9 | 85.9 | 53.1 | 49.7 | 36.3 | 85.8 | 20.7 | 61.1
ADR [81] | 94.2 | 48.5 | 84.0 | 72.9 | 90.1 | 74.2 | 92.6 | 72.5 | 80.8 | 61.8 | 82.2 | 28.8 | 73.5
CDAN [59] | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 38.0 | 73.9
CDAN+BSP [13] | 92.4 | 61.0 | 81.0 | 57.5 | 89.0 | 80.6 | 90.1 | 77.0 | 84.2 | 77.9 | 82.1 | 38.4 | 75.9
SAFN [100] | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1
SWD [46] | 90.8 | 82.5 | 81.7 | 70.5 | 91.7 | 69.5 | 86.3 | 77.5 | 87.4 | 63.6 | 85.6 | 29.2 | 76.4
DSBN+MSTN [11] | 94.7 | 86.7 | 76.0 | 72.0 | 95.2 | 75.1 | 87.9 | 81.3 | 91.1 | 68.9 | 88.3 | 45.5 | 80.2
STAR [61] | 95.0 | 84.0 | 84.6 | 73.0 | 91.6 | 91.8 | 85.9 | 78.4 | 94.4 | 84.7 | 87.0 | 42.2 | 82.7
Source-model-only | 60.9 | 21.6 | 50.9 | 67.6 | 65.8 | 6.3 | 82.2 | 23.2 | 57.3 | 30.6 | 84.6 | 8.0 | 46.6
SHOT-IM | 93.7 | 86.4 | 78.6 | 50.7 | 91.0 | 93.4 | 79.0 | 78.3 | 89.2 | 85.3 | 87.9 | 51.1 | 80.4
SHOT | 95.9 | 88.4 | 87.3 | 73.5 | 95.2 | 96.4 | 87.9 | 84.5 | 92.5 | 89.2 | 85.8 | 49.4 | 85.5
Source-model-only++ | 63.9 | 8.1 | 70.7 | 91.9 | 93.0 | 0.7 | 93.6 | 43.4 | 79.8 | 50.3 | 91.3 | 2.0 | 57.4
SHOT-IM++ | 96.9 | 88.8 | 88.9 | 70.5 | 96.2 | 98.7 | 90.4 | 79.7 | 95.5 | 88.1 | 92.9 | 34.6 | 85.1
SHOT++ | 95.8 | 88.3 | 90.5 | 84.7 | 97.9 | 98.0 | 92.9 | 85.3 | 97.5 | 92.9 | 93.9 | 32.3 | 87.5

TABLE IV: Classification accuracies (%) on the large-scale VisDA-C dataset for vanilla closed-set UDA (ResNet-101).

4.4 Results of Object Recognition (Vanilla Closed-set)

Next, we evaluate our methods on object recognition benchmarks including Office, Office-Home, and VisDA-C under the vanilla closed-set DA setting. As shown in Table II, SHOT performs the best on the two challenging tasks D→A and W→A, and obtains an average accuracy of 88.8% that is competitive with two state-of-the-art methods, MDD [113] and BNM [20]. Similar to the observations in Table I, the labeling transfer strategy is beneficial to cross-domain object recognition, and SHOT++ obtains the same mean accuracy as the previous state-of-the-art methods TransNorm [95] and GVB-GD [21]. The gap on the remaining tasks may be because SHOT needs a relatively large target domain to learn the target-specific module, while D and W as target domains are not big enough. Generally, SHOT obtains competitive performance even with no direct access to the source domain data.

As expected, on the medium-sized Office-Home dataset, our method SHOT++ significantly outperforms previously published state-of-the-art approaches, advancing the average accuracy from 70.4% in GVB-GD [21] to 73.0% in Table III. Besides, SHOT++ performs the best in 11 out of 12 separate tasks. For the transfer task Re→Ar, SHOT++ gets the second-best result, 74.3%, which is only lower than the best result of 74.6% from GVB-GD. Generally, the hypothesis transfer strategy alone already works well, as seen from the outperforming results of SHOT over prior methods, and the labeling transfer strategy further lifts the average accuracy by nearly 1 point.

Method | Per-class || Method | Per-class
ResNet-50 [36] | 52.4 || CDAN [59] | 70.0
DANN [26] | 57.4 || CDAN+TransNorm [95] | 71.4
DAN [58] | 61.6 || MDD [113] | 74.6
MCD [83] | 69.2 || GVB-GD [21] | 75.3
Source-model-only | 41.8 || Source-model-only++ | 48.9
SHOT-IM (α=0) | 64.3 || SHOT-IM++ (α=0) | 67.6
SHOT-IM | 73.8 || SHOT-IM++ | 75.4
SHOT (β=0) | 74.7 || SHOT++ (β=0) | 76.4
SHOT (γ=0) | 74.8 || SHOT++ (γ=0) | 77.0
SHOT | 76.7 || SHOT++ | 77.2

TABLE V: Classification accuracies (%) on the large-scale VisDA-C dataset (Synthesis→Real) for vanilla closed-set UDA (ResNet-50).

Methods (Office-Caltech, ResNet-101) | →A | →C | →D | →W | Avg.
ResNet-101 [36] | 88.7 | 85.4 | 98.2 | 99.1 | 92.9
DAN [58] | 91.6 | 89.2 | 99.1 | 99.5 | 94.8
DCTN [99] | 92.7 | 90.2 | 99.0 | 99.4 | 95.3
MCD [83] | 92.1 | 91.5 | 99.1 | 99.5 | 95.6
M³SDA-β [72] | 94.5 | 92.2 | 99.2 | 99.5 | 96.4
CMSS [102] | 96.0 | 93.7 | 99.3 | 99.6 | 97.2
Source-model-only | 95.4 | 93.7 | 98.9 | 98.3 | 96.6
SHOT-IM | 96.2 | 96.1 | 98.5 | 99.7 | 97.6
SHOT | 96.3 | 96.3 | 98.5 | 99.8 | 97.7
Source-model-only++ | 96.1 | 95.8 | 99.2 | 99.8 | 97.7
SHOT-IM++ | 96.3 | 96.4 | 98.9 | 99.9 | 97.9
SHOT++ | 96.3 | 96.4 | 99.4 | 99.8 | 98.0

Methods (PACS, ResNet-18) | →A | →C | →P | →S | Avg.
ResNet-18 [36] | 74.9±0.9 | 72.1±0.8 | 94.5±0.6 | 64.7±1.5 | 76.6
DANN [26] | 81.9±1.1 | 77.5±1.3 | 91.8±1.2 | 74.6±1.0 | 81.5
WBN [62] | 89.9±0.3 | 89.7±0.6 | 97.4±0.8 | 58.0±1.5 | 83.8
MCD [83] | 88.7±1.0 | 88.9±1.5 | 96.4±0.4 | 73.9±3.9 | 87.0
M³SDA-β [72] | 89.3±0.4 | 89.9±1.0 | 97.3±0.3 | 76.7±2.9 | 88.3
CMSS [102] | 88.6±0.4 | 90.4±0.8 | 96.9±0.3 | 82.0±0.6 | 89.5
Source-model-only | 68.7±2.9 | 50.8±1.6 | 95.0±0.2 | 51.7±2.1 | 66.6
SHOT-IM | 89.1±0.5 | 87.4±0.7 | 98.6±0.1 | 58.0±3.8 | 83.3
SHOT | 92.7±0.7 | 91.3±0.7 | 98.7±0.0 | 86.2±1.6 | 92.2
Source-model-only++ | 72.6±6.4 | 45.0±3.5 | 97.3±3.5 | 47.1±2.9 | 65.5
SHOT-IM++ | 91.1±0.3 | 89.1±0.3 | 98.7±0.1 | 59.1±3.5 | 84.5
SHOT++ | 95.2±0.2 | 93.3±0.3 | 98.7±0.1 | 89.1±1.9 | 94.1

TABLE VI: Classification accuracies (%) on Office-Caltech (ResNet-101) and PACS (ResNet-18) for multi-source UDA. [→X denotes transferring from the remaining 3 domains to X.]

Methods | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg. | R→S | S→R | Avg.
ResNet-50 [36] | 46.3 | 67.5 | 75.9 | 59.1 | 59.9 | 62.7 | 58.2 | 41.8 | 74.9 | 67.4 | 48.2 | 74.2 | 61.3 | 64.3 | 45.3 | 54.8
IWAN [108] | 53.9 | 54.5 | 78.1 | 61.3 | 48.0 | 63.3 | 54.2 | 52.0 | 81.3 | 76.5 | 56.8 | 82.9 | 63.6 | 71.3 | 48.6 | 60.0
SAN [5] | 44.4 | 68.7 | 74.6 | 67.5 | 65.0 | 77.8 | 59.8 | 44.7 | 80.1 | 72.2 | 50.2 | 78.7 | 65.3 | 69.7 | 49.9 | 59.8
DRCN [51] | 54.0 | 76.4 | 83.0 | 62.1 | 64.5 | 71.0 | 70.8 | 49.8 | 80.5 | 77.5 | 59.1 | 79.9 | 69.0 | 73.2 | 58.2 | 65.7
ETN [6] | 59.2 | 77.0 | 79.5 | 62.9 | 65.7 | 75.0 | 68.3 | 55.4 | 84.4 | 75.7 | 57.7 | 84.5 | 70.5 | - | - | -
SAFN [100] | 58.9 | 76.3 | 81.4 | 70.4 | 73.0 | 77.8 | 72.4 | 55.3 | 80.4 | 75.8 | 60.4 | 79.9 | 71.8 | - | - | -
RTNet [15] | 63.2 | 80.1 | 80.7 | 66.7 | 69.3 | 77.2 | 71.6 | 53.9 | 84.6 | 77.4 | 57.9 | 85.5 | 72.3 | - | - | -
BAUS [56] | 60.6 | 83.2 | 88.4 | 71.8 | 72.8 | 83.4 | 75.5 | 61.6 | 86.5 | 79.3 | 62.8 | 86.1 | 76.0 | - | - | -
TSCDA [76] | 63.6 | 82.5 | 89.6 | 73.7 | 73.9 | 81.4 | 75.4 | 61.6 | 87.9 | 83.6 | 67.2 | 88.8 | 77.4 | - | - | -
Source-model-only | 45.2 | 70.4 | 81.0 | 56.2 | 60.8 | 66.2 | 60.9 | 40.1 | 76.2 | 70.8 | 48.5 | 77.3 | 62.8 | 65.4 | 38.7 | 52.0
SHOT-IM | 57.9 | 83.5 | 88.8 | 72.4 | 74.0 | 78.9 | 76.1 | 60.6 | 90.1 | 81.9 | 68.2 | 88.5 | 76.7 | 74.3 | 63.2 | 67.3
SHOT | 65.9 | 85.5 | 92.3 | 77.6 | 76.8 | 87.0 | 78.4 | 65.9 | 89.4 | 81.3 | 67.3 | 86.8 | 79.5 | 78.4 | 68.7 | 73.6
Source-model-only++ | 49.3 | 78.0 | 86.4 | 65.7 | 68.4 | 76.4 | 68.3 | 44.0 | 83.6 | 74.8 | 51.6 | 82.2 | 69.1 | 71.7 | 50.2 | 60.9
SHOT-IM++ | 58.4 | 84.3 | 89.1 | 73.2 | 74.9 | 79.6 | 76.9 | 61.5 | 91.0 | 82.2 | 69.0 | 89.0 | 77.4 | 74.9 | 67.3 | 71.1
SHOT++ | 66.0 | 86.1 | 92.8 | 77.9 | 77.5 | 87.6 | 78.6 | 66.4 | 89.7 | 81.5 | 67.9 | 87.2 | 79.9 | 80.0 | 72.4 | 76.2

TABLE VII: Classification accuracies (%) on Office-Home and VisDA-C for partial-set UDA (ResNet-50). The first 13 result columns are Office-Home (65→25); the last three columns are VisDA-C (12→6), i.e., R→S, S→R, and their average.

For the large-scale synthesis-to-real VisDA-C dataset, we follow the protocol in prior works [81, 100] and employ the commonly favored ResNet-101 [36] backbone. As shown in Table IV, SHOT++ achieves the best per-class accuracy and wins on 6 out of the 12 classes. Even without the second stage, namely labeling transfer, SHOT can still obtain a promising per-class result of 85.5%, higher than the prior state of the art of 82.7% from STAR [61]. Carefully comparing SHOT with prior work, we find that SHOT performs well even on the most challenging class, 'truck'. Besides, with the intra-domain semi-supervised learning stage via MixMatch, the per-class results are improved, but the accuracy on the hard class 'truck' decreases. This may be because large errors in the labeled split affect the final results.

Following previous works [113, 21], we further adopt the ResNet-50 [36] backbone to validate the effectiveness of our methods. Results are shown in Table V. With the hypothesis transfer strategy, SHOT beats the state-of-the-art method GVB-GD by 1.4% in terms of per-class accuracy. Benefiting from the labeling transfer strategy, the per-class accuracy further grows from 76.7% (SHOT) to 77.2% (SHOT++) and again ranks the best for VisDA-C with the ResNet-50 backbone.

In Table V, we further fix the three balancing parameters (i.e., $\alpha$, $\beta$, $\gamma$) to zero in turn and investigate the effectiveness of each component within SHOT in Eq. (10), including $\mathcal{L}_{div}$, $\mathcal{L}_{pl}$, and $\mathcal{L}_{rot}$. Firstly, the advantage of SHOT-IM over SHOT-IM ($\alpha$=0) validates the effectiveness of the diversity term $\mathcal{L}_{div}$. Incorporated with the labeling transfer strategy, SHOT-IM++ also obtains a better per-class accuracy than its variant SHOT-IM++ ($\alpha$=0). Secondly, SHOT ($\beta$=0) performs worse than SHOT, indicating the effectiveness of the self-supervised pseudo-labeling term in Eq. (7). Thirdly, SHOT ($\gamma$=0) performs worse than SHOT, indicating the effectiveness of the self-supervised rotation prediction term in Eq. (9). The two latter conclusions can also be drawn by comparing SHOT ($\beta$=0) and SHOT ($\gamma$=0) with SHOT-IM. Also, it seems $\mathcal{L}_{pl}$ contributes more than $\mathcal{L}_{rot}$ within SHOT. Finally, the benefits of the labeling transfer strategy are easily validated by comparing the values in the second column with those in the fourth column.

SSDA (Source→Target) | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg.
S+T [79] | 37.5 | 63.6 | 69.5 | 51.4 | 65.9 | 64.5 | 52.0 | 37.0 | 71.6 | 61.2 | 39.5 | 75.3 | 57.4
DANN [26] | 44.4 | 64.3 | 68.9 | 52.3 | 65.3 | 64.2 | 51.3 | 45.9 | 72.7 | 62.7 | 52.0 | 75.7 | 60.0
MME [79] | 45.8 | 68.6 | 72.2 | 57.5 | 71.3 | 68.0 | 56.0 | 46.2 | 74.4 | 65.1 | 49.1 | 78.7 | 62.7
UODA [75] | 43.3 | 72.5 | 73.3 | 59.3 | 72.1 | 70.5 | 58.8 | 45.5 | 75.4 | 66.1 | 49.6 | 79.8 | 63.9
labeled-data-only | 40.5 | 66.4 | 69.3 | 52.9 | 67.6 | 65.1 | 52.0 | 38.2 | 70.6 | 61.5 | 42.8 | 75.4 | 58.5
SHOT-IM | 48.2 | 72.5 | 74.1 | 58.7 | 73.2 | 71.4 | 57.8 | 46.2 | 76.2 | 64.8 | 50.1 | 80.4 | 64.5
SHOT | 49.6 | 74.0 | 75.0 | 59.4 | 75.1 | 72.9 | 58.0 | 47.2 | 77.2 | 64.8 | 50.5 | 80.8 | 65.4
labeled-data-only++ | 41.9 | 72.2 | 71.9 | 57.3 | 74.5 | 70.1 | 56.2 | 39.4 | 75.2 | 62.9 | 43.4 | 79.1 | 62.0
SHOT-IM++ | 48.8 | 73.9 | 75.2 | 59.9 | 74.7 | 72.0 | 58.9 | 46.7 | 76.8 | 65.4 | 50.3 | 81.1 | 65.3
SHOT++ | 50.2 | 74.8 | 75.8 | 60.6 | 76.3 | 73.7 | 58.9 | 47.5 | 77.5 | 65.2 | 51.2 | 81.7 | 66.1

TABLE VIII: Classification accuracies (%) on the Office-Home dataset for semi-supervised DA (VGG-16, one-shot setting).

4.5 Results of Object Recognition beyond Vanilla UDA

Results of object recognition for MSDA. For the multi-source UDA setting, we adopt the protocol in [102] on Office-Caltech and PACS. For each dataset, we specify a target subset and use the other three subsets as source domains, forming a multi-source UDA task. Likewise, SHOT does not access the source data but is instead provided with multiple source models. The results of our methods and previously published state of the art are shown in Table VI. It is clear that SHOT achieves better results than CMSS [102] in 3 out of 4 tasks on Office-Caltech and in all 4 tasks on PACS. With the incorporation of labeling transfer, SHOT++ wins all the transfer tasks on both datasets. Besides, the gap between SHOT and SHOT-IM is relatively small on Office-Caltech, since the predictions learned by SHOT-IM are already good enough.

Results of object recognition for PDA. For the partial-set UDA setting, we follow the protocol in [51] on Office-Home and VisDA-C. In particular, 25 classes (the first 25 in alphabetical order) out of 65 classes are present in the target domain for Office-Home, while the first 6 classes in alphabetical order out of 12 classes are included in the target domain for VisDA-C. Results of our methods and previous state-of-the-art PDA methods [51, 15, 56, 76] are shown in Table VII. As explained in Section 3.6, $\alpha = 0$ is utilized in all of our methods here. Compared with previous methods, SHOT obtains the best average accuracy on both datasets, as before. Besides, SHOT again outperforms SHOT-IM by 2.8% and 6.3% in terms of average accuracy on the two datasets, and SHOT++ further improves the average accuracy from 79.5% to 79.9% and from 73.6% to 76.2%, respectively. Generally, both the hypothesis transfer strategy and the labeling transfer strategy are proven effective for the challenging PDA problem.

Results of object recognition for SSDA. For the semi-supervised domain adaptation setting, we follow the protocol in [79] on Office-Home under the one-shot setting where one labeled example per class is available in the target domain. As shown in Table VIII, SHOT outperforms UODA [75] and MME [79] in 10 out of 12 tasks and achieves the best average accuracy. Besides, SHOT is always superior to SHOT-IM, validating the effectiveness of self-supervision over the unlabeled target data. SHOT++ further improves the average accuracy from 65.4% to 66.1%, indicating the effectiveness of the labeling transfer strategy.

Methods    DRCN [51]    SAN [5]      IWAN [108]   ETN [6]
Accuracy   75.3         77.8         78.1         83.2 ± 0.2
Methods    Source-only  SHOT-IM      SHOT ()      SHOT
Accuracy   69.7         81.8 ± 0.5   83.1 ± 0.1   83.4 ± 0.3

TABLE IX: Accuracies (%) for ImageNet→Caltech. Methods in the first row utilize the training set of ImageNet besides the pre-trained ResNet-50 model.

Special case. One may wonder whether SHOT works when we cannot train the source model ourselves. To find out, we utilize the most popular off-the-shelf pre-trained ImageNet model, ResNet-50 [36], and consider a special PDA task (ImageNet→Caltech) to evaluate the effectiveness of SHOT under the same basic setting as [6]. As shown in Table IX, SHOT achieves a slightly higher mean accuracy than the prior state-of-the-art ETN [6] even without access to the source data. This shows that the proposed hypothesis transfer strategy is effective even when the source network architecture is not designed by ourselves.
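To make the setup concrete, the sketch below shows how an off-the-shelf ImageNet model can play the role of the source hypothesis in this setting; treating the final fully-connected layer as the frozen classifier and everything before it as the adaptable feature extractor is our reading of the protocol, not code from the paper:

    import torch.nn as nn
    from torchvision import models

    resnet = models.resnet50(pretrained=True)

    # The frozen source hypothesis: the 1000-way ImageNet classifier head.
    classifier = resnet.fc
    for p in classifier.parameters():
        p.requires_grad = False

    # The adaptable feature extractor: all layers up to the pooled features.
    feature_extractor = nn.Sequential(*list(resnet.children())[:-1], nn.Flatten())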

Fig. 5: Ablation study of batch normalization (BN) and weight normalization (WN) in the network for a 65-way classification UDA task Ar→Cl on Office-Home. ‘source’ denotes the accuracy on the source validation set, and ‘src-only’ is short for source-model-only. Best viewed in colors.
(a) value of the loss in Eq. (3) (b) value of the loss in Eq. (7) (c) value of the loss in Eq. (9) (d) Accuracy (%)
Fig. 6: Values of different loss functions and the accuracy during training for a 65-way classification UDA task Ar→Cl on Office-Home (15 epochs).

4.6 Model Analysis and Discussions

Ablation study on network components. As discussed in Section 3.8, we utilize batch normalization (BN) and weight normalization (WN) when training the source model and learning the target feature encoder. We report an ablation study on these network components in Fig. 5 to validate their contributions. First, using BN or WN lowers the accuracy on the source training set, which may aid generalization: in the second bin, high accuracy on the source test set corresponds to low accuracy on the training set. Then, the higher the accuracy the ‘src-only’ method obtains, the better the results SHOT and its variants achieve. Generally, both BN and WN are beneficial to domain adaptation, and the improvements brought by BN are larger than those brought by WN.
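For concreteness, a sketch of where BN and WN enter such a network (the bottleneck layout and the layer sizes below are illustrative assumptions):

    import torch.nn as nn
    from torch.nn.utils import weight_norm

    feat_dim, bottleneck_dim, num_classes = 2048, 256, 65

    # BN placed after the bottleneck projection of the feature encoder.
    bottleneck = nn.Sequential(
        nn.Linear(feat_dim, bottleneck_dim),
        nn.BatchNorm1d(bottleneck_dim),
    )

    # WN applied to the final classification layer (the hypothesis).
    classifier = weight_norm(nn.Linear(bottleneck_dim, num_classes), name="weight")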

(a) rotation prediction (b) target accuracy
Fig. 7: Accuracies of different variants during training for a 65-way classification UDA task Ar→Pr on Office-Home (15 epochs).
(a) accuracy of Ar→Cl (UDA) (b) accuracy of Ar→Cl (PDA)
Fig. 8: Performance sensitivity of the three balancing parameters within SHOT.

Training stability. We investigate the accuracy and the values of the three different objective functions during optimization in Fig. 6 on the UDA task Ar→Cl. The values of the losses in Eq. (3) and Eq. (7) quickly decrease and converge after nearly 8 epochs. The value of the rotation prediction loss in Eq. (9) also keeps decreasing, but at a slower speed. As shown in Fig. 6(d), the accuracy follows a very similar trend, i.e., it grows quickly and starts to converge after 6 epochs. Generally, the training procedure of SHOT is stable and effective.
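As a reference for what these monitored quantities are, a minimal sketch of the two information-maximization terms computed on a batch of softmax outputs (our own implementation, using the batch-level class mean for the diversity term):

    import torch

    def im_losses(probs, eps=1e-5):
        # probs: (B, C) softmax outputs for a target batch.
        # Entropy loss: pushes each prediction towards a one-hot vector.
        ent = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
        # Diversity loss: negative entropy of the mean prediction, so
        # minimizing it pushes the batch-level class mean towards uniform.
        mean_p = probs.mean(dim=0)
        div = (mean_p * torch.log(mean_p + eps)).sum()
        return ent, div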

Discussion on loss functions. To analyze the advantages of our proposed self-supervised loss functions in Eq. (7) and Eq. (9), we design a vanilla alternative for each function on the transfer task Ar→Pr. As shown in Fig. 7(a), the proposed relative rotation prediction objective works better than the vanilla variant in terms of rotation prediction accuracy. Besides, the comparison of semantic accuracy in Fig. 7(b) indicates that the proposed relative rotation prediction objective is also beneficial to UDA. Compared with SHOT w/ vanilla pseudo-labeling, SHOT always obtains better results throughout training, implying the superiority of the proposed self-supervised pseudo-labeling term.
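For intuition, the sketch below generates 4-way rotation inputs and labels with torch.rot90; the relative variant would feed the predictor features of the (original, rotated) pair rather than the rotated image alone, which we only note in a comment to keep the sketch short:

    import torch

    def make_rotation_batch(x):
        # x: (B, C, H, W) target images; sample one rotation per image.
        labels = torch.randint(0, 4, (x.size(0),))
        # Rotate each image by k * 90 degrees in the spatial plane.
        rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                               for img, k in zip(x, labels)])
        # Relative variant: predict k from features of (original, rotated).
        return rotated, labels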

Parameter sensitivity. To better understand the effects of the three balancing parameters, we test their performance sensitivity on the UDA task Ar→Cl on Office-Home and show the results in Fig. 8(a). The accuracies around the default parameter values are not sensitive. Besides, we study the sensitivity of the threshold parameter for the PDA task Ar→Cl on Office-Home in Fig. 8(b); the accuracies around the chosen threshold are likewise not sensitive. Generally, the parameters within the proposed method, i.e., SHOT, are not sensitive.

Fig. 9: Images of the source domain, the low-entropy target split, and the high-entropy target split for Ar→Cl on Office-Home.
(a) Source-model-only (b) SHOT-IM (c) SHOT
Fig. 10: The t-SNE feature visualizations for a 65-way classification UDA task Ar→Cl on Office-Home. Circles in blue denote unseen source data and stars in red denote target data. Best viewed in colors.
(a) Source-model-only (b) SHOT-IM (c) SHOT
Fig. 11: The t-SNE feature visualizations for a 65-way classification UDA task Ar→Cl on Office-Home. For better illustration, we choose features from the first 10 classes of each domain, and different colors denote different classes. Best viewed in colors.

Qualitative study. We randomly select samples from the source domain, the low-entropy target split, and the high-entropy target split to provide intuitive insights into the labeling transfer strategy. In particular, we pick two images from each of three representative classes, i.e., ‘backpack’, ‘bike’, and ‘bucket’, for the UDA task Ar→Cl on Office-Home, and show them in Fig. 9. The proposed strategy well separates the easy samples from the hard samples in the target domain. Besides, for the hard samples in the high-entropy split, the easy samples in the low-entropy target split are more trustworthy references than the source samples, which makes the proposed labeling transfer strategy understandable and effective.
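A sketch of the entropy-based split itself, where we threshold at the median entropy purely for illustration (the actual split criterion may differ):

    import torch

    def split_by_entropy(probs):
        # probs: (N, C) predictions over the whole target set.
        ent = -(probs * torch.log(probs + 1e-5)).sum(dim=1)
        thresh = ent.median()  # illustrative threshold
        low = (ent <= thresh).nonzero().flatten()   # confident, "easy" split
        high = (ent > thresh).nonzero().flatten()   # uncertain, "hard" split
        return low, high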

Feature visualization. We provide t-SNE visualizations (https://lvdmaaten.github.io/tsne/) of the features learned by Source-model-only, SHOT-IM, and SHOT for the UDA task Ar→Cl on Office-Home in Fig. 10 and Fig. 11. As expected, both SHOT-IM and SHOT help align the target features with the source features in Fig. 10. Looking carefully at the semantic labels in Fig. 11, we find that SHOT outperforms SHOT-IM by semantically aligning features from different domains.
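Such plots can be reproduced with scikit-learn's TSNE on the extracted features; the random arrays below are stand-ins for real encoder features and domain labels:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    feats = np.random.randn(200, 256).astype(np.float32)  # stand-in features
    domain = np.repeat([0, 1], 100)                       # 0 = source, 1 = target

    emb = TSNE(n_components=2, init="pca").fit_transform(feats)
    plt.scatter(emb[domain == 0, 0], emb[domain == 0, 1], c="blue", marker="o", s=6)
    plt.scatter(emb[domain == 1, 0], emb[domain == 1, 1], c="red", marker="*", s=6)
    plt.show()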

5 Conclusion

In this paper, we have proposed a generic representation learning framework called source hypothesis transfer (SHOT) for source data-absent unsupervised domain adaptation. SHOT merely needs the well-trained source model and makes unsupervised domain adaptation feasible without access to the source data, which may be private or decentralized. Specifically, SHOT learns a target-specific feature extraction module to fit the source hypothesis by exploiting information maximization and self-supervised learning. We further present a labeling transfer strategy, which exploits the intra-domain information via a semi-supervised algorithm, and apply it to enhance SHOT to SHOT++. Experiments on both digit classification and object recognition verify that SHOT and SHOT++ achieve results comparable to or even better than the state of the art for three different unsupervised domain adaptation scenarios as well as the semi-supervised domain adaptation problem. In the future, we plan to apply the proposed methods to other visual tasks like semantic segmentation [111] and object detection [14].

References

  • [1] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan (2010) A theory of learning from different domains. Machine Learning 79 (1-2), pp. 151–175. Cited by: §1, §2.1.
  • [2] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel (2019) Mixmatch: a holistic approach to semi-supervised learning. In Proc. NeurIPS, pp. 5049–5059. Cited by: §1, §2.4, Fig. 4, §3.4, §4.2.
  • [3] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan (2016) Domain separation networks. In Proc. NeurIPS, pp. 343–351. Cited by: §2.1.
  • [4] Z. Cao, M. Long, J. Wang, and M. I. Jordan (2018) Partial transfer learning with selective adversarial networks. In Proc. CVPR, pp. 2724–2732. Cited by: §1, §2.1.
  • [5] Z. Cao, M. Long, J. Wang, and M. I. Jordan (2018) Partial transfer learning with selective adversarial networks. In Proc. CVPR, pp. 2724–2732. Cited by: §3.6, §4.1, TABLE VII, TABLE IX.
  • [6] Z. Cao, K. You, M. Long, J. Wang, and Q. Yang (2019) Learning to transfer examples for partial domain adaptation. In Proc. CVPR, pp. 2985–2994. Cited by: §4.1, §4.5, TABLE VII, TABLE IX.
  • [7] F. M. Cariucci, L. Porzi, B. Caputo, E. Ricci, and S. R. Bulo (2017) Autodial: automatic domain alignment layers. In Proc. ICCV, pp. 5077–5085. Cited by: §2.1.
  • [8] F. M. Carlucci, A. D’Innocente, S. Bucci, B. Caputo, and T. Tommasi (2019) Domain generalization by solving jigsaw puzzles. In Proc. CVPR, pp. 2229–2238. Cited by: §2.3.
  • [9] M. Caron, P. Bojanowski, A. Joulin, and M. Douze (2018) Deep clustering for unsupervised learning of visual features. In Proc. ECCV, pp. 132–149. Cited by: §2.3.
  • [10] M. Caron, P. Bojanowski, J. Mairal, and A. Joulin (2019) Unsupervised pre-training of image features on non-curated data. In Proc. ICCV, pp. 2959–2968. Cited by: §2.3.
  • [11] W. Chang, T. You, S. Seo, S. Kwak, and B. Han (2019) Domain-specific batch normalization for unsupervised domain adaptation. In Proc. CVPR, pp. 7354–7362. Cited by: §1, §3.2, §4.1, TABLE II, TABLE IV.
  • [12] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In Proc. ICML, Cited by: §2.3.
  • [13] X. Chen, S. Wang, M. Long, and J. Wang (2019) Transferability vs. discriminability: batch spectral penalization for adversarial domain adaptation. In Proc. ICML, pp. 1081–1090. Cited by: §4.1, TABLE II, TABLE III, TABLE IV.
  • [14] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool (2018) Domain adaptive faster r-cnn for object detection in the wild. In Proc. CVPR, pp. 3339–3348. Cited by: §1, §5.
  • [15] Z. Chen, C. Chen, Z. Cheng, B. Jiang, K. Fang, and X. Jin (2020) Selective transfer with reinforced transfer network for partial domain adaptation. In Proc. CVPR, pp. 12706–12714. Cited by: §4.1, §4.5, TABLE VII.
  • [16] B. Chidlovskii, S. Clinchant, and G. Csurka (2016) Domain adaptation in the absence of source domain data. In Proc. KDD, pp. 451–460. Cited by: §2.2.
  • [17] S. Cicek and S. Soatto (2019) Unsupervised domain adaptation via regularized conditional alignment. In Proc. ICCV, pp. 1416–1425. Cited by: §2.1.
  • [18] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy (2017) Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (9), pp. 1853–1865. Cited by: §2.1.
  • [19] G. Csurka (2017) A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pp. 1–35. Cited by: §1, §2.1.
  • [20] S. Cui, S. Wang, J. Zhuo, L. Li, Q. Huang, and Q. Tian (2020) Towards discriminability and diversity: batch nuclear-norm maximization under label insufficient situations. In Proc. CVPR, pp. 3941–3950. Cited by: §4.1, §4.4, TABLE II, TABLE III.
  • [21] S. Cui, S. Wang, J. Zhuo, C. Su, Q. Huang, and Q. Tian (2020) Gradually vanishing bridge for adversarial domain adaptation. In Proc. CVPR, pp. 12455–12464. Cited by: §4.1, §4.4, §4.4, §4.4, TABLE II, TABLE III, TABLE V.
  • [22] W. Deng, L. Zheng, Q. Ye, G. Kang, Y. Yang, and J. Jiao (2018) Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In Proc. CVPR, pp. 994–1003. Cited by: §1.
  • [23] Z. Deng, Y. Luo, and J. Zhu (2019) Cluster alignment with a teacher for unsupervised domain adaptation. In Proc. ICCV, pp. 9944–9953. Cited by: §4.1, §4.2, TABLE I, TABLE II.
  • [24] C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In Proc. ICCV, pp. 1422–1430. Cited by: §2.3.
  • [25] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars (2013) Unsupervised visual domain adaptation using subspace alignment. In Proc. ICCV, pp. 2960–2967. Cited by: §2.1.
  • [26] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In Proc. ICML, pp. 1180–1189. Cited by: §1, §2.1, §3.2, §4.1, §4.2, §4.2, TABLE II, TABLE III, TABLE IV, TABLE V, TABLE VI, TABLE VIII.
  • [27] M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li (2016) Deep reconstruction-classification networks for unsupervised domain adaptation. In Proc. ECCV, pp. 597–613. Cited by: §2.1.
  • [28] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In Proc. ICLR, Cited by: §1, §2.3, §3.3.
  • [29] X. Glorot, A. Bordes, and Y. Bengio (2011) Domain adaptation for large-scale sentiment classification: a deep learning approach. In Proc. ICML, pp. 513–520. Cited by: §1.
  • [30] B. Gong, Y. Shi, F. Sha, and K. Grauman (2012) Geodesic flow kernel for unsupervised domain adaptation. In Proc. CVPR, pp. 2066–2073. Cited by: §4.1.
  • [31] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Proc. NeurIPS, pp. 2672–2680. Cited by: §1, §2.1.
  • [32] R. Gopalan, R. Li, and R. Chellappa (2013) Unsupervised adaptation across domain shifts by generating intermediate data representations. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (11), pp. 2288–2302. Cited by: §2.1, §2.1.
  • [33] Y. Grandvalet and Y. Bengio (2005) Semi-supervised learning by entropy minimization. In Proc. NeurIPS, pp. 529–536. Cited by: §2.4, §3.2.
  • [34] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola (2007) A kernel method for the two-sample-problem. In Proc. NeurIPS, pp. 513–520. Cited by: §1.
  • [35] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2020) Momentum contrast for unsupervised visual representation learning. In Proc. CVPR, pp. 9729–9738. Cited by: §2.3.
  • [36] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proc. CVPR, pp. 770–778. Cited by: §4.2, §4.4, §4.4, §4.5, TABLE II, TABLE III, TABLE IV, TABLE V, TABLE VI, TABLE VII.
  • [37] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell (2018) CyCADA: cycle-consistent adversarial domain adaptation. In Proc. ICML, pp. 1989–1998. Cited by: §1, §2.1, §4.1, TABLE I.
  • [38] W. Hu, T. Miyato, S. Tokui, E. Matsumoto, and M. Sugiyama (2017) Learning discrete representations via information maximizing self-augmented training. In Proc. ICML, pp. 1158–1167. Cited by: §1, §3.2.
  • [39] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proc. ICML, pp. 448–456. Cited by: §3.8.
  • [40] L. Jing and Y. Tian (2020) Self-supervised visual feature learning with deep neural networks: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.3.
  • [41] G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann (2019) Contrastive adaptation network for unsupervised domain adaptation. In Proc. CVPR, pp. 4893–4902. Cited by: §2.1.
  • [42] A. Krause, P. Perona, and R. G. Gomes (2010) Discriminative clustering by regularized information maximization. In Proc. NeurIPS, Cited by: §3.2.
  • [43] V. K. Kurmi and V. P. Namboodiri (2019) Looking back at labels: a class based domain adaptation technique. In Proc. IJCNN, pp. 1–8. Cited by: §2.1.
  • [44] I. Kuzborskij and F. Orabona (2013) Stability and hypothesis transfer learning. In Proc. ICML, pp. 942–950. Cited by: §1, §2.2.
  • [45] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §4.2.
  • [46] C. Lee, T. Batra, M. H. Baig, and D. Ulbricht (2019) Sliced wasserstein discrepancy for unsupervised domain adaptation. In Proc. CVPR, pp. 10285–10295. Cited by: §4.1, TABLE I, TABLE IV.
  • [47] D. Lee (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, Cited by: §2.4, §3.3.
  • [48] S. Lee, D. Kim, N. Kim, and S. Jeong (2019) Drop to adapt: learning discriminative features for unsupervised domain adaptation. In Proc. ICCV, pp. 91–100. Cited by: §2.1.
  • [49] D. Li, Y. Yang, Y. Song, and T. M. Hospedales (2017) Deeper, broader and artier domain generalization. In Proc. ICCV, pp. 5542–5550. Cited by: §1, §4.1.
  • [50] R. Li, Q. Jiao, W. Cao, H. Wong, and S. Wu (2020) Model adaptation: unsupervised domain adaptation without source data. In Proc. CVPR, pp. 9641–9650. Cited by: §2.2.
  • [51] S. Li, C. H. Liu, Q. Lin, Q. Wen, L. Su, G. Huang, and Z. Ding (2020) Deep residual correction network for partial domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.1, §4.5, TABLE VII, TABLE IX.
  • [52] W. Li, F. Li, Y. Luo, and P. Wang (2020) Deep domain adaptive object detection: a survey. arXiv preprint arXiv:2002.06797. Cited by: §1.
  • [53] J. Liang, R. He, Z. Sun, and T. Tan (2018) Aggregating randomized clustering-promoting invariant projections for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (5), pp. 1027–1042. Cited by: §2.1.
  • [54] J. Liang, R. He, Z. Sun, and T. Tan (2019) Distant supervised centroid shift: a simple and efficient approach to visual domain adaptation. In Proc. CVPR, pp. 2975–2984. Cited by: §2.2, §3.3.
  • [55] J. Liang, D. Hu, and J. Feng (2020) Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In Proc. ICML, Cited by: §1, §1, §4.2.
  • [56] J. Liang, Y. Wang, D. Hu, R. He, and J. Feng (2020) A balanced and uncertainty-aware approach for partial domain adaptation. In Proc. ECCV, Cited by: §4.1, §4.5, TABLE VII.
  • [57] M. Long, Y. Cao, Z. Cao, J. Wang, and M. I. Jordan (2018) Transferable representation learning with deep adaptation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (12), pp. 3071–3085. Cited by: §2.1, §2.4.
  • [58] M. Long, Y. Cao, J. Wang, and M. Jordan (2015) Learning transferable features with deep adaptation networks. In Proc. ICML, pp. 97–105. Cited by: §1, §1, §2.1, §3.2, §4.1, TABLE II, TABLE III, TABLE IV, TABLE V, TABLE VI.
  • [59] M. Long, Z. Cao, J. Wang, and M. I. Jordan (2018) Conditional adversarial domain adaptation. In Proc. NeurIPS, pp. 1640–1650. Cited by: §2.1, §2.4, §4.1, §4.1, §4.2, §4.2, §4.2, TABLE I, TABLE II, TABLE III, TABLE IV, TABLE V.
  • [60] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu (2013) Transfer feature learning with joint distribution adaptation. In Proc. ICCV, pp. 2200–2207. Cited by: §2.1, §3.8.
  • [61] Z. Lu, Y. Yang, X. Zhu, C. Liu, Y. Song, and T. Xiang (2020) Stochastic classifiers for unsupervised domain adaptation. In Proc. CVPR, pp. 9111–9120. Cited by: §4.1, §4.4, TABLE I, TABLE IV.
  • [62] M. Mancini, L. Porzi, S. Rota Bulò, B. Caputo, and E. Ricci (2018) Boosting domain adaptation by discovering latent domains. In Proc. CVPR, pp. 3771–3780. Cited by: §4.1, TABLE VI.
  • [63] Y. Mansour, M. Mohri, and A. Rostamizadeh (2009) Domain adaptation with multiple sources. In Proc. NeurIPS, pp. 1041–1048. Cited by: §2.2.
  • [64] R. Müller, S. Kornblith, and G. E. Hinton (2019) When does label smoothing help?. In Proc. NeurIPS, pp. 4694–4703. Cited by: §3.1.
  • [65] Z. Murez, S. Kolouri, D. Kriegman, R. Ramamoorthi, and K. Kim (2018) Image to image translation for domain adaptation. In Proc. CVPR, pp. 4500–4509. Cited by: §2.1.
  • [66] M. Noroozi and P. Favaro (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In Proc. ECCV, pp. 69–84. Cited by: §2.3.
  • [67] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang (2010) Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks 22 (2), pp. 199–210. Cited by: §2.1.
  • [68] S. J. Pan and Q. Yang (2009) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359. Cited by: §1, §2.1.
  • [69] P. Panareda Busto and J. Gall (2017) Open set domain adaptation. In Proc. ICCV, pp. 754–763. Cited by: §2.1.
  • [70] Z. Pei, Z. Cao, M. Long, and J. Wang (2018) Multi-adversarial domain adaptation. In Proc. AAAI, pp. 3934–3941. Cited by: §2.1.
  • [71] M. Peng, Q. Zhang, Y. Jiang, and X. Huang (2018) Cross-domain sentiment classification with target domain specific information. In Proc. ACL, pp. 2505–2513. Cited by: §1.
  • [72] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang (2019) Moment matching for multi-source domain adaptation. In Proc. ICCV, pp. 1406–1415. Cited by: §1, §1, §3.5, §4.1, §4.2, TABLE VI.
  • [73] X. Peng, Z. Huang, Y. Zhu, and K. Saenko (2020) Federated adversarial domain adaptation. In Proc. ICLR, Cited by: §2.2.
  • [74] X. Peng, B. Usman, N. Kaushik, J. Hoffman, D. Wang, and K. Saenko (2017) Visda: the visual domain adaptation challenge. arXiv preprint arXiv:1710.06924. Cited by: §4.1.
  • [75] C. Qin, L. Wang, Q. Ma, Y. Yin, H. Wang, and Y. Fu (2020) Opposite structure learning for semi-supervised domain adaptation. arXiv preprint arXiv:2002.02545. Cited by: §4.1, §4.5, TABLE VIII.
  • [76] C. Ren, P. Ge, P. Yang, and S. Yan (2020) Learning target-domain-specific classifier for partial domain adaptation. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §4.1, §4.5,