Curriculum Manager for Source Selection in Multi-Source Domain Adaptation

07/02/2020 ∙ by Luyu Yang, et al.

The performance of Multi-Source Unsupervised Domain Adaptation depends significantly on the effectiveness of transfer from labeled source domain samples. In this paper, we propose an adversarial agent that learns a dynamic curriculum for source samples, called Curriculum Manager for Source Selection (CMSS). The Curriculum Manager, an independent network module, constantly updates the curriculum during training, and iteratively learns which domains or samples are best suited for aligning to the target. The intuition is to force the Curriculum Manager to constantly re-measure the transferability of latent domains over time in order to adversarially raise the error rate of the domain discriminator. CMSS does not require any knowledge of the domain labels, yet it outperforms other methods on four well-known benchmarks by significant margins. We also provide interpretable results that shed light on the proposed method.


1 Introduction

Training deep neural networks requires datasets with rich annotations that are often time-consuming to obtain. Previous proposals to mitigate this issue range from unsupervised [21, 18, 42, 29, 8, 30] and self-supervised [35, 17, 36, 41] to low-shot learning [28, 33, 37, 44]. Unsupervised Domain Adaptation (UDA), first introduced in [15], shed valuable insight on how adversarial training can be used to get around the problem of expensive manual annotations. UDA aims to preserve performance on an unlabeled dataset (target) using a model trained on a label-rich dataset (source) by making optimal use of the representations learned from the source.

Intuitively, one would expect that having more labeled samples in the source domain is beneficial. However, more labeled samples do not guarantee better transfer, since the source will inadvertently encompass a larger variety of domains. While the goal in such a Multi-Source Unsupervised Domain Adaptation (MS-UDA) setting is to learn a common representation for both source and target, enforcing each source domain distribution to exactly match the target may increase the training difficulty and generate ambiguous representations near the decision boundary, potentially resulting in negative transfer. Moreover, for practical purposes, we would expect the data source to be largely unconstrained, with neither the number of domains nor the domain labels known. A good example is datasets collected from the Internet, where images come from an unknown, but potentially massive, set of users.

To address the MS-UDA problem, we propose an adversarial agent that learns a dynamic curriculum [4] for multiple source domains, named Curriculum Manager for Source Selection (CMSS). More specifically, a curriculum that is constantly updated during training learns which domains or samples are best suited for aligning to the target distribution. CMSS is a module independent of the feature network, and is trained by maximizing the error of the discriminator in order to weight the gradient-reversal signal sent back to the feature network. In the proposed adversarial interplay with the discriminator, the Curriculum Manager is forced to constantly re-measure the transferability of latent domains over time to achieve a higher discriminator error. This weighting of the source data is modulated over the entire course of training. In effect, latent domains with different transferability to the target distribution gradually converge to different levels of importance, without any need for a domain-partitioning prior or clustering.

We attribute the following contributions to this work:

  • We propose a novel adversarial training method for the MS-UDA problem. Our method does not assume any knowledge of the domain labels or the number of domains.

  • Our method achieves state-of-the-art results in extensive experiments conducted on four well-known benchmarks, including the large-scale DomainNet (∼0.6 million images).

  • We obtain interpretable results showing that CMSS is in effect a form of curriculum learning, which is highly effective for MS-UDA and positively differentiates our approach from the previous state-of-the-art.

Figure 1: Illustration of CMSS during training. All training samples are passed through the feature network F. CMSS prefers samples with better transferability to match the target, and re-measures transferability at each iteration to keep up with the discriminator. At the end of training, after the majority of samples are aligned, the CMSS weights tend to be similar across source samples.

2 Related Work

UDA is an actively studied area of research in machine learning and computer vision. Since the seminal contributions of Ben-David et al. [2, 1], several techniques have been proposed for learning representations invariant to domain shift [23, 11, 25, 10, 45]. In this section, we review some recent methods that are most related to our work.

Figure 2: Architecture comparison of left: DANN [15], middle: IWAN [43], and right: the proposed method. Red dotted lines indicate backward passes. (F: feature extractor, C: classifier, D: domain discriminator, GRL: gradient reversal layer, CM: Curriculum Manager, L_D: domain loss of Eq. (1), L_D^w: weighted domain loss of Eq. (3))

Multi-Source Unsupervised Domain Adaptation (MS-UDA) assumes that the source training examples are inherently multi-modal. The source domains contain labeled samples while the target domain contains unlabeled samples [22, 32, 27, 15, 46]. In [32], adaptation was performed by aligning the moments of feature distributions between each source-target pair. Deep Cocktail Network (DCTN) [40] considered the more realistic case of category shift in addition to domain shift, and proposed a multi-way domain adversarial classifier and category classifier to generate a combined representation for the target.

Because domain labels are hard to obtain in real-world datasets, latent domain discovery [27], a technique that alleviates the need for explicit domain label annotation, has many practical applications. Xiong et al. [39] proposed to use square-loss mutual-information-based clustering with a category distribution prior to infer the domain assignment for images. Mancini et al. [27] used a domain prediction branch to guide domain discovery using multiple batch-norm layers.

Domain-Adversarial Training has been widely used [9, 7, 31] since the Domain-Adversarial Neural Network (DANN) [15] was proposed. The core idea is to train a discriminator network to discriminate source features from target features, and to train the feature network to fool the discriminator. Zhao et al. [46] first proposed to generalize DANN to the multi-source setting, and provided theoretical insights on multi-domain adversarial bounds. Maximum Classifier Discrepancy (MCD) [33] is another powerful technique [32, 19, 38, 24] for performing adaptation in an adversarial manner using two classifiers. The method first updates the classifiers to maximize the discrepancy between their predictions on target samples, and then minimizes the discrepancy while updating the feature generator.
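To make the discrepancy term concrete, here is a minimal sketch assuming the common choice of mean absolute difference between the two classifiers' class-probability vectors (the function name is illustrative, not from the MCD codebase):

```python
def classifier_discrepancy(p1, p2):
    """Discrepancy between two classifiers' predicted class-probability
    vectors on a target sample: mean absolute difference over classes,
    a sketch of the L1 discrepancy used in MCD-style training."""
    assert len(p1) == len(p2)
    return sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)

# Identical predictions give zero discrepancy; for two classes,
# opposite one-hot predictions give the maximal value of 1.0.
```

The classifiers are updated to maximize this quantity on target samples, after which the feature generator is updated to minimize it.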

Domain Selection and Weighting: Previous methods that employed sample selection and sample weighting for domain adaptation include [13, 14, 12]. Duan et al. [14] proposed a domain selection machine that leverages a large number of loosely labeled web images from different sources, adopting a set of base classifiers to predict labels for the target domain together with a domain-dependent regularizer based on a smoothness assumption. Bhatt et al. [5] proposed to adapt iteratively by selecting the best sources that learn shared representations faster. Chen et al. [9] used a hand-crafted re-weighting vector so that the source domain label distribution is similar to the unknown target label distribution. Mancini et al. [26] modeled the domain dependency using a graph and utilized auxiliary metadata for predictive domain adaptation. Zhang et al. [43] employed an extra domain classifier that gives the probability of a sample coming from the source domain: the higher the confidence from this extra classifier, the more easily the sample can be discriminated from the target domain, in which case its importance is reduced accordingly.

Curriculum for Domain Adaptation aims at an adaptive strategy over time to improve the effectiveness of domain transfer. The curriculum can be hand-crafted or learned. Shu et al. [34] designed a curriculum that combines the classification loss and the discriminator's loss into a weighting strategy to eliminate corrupted samples in the source domain. Another work with similar motivation is [8], in which Chen et al. proposed to use per-category prototypes to measure the prediction confidence of target samples, with a manually designed threshold making a binary decision to select a subset of target samples for further alignment. Kurmi et al. [20] used a curriculum-based dropout discriminator to simulate the gradual increase of sample variance.

3 Preliminaries

3.0.1 Task Formulation:

In multi-source unsupervised domain adaptation (MS-UDA), we are given an input dataset that contains labeled samples from multiple source domains. In this paper, we focus on classification problems with the set of labels {1, …, K}, where K is the number of classes. Each sample has an associated domain label d ∈ {1, …, N}, where N is the number of source domains. In this work, we assume source domain label information is not known a priori, i.e., neither the number of source domains nor the per-sample source domain label is known. In addition, given an unlabeled target dataset X_T, the goal of MS-UDA is to train models using the multiple source domains (X_S) and the target domain (X_T), and to improve performance on the target test set.

3.0.2 Domain-Adversarial training:

First, we discuss the domain-adversarial training formulation from [15], which is the basis we extend to MS-UDA. The core idea of domain-adversarial training is to minimize the distributional distance between source and target feature distributions, posed as an adversarial game. The model has a feature extractor, a classifier, and a domain discriminator. The classifier takes the feature from the feature extractor and classifies it into one of K classes. The discriminator is optimized to discriminate source features from target features. The feature network, on the other hand, is trained to fool the discriminator while at the same time achieving good classification accuracy.

More formally, let F denote the feature extraction network, C the classifier, and D the domain discriminator. Here, θ_F, θ_C, and θ_D are the parameters associated with the feature extractor, classifier, and domain discriminator respectively. The model is trained using the following objective function:

min_{θ_F, θ_C} max_{θ_D}  L_cls(θ_F, θ_C) − λ L_D(θ_F, θ_D)    (1)

Here, L_cls is the cross-entropy loss in the source domain, L_cls(θ_F, θ_C) = −E_{(x,y)∼S}[ y⊤ log C(F(x)) ] (with y being the one-hot encoding of the label), and L_D is the discriminator loss that discriminates source samples from the target, L_D(θ_F, θ_D) = −E_{x∼S}[ log D(F(x)) ] − E_{x∼T}[ log(1 − D(F(x))) ]. Note that both these loss functions use samples from all source domains.
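The gradient reversal layer (GRL) that couples these losses acts as the identity in the forward pass and multiplies the incoming gradient by −λ in the backward pass. A plain-Python toy of that behavior (the actual implementation would subclass `torch.autograd.Function`; the class and method names here are illustrative):

```python
class GradReverse:
    """Toy gradient reversal layer: identity forward, gradient
    scaled by -lam backward. A conceptual sketch, not autograd."""

    def __init__(self, lam):
        self.lam = lam

    def forward(self, x):
        return x  # identity on the forward pass

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient
```

Because the reversed gradient flows into the feature extractor, minimizing the discriminator loss for D simultaneously pushes F to confuse D, which is exactly the min-max of Eq. (1).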

In principle, if domain labels are available, there are two possible choices for the domain discriminator: (1) N separate domain discriminators can be trained, each one discriminating one of the source domains from the target [15], or (2) a single domain discriminator can be trained as an (N+1)-way classifier to classify input samples as belonging to one of the N source domains or the target [46]. However, in our setup, domain labels are unknown and, therefore, these formulations cannot be used.

4 CMSS: Curriculum Manager for Source Selection

For a source domain that is inherently multi-modal, our goal is to learn a dynamic curriculum for selecting the samples best suited for aligning to the target feature distribution. At the beginning of training, the Curriculum Manager is expected to prefer samples with higher transferability for aligning with the target, i.e., source samples whose feature distributions are similar to the target's. Once the feature distributions of these samples are aligned, the Curriculum Manager is expected to prioritize the next round of source samples for alignment. As training progresses, the Curriculum Manager can learn to focus on different aspects of the feature distribution as a proxy for better transferability. Since our approach learns a curriculum that prefers samples from different source domains, we refer to it as Curriculum Manager for Source Selection (CMSS).

Our approach builds on the domain-adversarial training framework (described in Section 3). Our hypothesis is that source samples that are hard for the domain discriminator to separate from target samples are likely the ones with similar feature distributions. CMSS leverages this and uses the discriminator loss to find source samples that should be aligned first. The preference for source samples is represented as per-sample weights predicted by CMSS. Since our approach is based on domain-adversarial training, weighting the discriminator loss with these weights leads the discriminator to encourage the feature network to bring the distributions of higher-weighted source samples closer to the target samples. This signal between the discriminator and the feature extractor is propagated through the gradient reversal layer (see [15] for details).

Therefore, the proposed CMSS is trained to predict, at each iteration, the sample weights that maximize the error of the domain discriminator. Due to this adversarial interplay with the discriminator, CMSS is forced to re-estimate its preference over source samples throughout training to keep up with the improving domain discriminator. The feature extractor F is optimized to learn features that are both good for classification and confusing for the discriminator. To avoid any influence from the classification task on the curriculum design, CMSS has an independent feature extractor module that learns to predict per-sample weights given the source images and the domain discriminator loss.

4.0.1 Training CMSS:

The CMSS weight for every sample x_i in the source domain is denoted w_i. We represent this weighted distribution as S_w. The CMSS network is represented by G_ρ with parameters ρ. Given a batch of b samples {x_i}_{i=1}^b, we first pass these samples to G_ρ to obtain an array of scores that are normalized using a softmax function to obtain the resulting weight vector (w_1, …, w_b). During training, the CMSS optimization objective can be written as

max_ρ  −E_{x∼S}[ w_ρ(x) log D(F(x)) ] − E_{x∼T}[ log(1 − D(F(x))) ]    (2)

With the source sample weights generated by CMSS, the loss function for the domain discriminator can be written as

L_D^w(θ_F, θ_D) = −E_{x∼S}[ w_ρ(x) log D(F(x)) ] − E_{x∼T}[ log(1 − D(F(x))) ]    (3)

The overall optimization objective can be written as

min_{θ_F, θ_C} max_{θ_D}  L_cls(θ_F, θ_C) − λ L_D^w(θ_F, θ_D)    (4)

where L_cls is the cross-entropy loss for source classification and L_D^w is the weighted domain discriminator loss from Eq. (3), with weights obtained by optimizing Eq. (2).

λ is the hyperparameter in the gradient reversal layer. We follow [15] and set λ based on the following annealing schedule: λ_p = 2 / (1 + exp(−γ · p)) − 1, where p is the current number of iterations divided by the total. γ is set to 10 in all experiments as in [15]. Details of training are provided in Algorithm 1.
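The annealing schedule can be written directly as a function of training progress p ∈ [0, 1]:

```python
import math

def grl_lambda(p, gamma=10.0):
    """DANN-style annealing of the gradient reversal coefficient:
    lambda_p = 2 / (1 + exp(-gamma * p)) - 1, where p is the fraction
    of training completed. Starts at 0 and saturates toward 1."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

At p = 0 the coefficient is exactly 0, suppressing the adversarial signal early in training, and it approaches 1 as training completes.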

1: T: total number of training iterations
2: γ: for computing λ_p for the gradient reversal layer
3: b_s and b_t: batch sizes for the source and target domains
4: Shuffle the source domain samples
5: for t in 1, …, T do
6:     Compute λ_p according to p = t / T
7:     Sample a training batch {x_i^s} from the source domains and {x_j^t} from the target domain
8:     Update ρ by maximizing Eq. (2)
9:     Update θ_D by minimizing Eq. (3)
10:     Update θ_F, θ_C by Eq. (4)
11: end for
Algorithm 1 Training CMSS (Curriculum Manager for Source Selection)
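The per-batch weight computation and the source half of the weighted discriminator loss of Eq. (3) can be sketched as follows, with plain-Python stand-ins for the network outputs (in practice both the raw scores and D(F(x)) come from neural networks):

```python
import math

def softmax(scores):
    """Normalize raw CMSS scores over a mini-batch; the resulting
    weights are non-negative and sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def weighted_source_disc_loss(weights, d_outputs):
    """Source term of the weighted discriminator loss of Eq. (3):
    -sum_i w_i * log D(F(x_i)), where d_outputs holds the
    discriminator's probability that each source sample is source."""
    return -sum(w * math.log(d) for w, d in zip(weights, d_outputs))
```

With uniform scores the weights are uniform and the loss reduces to the unweighted average, recovering the DANN source loss as a special case.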

4.1 CMSS: Theoretical Insights

We first state the classic generalization bound for domain adaptation [3, 6]. Let H be a hypothesis space of VC-dimension d. For a given hypothesis class H, define the symmetric difference hypothesis space H∆H = { h(x) ⊕ h′(x) : h, h′ ∈ H }, where ⊕ is the XOR operator. Let D_S, D_T denote the source and target distributions respectively, and D̂_S, D̂_T denote the empirical distributions induced by samples of size m drawn from D_S, D_T respectively. Let ε_S(h) (ε_T(h)) denote the true risk on the source (target) domain, and ε̂_S(h) (ε̂_T(h)) denote the empirical risk on the source (target) domain. Then, following Theorem 1 of [6], with probability at least 1 − δ, for all h ∈ H,

ε_T(h) ≤ ε̂_S(h) + (1/2) d_{H∆H}(D̂_S, D̂_T) + λ* + C₀    (5)

where C₀ is a constant depending on d, m, and δ. Here, λ* is the optimal combined risk (source + target risk) that can be achieved by a hypothesis in H. Let {x_i^s}_{i=1}^m, {x_j^t}_{j=1}^m be the samples in the empirical distributions D̂_S and D̂_T respectively. Then, D̂_S = (1/m) Σ_i δ(x_i^s) and D̂_T = (1/m) Σ_j δ(x_j^t). The empirical source risk can be written as ε̂_S(h) = (1/m) Σ_i ε̂(h(x_i^s)).

Now consider a CMSS re-weighted source distribution D̂_S^w = Σ_i w_i δ(x_i^s). For D̂_S^w to be a valid probability mass function, w_i ≥ 0 and Σ_i w_i = 1. Note that D̂_S and D̂_S^w share the same samples, and only differ in the weights. The generalization bound for this re-weighted distribution can be written as

ε_T(h) ≤ Σ_i w_i ε̂(h(x_i^s)) + (1/2) d_{H∆H}(D̂_S^w, D̂_T) + λ* + C₀

Since the bound holds for all weight vectors w in the simplex, we can minimize the objective over w to get a tighter bound:

ε_T(h) ≤ min_w [ Σ_i w_i ε̂(h(x_i^s)) + (1/2) d_{H∆H}(D̂_S^w, D̂_T) ] + λ* + C₀    (6)

The first term is the weighted risk, and the second term is the weighted symmetric divergence, which can be realized using our weighted adversarial loss. Note that when w_i = 1/m for all i, we recover the original bound (5). Hence, the original bound is in the feasible set of this optimization.
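A quick numerical sanity check of the feasibility claim, using toy per-sample risks rather than real classifier outputs: with uniform weights w_i = 1/m, the weighted risk term of the minimization equals the unweighted empirical risk of the original bound.

```python
# Toy per-sample source risks; any non-negative values would do.
per_sample_risk = [0.9, 0.1, 0.4, 0.2]
m = len(per_sample_risk)

# Uniform weights recover the unweighted empirical source risk.
uniform = [1.0 / m] * m
weighted_risk = sum(w * r for w, r in zip(uniform, per_sample_risk))
unweighted_risk = sum(per_sample_risk) / m
assert abs(weighted_risk - unweighted_risk) < 1e-12

# Any simplex weighting of the risk term is bounded below by the
# smallest per-sample risk, so minimizing over w can only tighten it.
assert min(per_sample_risk) <= weighted_risk <= max(per_sample_risk)
```

This only checks the risk term; in the full objective the divergence term couples the weights to the target distribution, which is what the adversarial training optimizes.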

4.1.1 Relaxations.

In practice, deep neural networks are used to optimize the bounds presented above. Since the bound (6) is minimized over the weight vector , one trivial solution is to assign non-zero weights to only a few source samples. In this case, a neural network can overfit to these source samples, which could result in low training risk and low domain divergence. To avoid this trivial case, we present two relaxations:

  • We use the unweighted loss for the source risk (first term in the bound (6)).

  • For the divergence term, instead of optimizing over all samples at once, we optimize only over mini-batches. Hence, for every mini-batch, at least one weight w_i is non-zero. Additionally, we make the weights a function of the input, i.e., w_i = w_ρ(x_i), realized using a neural network. This smooths the predictions of w_ρ, and makes the weight network produce a soft selection over source samples based on their correlation with the target.

Note that the CMSS network discussed in the previous section satisfies these criteria.

5 Experimental Results

In this section, we perform an extensive evaluation of the proposed method on the following tasks: digit classification (MNIST, MNIST-M, SVHN, Synthetic Digits, USPS), image recognition on the large-scale DomainNet dataset (clipart, infograph, painting, quickdraw, real, sketch), PACS [22] (art, cartoon, photo, and sketch), and Office-Caltech10 (Amazon, Caltech, DSLR, Webcam). We compare our method with the following contemporary approaches: Domain-Adversarial Neural Network (DANN) [15], Multi-Domain Adversarial Neural Network (MDAN) [46], and two state-of-the-art approaches: Maximum Classifier Discrepancy (MCD) [33] and Moment Matching for Multi-Source Domain Adaptation (M3SDA) [32]. We follow the protocol used in other multi-source domain adaptation works [27, 32], where each domain is selected in turn as the target domain while the remaining domains are used as source domains. For the Source Only and DANN experiments, all source domains are shuffled and treated as one domain. To guarantee fairness of comparison, we use the same model architectures, batch size, and data pre-processing routines for all compared approaches. All our experiments are implemented in PyTorch.

5.1 Experiments on Digit Recognition

Following DCTN [40] and M3SDA [32], we sample images from the training and testing subsets of MNIST, MNIST-M, SVHN, and Synthetic Digits. The entire USPS dataset is used since it contains comparatively few images in total.

In all the experiments, the feature extractor is composed of three convolutional layers and two fully connected layers, and the entire network is trained from scratch. For each experiment, we run the same setting five times and report the mean and standard deviation. (See the Appendix for more experimental details and analyses.) The results are shown in Table 1. The proposed method achieves 90.8% average accuracy, outperforming the other baselines by a large margin (a 3.2% improvement over the previous state-of-the-art approach).

Models mt mm sv sy up Avg
Source Only 92.3±0.91 63.7±0.83 71.5±0.75 83.4±0.79 90.7±0.54 80.3±0.76
DANN [15] 97.9±0.83 70.8±0.94 68.5±0.85 87.3±0.68 93.4±0.79 83.6±0.82
MDAN [46] 97.2±0.98 75.7±0.83 82.2±0.82 85.2±0.58 93.3±0.48 86.7±0.74
MCD [33] 96.2±0.81 72.5±0.67 78.8±0.78 87.4±0.65 95.3±0.74 86.1±0.64
M3SDA [32] 98.4±0.68 72.8±1.13 81.3±0.86 89.5±0.56 96.1±0.81 87.6±0.75
CMSS 99.0±0.08 75.3±0.57 88.4±0.54 93.7±0.21 97.7±0.13 90.8±0.31
Table 1: Results on Digits classification. The proposed CMSS achieves 90.8% average accuracy. Comparisons with MCD and M3SDA are reprinted from [32]. All experiments are based on a 3 conv-layer backbone trained from scratch. (mt: MNIST, mm: MNIST-M, sv: SVHN, sy: Synthetic Digits, up: USPS)

5.2 Experiments on DomainNet

Next, we evaluate our method on DomainNet [32], a large-scale benchmark dataset used for multi-domain adaptation. The DomainNet dataset contains samples from six domains: Clipart, Infograph, Painting, Quickdraw, Real, and Sketch. Each domain has 345 categories, and the dataset has ∼0.6 million images in total, making it the largest existing domain adaptation dataset. We use a ResNet-101 pretrained on ImageNet as the feature extractor in all our experiments; for CMSS, we use a ResNet-18 pretrained on ImageNet. The same batch size is used for all compared methods. We conduct experiments over 5 random runs, and report the mean and standard deviation over the 5 runs.

The results are shown in Table 2. CMSS achieves 46.5% average accuracy, outperforming the other baselines by a large margin. We also note that our approach achieves the best performance in every experimental setting. It is also worth mentioning that when the target domain is Quickdraw (q), our approach is the only one that outperforms the Source Only baseline, while all other compared approaches result in negative transfer (lower performance than the source-only model). This is because Quickdraw has a significant domain shift compared to all other domains. This shows that our approach can effectively alleviate negative transfer even in such a challenging setup.

Models c i p q r s Avg
Source Only* 47.6±0.52 13.0±0.41 38.1±0.45 13.3±0.39 51.9±0.85 33.7±0.54 32.9±0.54
Source Only 52.1±0.51 23.4±0.28 47.7±0.96 13.0±0.72 60.7±0.32 46.5±0.56 40.6±0.56
DANN [15] 60.6±0.42 25.8±0.34 50.4±0.51 7.7±0.68 62.0±0.66 51.7±0.19 43.0±0.46
MDAN [46] 60.3±0.41 25.0±0.43 50.3±0.36 8.2±1.92 61.5±0.46 51.3±0.58 42.8±0.69
MCD [33] 54.3±0.64 22.1±0.70 45.7±0.63 7.6±0.49 58.4±0.65 43.5±0.57 38.5±0.61
M3SDA [32] 58.6±0.53 26.0±0.89 52.3±0.55 6.3±0.58 62.7±0.51 49.5±0.76 42.6±0.64
CMSS 64.2±0.18 28.0±0.20 53.6±0.39 16.0±0.12 63.4±0.21 53.8±0.35 46.5±0.24
Table 2: Results on the DomainNet dataset. CMSS achieves 46.5% average accuracy. When the target domain is Quickdraw (q), CMSS is the only method that outperforms Source Only, which indicates that negative transfer has been alleviated. Source Only* is reprinted from [32]; Source Only is from our implementation. All experiments are based on ResNet-101 pre-trained on ImageNet. (c: clipart, i: infograph, p: painting, q: quickdraw, r: real, s: sketch)

5.3 Experiments on PACS

PACS [22] is another popular benchmark for multi-source domain adaptation. It contains 4 domains: art, cartoon, photo, and sketch, with images of 7 categories collected for each domain. For all experiments, we used a ResNet-18 pretrained on ImageNet as the feature extractor, following [27]. For the Curriculum Manager, we use the same architecture as the feature extractor. We conduct multiple random runs and report the mean and standard deviation over the runs. The results are shown in Table 3 (A: art, C: cartoon, P: photo, S: sketch). CMSS achieves a state-of-the-art average accuracy of 89.5%. On the most challenging sketch (S) domain, we obtain 82.0%, outperforming the other baselines by a large margin.

Models A C P S Avg
Source Only 74.9±0.88 72.1±0.75 94.5±0.58 64.7±1.53 76.6±0.93
DANN [15] 81.9±1.13 77.5±1.26 91.8±1.21 74.6±1.03 81.5±1.16
MDAN [46] 79.1±0.36 76.0±0.73 91.4±0.85 72.0±0.80 79.6±0.69
WBN [27] 89.9±0.28 89.7±0.56 97.4±0.84 58.0±1.51 83.8±0.80
MCD [33] 88.7±1.01 88.9±1.53 96.4±0.42 73.9±3.94 87.0±1.73
M3SDA [32] 89.3±0.42 89.9±1.00 97.3±0.31 76.7±2.86 88.3±1.15
CMSS 88.6±0.36 90.4±0.80 96.9±0.27 82.0±0.59 89.5±0.50
Table 3: Results on PACS. (A: art, C: cartoon, P: photo, S: sketch)
Models W D C A Avg
Source Only 99.0 98.3 87.8 86.1 92.8
DANN [15] 99.3 98.2 89.7 94.8 95.5
MDAN [46] 98.9 98.6 91.8 95.4 96.1
MCD [33] 99.5 99.1 91.5 92.1 95.6
M3SDA [32] 99.5 99.2 92.2 94.5 96.4
CMSS 99.6 99.3 93.7 96.0 97.2
Table 4: Results on Office-Caltech10. (A: Amazon, C: Caltech, D: DSLR, W: Webcam)

5.4 Experiments on Office-Caltech10

The Office-Caltech10 [16] dataset has 10 object categories from 4 different domains: Amazon, Caltech, DSLR, and Webcam. For all the experiments, we use the same architecture (ResNet-101 pretrained on ImageNet) as used in [32]. The experimental results are shown in Table 4 (A: Amazon, C: Caltech, D: DSLR, W: Webcam). CMSS achieves a state-of-the-art average accuracy of 97.2%.

5.5 Comparison with other re-weighting methods

In this experiment, we compare CMSS with other re-weighting schemes proposed in the literature. We use IWAN [43] for this purpose. IWAN, originally proposed for partial domain adaptation, re-weights samples in adversarial training using the output of the discriminator as the sample weight (see Figure 2). CMSS, in contrast, computes sample weights using a separate network updated in an adversarial game. We adapt IWAN to the multi-source setup and compare it against our approach. The results are shown in Table 5 (domain abbreviations as in Table 2). IWAN obtains 43.1% average accuracy, which is close to the 43.0% obtained using DANN with combined source domains. For further analysis, we plot in Figure 3 how the sample weights estimated by both approaches (mean and variance) change as training progresses. We observe that CMSS selects weights with large variance, which demonstrates its sample selection ability, whereas IWAN's weights all stay close to a constant (in which case it becomes similar to DANN). This illustrates the superiority of our sample selection method. More discussion of sample selection can be found in Section 6.2. CMSS also achieves faster and more stable convergence in test accuracy compared to DANN [15] with a single combined source domain (Figure 6), which further supports the effectiveness of the learnt curriculum.

Models c i p q r s Avg
DANN [15] 60.6 25.8 50.4 7.7 62.0 51.7 43.0
IWAN [43] 59.1 25.2 49.7 12.9 60.4 51.4 43.1
CMSS 64.2 28.0 53.6 16.0 63.4 53.8 46.5
Figure 3: Mean/var of weights over time.
Table 5: Comparing re-weighting methods
Figure 4: Interpretation of the sample selection on the DomainNet dataset using the proposed method. In each plot, one domain is selected as the target. In each setting, predictions of CMSS are computed for every sample of the source domains. The bars indicate how many of these samples have a weight prediction larger than a manually chosen threshold, with each bar denoting a single source domain. The maximum count is highlighted in red. Best viewed in color.
Figure 5: Ranked source samples according to the learnt weights (class "Clock" of the DomainNet dataset). LHS: examples of the unlabeled target domain Clipart and the top/bottom ranked 50 samples of the source domain composed of Infograph, Painting, Quickdraw, Real, and Sketch. RHS: examples of the unlabeled target domain Quickdraw and the ranked samples of the source domain composed of Clipart, Infograph, Painting, Real, and Sketch. Weights are obtained at inference time using the trained CMSS.

6 Interpretations

In this section, we are interested in understanding and visualizing the source selection ability of our approach. We conduct two sets of experiments: (i) visualizations of the source selection curriculum over time, and (ii) comparison of our selection mechanism with other sample re-weighting methods.

6.1 Visualizations of source selection

6.1.1 Domain Preference

We first investigate whether CMSS indeed exhibits domain preference over the course of training, as claimed. For this experiment, we randomly select training samples from each source domain in DomainNet and obtain the raw weights (before softmax) generated by CMSS. We then calculate the number of samples in each domain whose weight passes a manually selected threshold, and use this count to indicate the domain preference level: the larger the fraction of samples passing the threshold, the more weight is given to samples from that domain, and hence the higher the domain preference. Figure 4 visualizes the domain preference for each target domain. We picked a different threshold in each experiment for more precise observation. We observe that CMSS does display domain preference (Clipart - Painting, Infograph - Sketch, Real - Clipart) that is in fact correlated with the visual similarity of the domains. An exception is Quickdraw, where no domain preference is observed. We argue that this is because Quickdraw has a significant domain shift compared to all other domains, so no specific domain is preferred. However, CMSS still produces better performance on Quickdraw: while there is no domain preference, there is within-domain sample preference, as illustrated in Figure 5. That is, our approach chooses samples within a domain that are structurally more similar to the target domain of interest. Hence, visualizing aggregate domain preference alone does not depict the complete picture; we present sample-wise visualization in the next section.
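The domain-preference statistic plotted in Figure 4 can be sketched as a simple thresholded count (the domain grouping is used only for visualization; CMSS itself never sees domain labels, and the function name here is illustrative):

```python
def domain_preference(raw_weights_by_domain, threshold):
    """Count, for each source domain, how many samples receive a raw
    (pre-softmax) CMSS weight above a manually chosen threshold.
    Input maps domain name -> list of raw scores for its samples."""
    return {domain: sum(1 for w in ws if w > threshold)
            for domain, ws in raw_weights_by_domain.items()}
```

Domains whose samples frequently clear the threshold are the ones CMSS currently prefers for alignment with the target.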

Figure 6: Test accuracy as training progresses. Comparison between CMSS and DANN with all source domains combined as one.
Figure 7: t-SNE visualization of features at six different epochs during training. The shaded region marks the range over which the target features migrate. The dataset used is PACS, with sketch as the target domain.

6.1.2 Beyond Domain Preference

In addition to domain preference, we are interested in taking a closer look at sample-wise source selection. To do this, we first obtain the weights generated by CMSS for all source samples and rank the source images according to their weights. An example is shown in Figure 5. For better understanding, we visualize samples belonging to a fixed category (“Clock” in Figure 5). See Appendix for more visualizations.

In Figure 5, we find that the notion of similarity discovered by CMSS differs across domains. When the target domain is Clipart (left panel of Figure 5), source samples with color and cartoonish shapes are ranked at the top, while samples with white backgrounds and simplistic shapes are ranked at the bottom. When the target is Quickdraw (right panel of Figure 5), one might expect CMSS to simply select images with similar white backgrounds. Instead, it prefers samples that are structurally similar to the regular round clock shape (as most samples in Quickdraw are). It thus appears that structural similarity is favored for Quickdraw, whereas color information is preferred for Clipart. This supports the claim that CMSS selects samples according to ease of alignment with the target distribution, discovered automatically per domain. We argue that this property gives CMSS an advantage over approaches such as MDAN [46], which simply weight manually partitioned domains.

6.2 Selection Over Time

In this section, we discuss how source selection varies as training progresses. In Figure 3, we plot the mean and variance of the weights (output of the Curriculum Manager) over training iterations. We observe that the variance is high initially, which indicates that many samples have weights far from the mean; samples with higher weights are preferred, while those with low weights contribute less to the alignment. In the later stages, the variance is very low, which indicates that most of the weights are close to the mean. Hence, our approach gradually adapts to increasingly many source samples over time, naturally learning a curriculum for adaptation. In Figure 7, we plot a t-SNE visualization of features at different epochs. We observe that the target domain Sketch (red) first adapts to Art (yellow), and then gradually aligns with Cartoon (green) and Photo (blue).

7 Conclusion

In this paper, we proposed the Curriculum Manager for Source Selection (CMSS), which learns a curriculum for Multi-Source Unsupervised Domain Adaptation. Over the course of training, the curriculum iteratively favors source samples that align better with the target distribution. The curriculum is learned through an adversarial interplay with the domain discriminator, and CMSS achieves state-of-the-art results on four benchmark datasets. We also shed light on the inner workings of CMSS, and we hope this will pave the way for further advances in this research area.

Acknowledgement

This work was supported by Facebook AI Research and DARPA via ARO contract number W911NF2020009.

References

  • [1] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan (2010) A theory of learning from different domains. Machine learning 79 (1-2), pp. 151–175. Cited by: §2.
  • [2] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira (2007) Analysis of representations for domain adaptation. In Advances in neural information processing systems, pp. 137–144. Cited by: §2.
  • [3] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira (2007) Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 19, B. Schölkopf, J. C. Platt, and T. Hoffman (Eds.), pp. 137–144. Cited by: §4.1.
  • [4] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, New York, NY, USA, pp. 41–48. External Links: ISBN 9781605585161 Cited by: §1.
  • [5] H. S. Bhatt, A. Rajkumar, and S. Roy (2016) Multi-source iterative adaptation for cross-domain classification.. In IJCAI, pp. 3691–3697. Cited by: §2.
  • [6] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman (2008) Learning bounds for domain adaptation. In Advances in Neural Information Processing Systems 20, J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis (Eds.), pp. 129–136. Cited by: §4.1.
  • [7] Z. Cao, M. Long, J. Wang, and M. I. Jordan (2018) Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2724–2732. Cited by: §2.
  • [8] C. Chen, W. Xie, W. Huang, Y. Rong, X. Ding, Y. Huang, T. Xu, and J. Huang (2019) Progressive feature alignment for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 627–636. Cited by: §1, §2.
  • [9] Q. Chen, Y. Liu, Z. Wang, I. Wassell, and K. Chetty (2018) Re-weighted adversarial adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7976–7985. Cited by: §2, §2.
  • [10] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool (2018) Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3339–3348. Cited by: §2.
  • [11] Z. Ding, N. M. Nasrabadi, and Y. Fu (2018) Semi-supervised deep domain adaptation via coupled neural networks. IEEE Transactions on Image Processing 27 (11), pp. 5214–5224. Cited by: §2.
  • [12] L. Duan, I. W. Tsang, D. Xu, and T. Chua (2009) Domain adaptation from multiple sources via auxiliary classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 289–296. Cited by: §2.
  • [13] L. Duan, D. Xu, and S. Chang (2012) Exploiting web images for event recognition in consumer videos: a multiple source domain adaptation approach. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1338–1345. Cited by: §2.
  • [14] L. Duan, D. Xu, and I. W. Tsang (2012) Domain adaptation from multiple sources: a domain-dependent regularization approach. IEEE Transactions on Neural Networks and Learning Systems 23 (3), pp. 504–518. Cited by: §2.
  • [15] Y. Ganin and V. Lempitsky (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Cited by: §1, §2, §3.0.2, §4, §4.0.1, §5, §5.5, Figure 2, Table 1, Table 2, Table 4, Table 5.
  • [16] B. Gong, Y. Shi, F. Sha, and K. Grauman (2012) Geodesic flow kernel for unsupervised domain adaptation. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2066–2073. Cited by: §5.4.
  • [17] R. Jeong, Y. Aytar, D. Khosid, Y. Zhou, J. Kay, T. Lampe, K. Bousmalis, and F. Nori (2019) Self-supervised sim-to-real adaptation for visual robotic manipulation. arXiv preprint arXiv:1910.09470. Cited by: §1.
  • [18] G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann (2019) Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4893–4902. Cited by: §1.
  • [19] A. Kumar, P. Sattigeri, K. Wadhawan, L. Karlinsky, R. Feris, B. Freeman, and G. Wornell (2018) Co-regularized alignment for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, pp. 9345–9356. Cited by: §2.
  • [20] V. K. Kurmi, V. Bajaj, V. K. Subramanian, and V. P. Namboodiri (2019) Curriculum based dropout discriminator for domain adaptation. arXiv preprint arXiv:1907.10628. Cited by: §2.
  • [21] S. Lee, D. Kim, N. Kim, and S. Jeong (2019) Drop to adapt: learning discriminative features for unsupervised domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 91–100. Cited by: §1.
  • [22] D. Li, Y. Yang, Y. Song, and T. M. Hospedales (2017) Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5542–5550. Cited by: §2, §5.3, §5.
  • [23] Y. Li, X. Tian, M. Gong, Y. Liu, T. Liu, K. Zhang, and D. Tao (2018) Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision, pp. 624–639. Cited by: §2.
  • [24] H. Liu, M. Long, J. Wang, and M. Jordan (2019) Transferable adversarial training: a general approach to adapting deep classifiers. In International Conference on Machine Learning, pp. 4013–4022. Cited by: §2.
  • [25] Y. Luo, L. Zheng, T. Guan, J. Yu, and Y. Yang (2019) Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2507–2516. Cited by: §2.
  • [26] M. Mancini, S. R. Bulò, B. Caputo, and E. Ricci (2019) Adagraph: unifying predictive and continuous domain adaptation through graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6568–6577. Cited by: §2.
  • [27] M. Mancini, L. Porzi, S. Rota Bulò, B. Caputo, and E. Ricci (2018) Boosting domain adaptation by discovering latent domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3771–3780. Cited by: §2, §2, §5.3, Table 4, §5.
  • [28] S. Motiian, Q. Jones, S. Iranmanesh, and G. Doretto (2017) Few-shot adversarial domain adaptation. In Advances in Neural Information Processing Systems, pp. 6670–6680. Cited by: §1.
  • [29] C. Ouyang, K. Kamnitsas, C. Biffi, J. Duan, and D. Rueckert (2019) Data efficient unsupervised domain adaptation for cross-modality image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 669–677. Cited by: §1.
  • [30] Y. Pan, T. Yao, Y. Li, Y. Wang, C. Ngo, and T. Mei (2019) Transferrable prototypical networks for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2239–2247. Cited by: §1.
  • [31] Z. Pei, Z. Cao, M. Long, and J. Wang (2018) Multi-adversarial domain adaptation. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §2.
  • [32] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang (2018) Moment matching for multi-source domain adaptation. arXiv preprint arXiv:1812.01754. Cited by: §2, §2, §5.1, §5.2, §5.4, Table 1, Table 2, Table 4, §5.
  • [33] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada (2018) Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3723–3732. Cited by: §1, §2, Table 1, Table 2, Table 4, §5.
  • [34] Y. Shu, Z. Cao, M. Long, and J. Wang (2019) Transferable curriculum for weakly-supervised domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4951–4958. Cited by: §2.
  • [35] Y. Sun, E. Tzeng, T. Darrell, and A. A. Efros (2019) Unsupervised domain adaptation through self-supervision. arXiv preprint arXiv:1909.11825. Cited by: §1.
  • [36] A. Valada, R. Mohan, and W. Burgard (2019) Self-supervised model adaptation for multimodal semantic segmentation. International Journal of Computer Vision, pp. 1–47. Cited by: §1.
  • [37] T. Wang, X. Zhang, L. Yuan, and J. Feng (2019) Few-shot adaptive faster r-cnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7173–7182. Cited by: §1.
  • [38] Z. Wu, X. Wang, J. E. Gonzalez, T. Goldstein, and L. S. Davis (2019) ACE: adapting to changing environments for semantic segmentation. arXiv preprint arXiv:1904.06268. Cited by: §2.
  • [39] C. Xiong, S. McCloskey, S. Hsieh, and J. J. Corso (2014) Latent domains modeling for visual domain adaptation. In Twenty-Eighth AAAI Conference on Artificial Intelligence, Cited by: §2.
  • [40] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin (2018) Deep cocktail network: multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3964–3973. Cited by: §2, §5.1.
  • [41] J. S. Yoon, T. Shiratori, S. Yu, and H. S. Park (2019) Self-supervised adaptation of high-fidelity face models for monocular performance tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4601–4609. Cited by: §1.
  • [42] K. You, X. Wang, M. Long, and M. Jordan (2019) Towards accurate model selection in deep unsupervised domain adaptation. In International Conference on Machine Learning, pp. 7124–7133. Cited by: §1.
  • [43] J. Zhang, Z. Ding, W. Li, and P. Ogunbona (2018) Importance weighted adversarial nets for partial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8156–8164. Cited by: Figure 2, §2, §5.5, Table 5.
  • [44] J. Zhang, Z. Chen, J. Huang, L. Lin, and D. Zhang (2019) Few-shot structured domain adaptation for virtual-to-real scene parsing. In Proceedings of the IEEE International Conference on Computer Vision Workshops. Cited by: §1.
  • [45] H. Zhao, R. T. d. Combes, K. Zhang, and G. J. Gordon (2019) On learning invariant representation for domain adaptation. arXiv preprint arXiv:1901.09453. Cited by: §2.
  • [46] H. Zhao, S. Zhang, G. Wu, J. M. Moura, J. P. Costeira, and G. J. Gordon (2018) Adversarial multiple source domain adaptation. In Advances in Neural Information Processing Systems, pp. 8559–8570. Cited by: §2, §2, §3.0.2, Table 1, Table 2, Table 4, §5, §6.1.2.