The availability of large-scale data is known to be one of the critical success factors of deep learning. As shown by Sun et al., increasing the amount of training data almost always improves the performance of a deep model. Nevertheless, collecting large datasets for many specific real-world applications remains difficult and expensive due to the labor-intensive workload. In particular, labeled data, such as ground-truth categories for image classification, is more difficult to collect than raw data.
Domain adaptation tries to take advantage of available labeled data from a source domain to learn a model on a target domain where few (or no) labels are available. Domain adaptation is needed because the assumption of identically and independently distributed (i.i.d.) data is usually not satisfied in real-world applications, i.e., data in the target/deployment phase is drawn from a distribution different from that of the source training data. In this case the dataset bias is usually caused by the data collection procedure. Sometimes we might even want to intentionally train the model on a different domain to improve generalization on the target domain, e.g., training a model with synthetic data to improve performance on real-world data.
In this work we focus on the case where no label information is available for the target domain, which is often referred to as Unsupervised Domain Adaptation (UDA). There had already been much progress [16, 23, 14, 21, 9, 28] before the use of deep models. Recent trends involve combining traditional algorithms with deep features, as well as designing novel architectures for deep domain adaptation [5, 33, 20, 22, 11, 32, 2, 13, 3]. Siamese networks are the most commonly used basic architecture for these methods, where two encoders are used for the source and target domain data respectively, as shown in Figure 1.
1.1 Problems and our solution
Most existing methods focus on minimizing the domain discrepancy, but overlook the importance of preserving the discriminative ability of the target domain features. Note the difference between the constraints on the source domain encoder and the target domain encoder in Figure 1: there are two constraints on the source domain encoder, a label classification loss and a domain loss, while there is only one constraint on the target domain encoder, the domain loss. The missing classification loss constraint may cause the learned target domain features to lack discriminative ability, since these features are only optimized to match the source domain distribution. This insufficiency stems from the lack of labels on the target domain, which prevents a direct connection between the target domain features and the label classifier.
A natural solution would be to add additional constraints to the target domain encoder so as to encourage it to preserve information important for discriminating between different classes. Due to the unsupervised nature of the target domain, a reconstruction loss first comes to mind. However, unlike common unsupervised learning or the recent style transfer tasks [12, 36], preserving pixel-to-pixel information contradicts the objective of learning domain invariant features. On the other hand, directly aligning the features from both domains is also not applicable, since the inputs of the two domains are not paired, i.e., no explicit matching between the features of the two domains exists.
To learn a more meaningful representation for the target domain data, we propose a novel Parameter Reference Loss (PRL) to build a flexible connection between the source domain encoder and the target domain encoder. Furthermore, we show that PRL can improve the training stability, which solves another important problem: the contradiction of using target domain labels for model selection in UDA. A detailed discussion of why this problem matters and how PRL helps can be found in Section 3.
Another motivation for PRL is that we think the current use of learned parameters wastes resources, because these parameters are often only used for initializing another model for a new domain/task. Hence we try to make the model benefit from previously learned parameters even during the adaptation training phase. In fact, using such resources more efficiently plays an especially important role for the UDA task, as a result of the absence of target domain labels.
In summary, the contributions of this work include:
We point out the problem of poor discriminative ability caused by the lack of constraint for the target domain encoder in existing work.
We clarify the contradiction of evaluation procedures for UDA methods, and propose a direction to solve this problem: stabilize the training.
We propose a solution to solve the above problems simultaneously using PRL, which can be easily combined with most existing UDA methods that are based on Siamese networks.
We show that previously learned parameters can be more useful during the training phase than simply using them for model initialization.
2 Related work
We review recent deep learning based domain adaptation methods since they are most related to our proposed method.
As the main objective of domain adaptation methods is to learn a representation that is invariant to domain change, we can categorize existing methods into two groups according to the loss function used for minimizing the domain discrepancy.
2.1 Discrepancy loss based domain adaptation
The first group of methods uses a discrepancy loss, such as Maximum Mean Discrepancy (MMD), to learn domain invariant features. Tzeng et al. proposed Deep Domain Confusion, one of the first domain adaptation methods based on deep neural networks. They apply the AlexNet model to both source and target domain inputs, explicitly minimizing the MMD between the extracted features. Deep Adaptation Networks (DAN) extends this work using multi-kernel MMD on three different layers, arguing that minimizing the discrepancy only on the last layer is not sufficient to remove the domain difference introduced by the earlier layers.
Zellinger et al. propose Central Moment Discrepancy (CMD), an explicit order-wise matching of higher-order moments, to avoid computationally expensive distance and kernel matrix computations. Csurka et al. have conducted a comparative study of discrepancy-based UDA models using various deep features.
2.2 Adversarial loss based domain adaptation
The adversarial loss has been recently popularized by Generative Adversarial Networks (GANs) . Bousmalis et al.  propose to use GANs to generate target domain data conditioned on the source domain inputs. Russo et al.  adopt CycleGAN  for domain adaptation to tackle the problem caused by the unpaired inputs of both domains.
The adversarial loss can also be combined with discriminative models. ReverseGrad applies the adversarial loss to the features extracted by a discriminative model. The implementation of ReverseGrad uses a gradient reversal layer to compute the gradients for the encoder, which is essentially a different way of computing the adversarial loss for the generator (encoder) part.
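As a concrete illustration, the gradient reversal trick can be sketched in a few lines of PyTorch (a minimal hypothetical implementation, not the original ReverseGrad code): the layer acts as the identity in the forward pass and flips (and scales) the gradient in the backward pass.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lamb in backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the encoder.
        # The second return value is the (non-existent) gradient for lamb.
        return -ctx.lamb * grad_output, None


def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)
```

Placed between the encoder and the domain discriminator, this lets a single backward pass train the discriminator normally while pushing the encoder in the opposite (domain-confusing) direction.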
2.3 General UDA model
Tzeng et al.  summarize the methods using Siamese networks and adversarial loss. They categorize these methods according to three design options: 1) whether the parameters are shared or not in the Siamese architecture, 2) whether the models are discriminative or generative, and 3) the choice of the adversarial loss. Adding discrepancy loss to the third option leads the above summarization to a general architecture for UDA tasks.
For methods using a single encoder (generator) for both domains, the parameter sharing mechanism helps preserve the discriminative ability learned from the source domain data, however, it also limits the flexibility of the target domain model. For methods using independent encoders, preserving the discriminative ability of target domain features is often overlooked. In the specific model designed in ADDA , the target domain encoder is initialized using the parameters from the source domain encoder. This alleviates the problem caused by the single domain related constraint on the target encoder, but it is not enough to maintain the discriminative ability for the target domain features as the training continues.
3 Stabilizing UDA
We first give a formal definition of unsupervised domain adaptation for the image classification task to facilitate the explanation of our proposed method. Most of the definitions are borrowed from Pan et al.
A domain consists of two components: a feature space $\mathcal{X}$ with dimensionality $d$ and a marginal probability distribution $P(X)$, where $X = \{x_1, \dots, x_n\} \in \mathcal{X}$. Specifically, the feature spaces are the same in our task (image pixels or extracted features); thus the differences between domains are caused by different marginal probability distributions.
Given a specific domain $\mathcal{D} = \{\mathcal{X}, P(X)\}$, a task consists of two components: a label space $\mathcal{Y}$ with cardinality $C$ and an objective predictive function $f(\cdot)$. From a probabilistic viewpoint, $f(x)$ can also be written as $P(y \mid x)$.
We consider two different domains, the source domain $\mathcal{D}_s$ and the target domain $\mathcal{D}_t$. In domain adaptation, the label space is generally assumed to be the same for both domains, i.e., $\mathcal{Y}_s = \mathcal{Y}_t$. In the image classification task, this means the possible classes for each domain are the same, and hence we use $\mathcal{Y}$ to denote the label space of both domains. The source domain dataset is denoted as $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, where $x_i^s$ is the image instance and $y_i^s \in \mathcal{Y}$ is the corresponding class label for that image. The target domain dataset is denoted in a similar way, with the key difference that the labels are not available: $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$.
The objective of unsupervised domain adaptation is to learn a model that can predict the labels for the target domain data by utilizing the source domain data and labels, together with only the unlabeled target domain data. In particular, for the image classification task, the objective is to correctly predict the category of a given target domain image, i.e., to estimate $\hat{y}^t = f_t(x^t)$ for each $x^t \in \mathcal{D}_t$.
3.1 Contradiction of evaluating UDA methods
Figure 2 illustrates the typical workflow for unsupervised domain adaptation on an image classification task. The first step usually involves training the model on the source domain dataset only. The second step adapts the trained model to the target domain. Some methods combine these two steps to learn classification and adaptation simultaneously. Note that the difficulties of unsupervised domain adaptation include not only the inability to train the model directly with supervised information on the target domain, but also a contradiction in the hyper-parameter tuning and model selection procedure.
By the definition of unsupervised domain adaptation, it is impossible to use the target domain labels for validation or hyper-parameter selection. The simplest way to avoid this contradiction is simply not to do hyper-parameter tuning and model selection. However, UDA methods are generally more sensitive to hyper-parameter changes than supervised learning approaches. As a result, besides using the complicated reverse cross-validation, the only feasible option for obtaining reliable performance is, as far as we know, to use labeled supervision on the target domain for hyper-parameter optimization. Nevertheless, using target domain labels for hyper-parameter tuning biases the reported accuracy and does not accurately reflect the performance in real-world tasks, where the stability of models may be more important than the possible performance gain.
As completely avoiding hyper-parameter tuning and model selection is difficult, we instead approach the problem from another direction and make the hyper-parameter tuning and model selection procedure easier. We will show that we can avoid the above contradiction if we can tune the hyper-parameters without looking at the target domain labels, and stabilize the adaptation training procedure so that no large performance drop is expected. Hence we modify the existing evaluation procedure to avoid the contradiction of using target domain labels:
During the adaptation training phase, select hyper-parameters without access to the target domain labels.
Given a fixed number of training epochs, always select the latest epoch/snapshot of the trained model for the final evaluation or deployment.
To fulfill the above requirements, as well as to overcome the aforementioned lack of discriminative power, we propose the Parameter Reference Loss, which we explain in detail in the next section.
4 Parameter reference loss
We first describe a baseline model to realize a typical unsupervised domain adaptation method using deep neural networks. Then we explain the proposed Parameter Reference Loss and its variants in detail.
4.1 Baseline model
The baseline method we use has a similar architecture to that of ADDA, where a neural network model is first trained on the source domain for classification using the cross-entropy loss, and the parameters of this model are then used to initialize a target domain encoder with the same architecture. During the adaptation process, the source domain encoder and the classifier remain fixed while the target domain encoder is trained, with a discrepancy loss, to produce features that match the source domain features (rather than with the adversarial loss used in the original ADDA).
The reason we use a discrepancy loss instead of exactly the same adversarial model as ADDA is related to the "more reasonable" evaluation setting. Compared with the adversarial loss, the discrepancy loss is better suited as a metric for unsupervised hyper-parameter tuning when the other hyper-parameters are fixed. The adversarial loss is not sufficient for measuring the current domain discrepancy because it consists of two loss terms, for the generator and the discriminator respectively, which influence each other. Moreover, training with the adversarial loss suffers from instability due to the complex mini-max optimization, and currently still requires much effort to tune well.
There are usually two loss terms for the source domain model, a label classification loss and a discrepancy loss. However, since the source domain encoder of the baseline model is fixed during the adaptation phase, it is effectively only optimized with the classification loss during pre-training on the source domain data. Some other models described in Section 5 do keep both loss terms for the source domain encoder during adaptation. The classification loss is defined as the cross-entropy loss:

$$\mathcal{L}_{cls} = -\,\mathbb{E}_{(x^s, y^s) \sim \mathcal{D}_s} \sum_{c=1}^{C} \mathbb{1}\!\left[y^s = c\right] \log g_c\!\left(E_s(x^s)\right),$$

where $E_s$ is the source domain encoder and $g_c(\cdot)$ is the classifier's predicted probability for class $c$.
And the domain discrepancy loss is defined as the Maximum Mean Discrepancy (MMD):

$$\mathcal{L}_{MMD} = \left\| \frac{1}{n_s} \sum_{i=1}^{n_s} \phi\!\left(E_s(x_i^s)\right) - \frac{1}{n_t} \sum_{j=1}^{n_t} \phi\!\left(E_t(x_j^t)\right) \right\|_{\mathcal{H}}^2,$$

where $\phi(\cdot)$ is the feature map induced by the RKHS kernel $k(\cdot, \cdot)$ and $E_t$ is the target domain encoder.
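For illustration, a biased empirical estimate of the squared MMD with a single Gaussian RBF kernel can be computed as follows (a NumPy sketch with a hypothetical `mmd2` helper; deep UDA methods typically use multi-kernel variants and operate on encoder outputs):

```python
import numpy as np


def gaussian_kernel(a, b, sigma):
    # Pairwise squared Euclidean distances between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def mmd2(xs, xt, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between
    source features xs and target features xt with an RBF kernel."""
    k_ss = gaussian_kernel(xs, xs, sigma).mean()
    k_tt = gaussian_kernel(xt, xt, sigma).mean()
    k_st = gaussian_kernel(xs, xt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

The estimate is zero when the two feature sets are identical and grows as the two empirical distributions move apart, which is the property used later for unsupervised hyper-parameter selection.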
4.2 Naive PRL
Figure 3 shows a diagram of the proposed method. We add an extra loss term, the parameter reference loss, to the target domain model as a regularizer. PRL is defined as a loss between the parameters of the source domain encoder, denoted as $\theta_s$, and those of the target domain encoder, denoted as $\theta_t$. We call it a "reference loss" because we treat the parameters of the source domain as a reference that is utilized throughout training, instead of only for initialization.
The intuition behind this loss term is threefold: 1) we want to build a connection between the label classifier and the target domain encoder, for which no direct connection is otherwise available; 2) we want to selectively transfer the knowledge learned in the source domain through the parameters, instead of reusing all of the learned parameters as with weight sharing; 3) as a result of the constraint from the reference loss, training is expected to be more stable than with fully independent source and target domain encoders.
The formal definition of the PRL is as follows:

$$\mathcal{L}_{PRL} = \frac{1}{n} \sum_{i=1}^{n} \left| \theta_t^{(i)} - \theta_s^{(i)} \right|,$$

where $n$ denotes the number of parameters in $\theta_s$ and $\theta_t$, while $\theta_t^{(i)}$ and $\theta_s^{(i)}$ are corresponding parameters of the target encoder and source encoder.
The reason for choosing the $\ell_1$ loss instead of the $\ell_2$ loss is that the sparsity-inducing property of the $\ell_1$ loss makes the connection between the two domain models sparse, which can be seen as selectively keeping parameters. These connections allow the discriminative ability learned from the label classifier to transfer to the target domain features. On the other hand, the $\ell_1$ loss also allows relatively large variations in the remaining parameters. In this sense, the model still has enough flexibility to learn domain invariant features.
Combined with the previously defined MMD loss, the objective function to optimize for the target domain encoder is:

$$\mathcal{L}_{E_t} = \mathcal{L}_{MMD} + \lambda \, \mathcal{L}_{PRL},$$

where $\lambda$ is the reference weight.
4.3 Variants of PRL
In the naive PRL setting, we add a new loss term to the target domain encoder. Since the reference parameters (the parameters of the source domain encoder) are fixed during adaptation, the MMD loss decreases as training continues while the relative weight of the parameter reference loss increases. As a result, during the later phase of adaptation training, the PRL plays the leading role and the influence of the MMD loss becomes smaller. On the one hand, this property makes the training quite stable; on the other hand, it also tends to prevent the MMD loss (representing the domain discrepancy) from decreasing further in the later training phase. To solve this problem, we propose several variants of PRL in the remainder of this section.
Simultaneous PRL This variant of PRL also enables the learning of the source domain encoder during the adaptation phase. Hence the parameter changes are more flexible than in the naive PRL. The objective function for the source domain encoder is composed of the classification loss, the MMD loss and the PRL:

$$\mathcal{L}_{E_s} = \mathcal{L}_{cls} + \mathcal{L}_{MMD} + \lambda \, \mathcal{L}_{PRL}.$$
Warm-up PRL This variant of PRL disables the learning of the source domain encoder at the beginning of adaptation. After the MMD loss has decreased to a small value and stops decreasing further, the learning of the source domain encoder is enabled again until the end of the adaptation phase. This modification of the Simultaneous PRL is intended to prevent unstable training behavior during the early phase of adaptation.
In-turn PRL This variant of PRL repeatedly disables the learning of the source domain encoder for $k$ epochs, and then enables it for another $k$ epochs. The intuition behind this strategy is that while updates on the source encoder are disabled, the target domain encoder has a fixed reference constraint that avoids unstable behavior and unintentionally dramatic changes; while updates are enabled, the source domain model can in turn use the current target domain model as a reference, allowing a progressive but gradual change of the parameters for both domains.
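The in-turn schedule can be sketched as a simple epoch-based toggle (illustrative Python; `k` is the switching period and the helper names are ours, not the paper's):

```python
def source_trainable(epoch: int, k: int) -> bool:
    """In-turn schedule: freeze the source encoder for k epochs,
    then train it for k epochs, and repeat."""
    return (epoch // k) % 2 == 1


def apply_inturn_schedule(epoch, k, source_encoder):
    """Toggle gradient updates for every parameter of the source encoder."""
    trainable = source_trainable(epoch, k)
    for p in source_encoder.parameters():
        p.requires_grad_(trainable)
    return trainable
```

Calling `apply_inturn_schedule` at the start of each epoch alternates which encoder acts as the fixed reference, matching the description above.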
5 Experiments
In this section, we first describe the domain adaptation datasets that we use for evaluation, and then we compare the different variants of PRL and the baseline models on these datasets.
5.1 Datasets
To sufficiently compare the proposed method with other state-of-the-art methods, we adopt the widely used domain adaptation dataset Office. To further evaluate the method on more challenging domains, we also evaluate it on a relatively new dataset: LandmarkDA. The details of these datasets are described below.
Office-31 This is a classic domain adaptation dataset with three different domains: Amazon, DSLR and Webcam, with 31 classes for each domain. Among the three domains, DSLR and Webcam have a very similar data distribution, thus adaptation on these two domains is easier than the other combinations. We evaluate the overall accuracy for all available configurations (6-direction domain adaptation).
LandmarkDA This is a very recent dataset for visual domain adaptation. It also includes three different domains, photos, paintings and drawings, with 25 classes in total. The differences between these domains are much larger than those in Office-31; hence the dataset is useful for evaluating adaptation methods on domains with more diversity.
5.2 Experimental setup
For all of the datasets, we construct the domain adaptation task with one source domain and one target domain for every possible combination. The performance of the model is evaluated by the overall classification accuracy on the target domain. The Office-31 and LandmarkDA datasets share the same experimental setting including the base model architecture.
There are many design options for an unsupervised domain adaptation model. On one hand, this provides more possibilities to improve the model; on the other hand, it makes the comparison of different methods more complicated. To make a fair comparison, we insist on using the same design options for all the baselines and proposed methods except for the key feature of each method. The following settings are used for all methods compared in the experiments.
Base architecture The study of Csurka et al. clearly shows that different deep neural networks (DNNs) have large performance variations even when using the same UDA methods. Since the focus of our work is not to improve the architecture of the DNNs, we select the basic and most widely used AlexNet as the base model for all experiments. Similar to many existing works, we also adopt the AlexNet model pre-trained on ImageNet to accelerate the initial supervised learning phase on the source domain. We are aware that freezing certain layers when fine-tuning the pre-trained model on new tasks/domains might help improve the performance on some datasets; however, as there are many options for selecting which layers to freeze, we choose to avoid this extra variance by simply fine-tuning all of the parameters of the base model. In fact, the idea of freezing layers does not contradict the PRL, since we can still easily apply the PRL to the layers that are not frozen.
Implementation details To use the pre-trained AlexNet model on the domain adaptation datasets, we replace the final fully-connected (FC) layer of the original model with a new, randomly initialized FC layer matching the number of classes in the dataset, e.g., an FC layer with 31-dimensional outputs for the Office-31 dataset. The first baseline model simply fine-tunes the pre-trained model on the source domain; we refer to the encoder part of this model as the source-only encoder. All other baselines and PRL variants are based on this model, and the classifier part of this model is fixed during all adaptation procedures.
The other two baselines are a shared-encoder model, which uses a single encoder for both domains, and an independent-encoder model, which uses separate encoders for the source and target domains. All these encoders are initialized with the parameters of the source-only encoder. These baselines are essentially unified versions of existing methods such as DDC and ADDA, using the same settings for a simple comparison.
Hyper-parameters The Gaussian kernel width for the MMD loss is set to 50000 for all methods and domains. This value was obtained by grid search without accessing the target domain labels: we only need a value for which the MMD loss continuously decreases, which is an unsupervised criterion. The optimizer settings were fixed, with a learning rate of 0.0001 and a weight decay of 0.00002; these values can be chosen based on the performance on the source domain, where labels are available. We used a mini-batch size of 256 for training the source-only model and a size of 128 for adaptation.
PRL variants The main difference between the PRL variants and the baseline models is the parameter reference loss term. There is one hyper-parameter related to this loss, namely the reference weight. This hyper-parameter can also be easily selected by observing the changes of the MMD loss and the PRL. Using a larger value at the beginning usually does not harm the performance, since it prevents the parameters from changing significantly from the original model. Then, if we observe that the MMD loss is not decreasing, we can choose a smaller reference weight to allow the discrepancy loss to be minimized. In our experiments, the reference weight is set to 10 for the Office dataset and 100 for the LandmarkDA dataset.
5.3 Results
Table 1 shows the evaluation results on the Office-31 dataset using the proposed evaluation procedure, i.e., without using the target domain labels for hyper-parameter tuning and model selection.
Baseline models The shared-encoder baseline performs better than the source-only model on almost all adaptation directions, while the independent-encoder baseline performs worse on all adaptation configurations. These results indicate that too much flexibility in the double-stream encoders can even harm the performance.
$\ell_1$ versus $\ell_2$ The results show that the $\ell_1$ loss performs much better than the $\ell_2$ loss. Though more evidence is needed, these results suggest that selectively fixing some parameters while allowing the others to vary relatively freely might be better than allowing moderate arbitrary changes in all parameters.
Variants of PRL The simultaneous PRL does not work very well in general, due to the large flexibility during the early training phase, though it is still generally better than the independent-encoder baseline. On domains that are already highly similar before adaptation (DSLR and Webcam), the performance of simultaneous PRL is good, but the small differences are not enough to prove superiority over the other methods. Warm-up and in-turn PRL perform better than all other candidates, of which in-turn PRL is more stable and has slightly better performance on the Office dataset. Figure 5 demonstrates the superior stability of PRL during the adaptation training phase. When comparing the variants of PRL to the baselines, we observed that when the source domain has relatively large-scale data (A2D and A2W), the performance of PRL is significantly better than that of all the baselines, which is evidence that PRL can utilize the knowledge learned from the source domain more effectively.
| Method | Ph→Pa | Ph→Dr | Pa→Ph | Pa→Dr | Dr→Ph | Dr→Pa | Avg |
|---|---|---|---|---|---|---|---|
| shared | 69.9 (70.3) | 64.4 (65.2) | 81.1 (82.5) | 70.9 (71.5) | 74.1 (76.0) | 67.5 (68.6) | 71.3 (72.3) |
| independent | 48.7 (54.1) | 35.6 (43.3) | 60.1 (74.5) | 42.6 (52.2) | 43.4 (59.2) | 36.4 (54.8) | 44.5 (56.4) |
| warm-up | 70.1 (70.5) | 65.6 (66.0) | 81.8 (82.4) | 70.6 (70.8) | 76.2 (77.5) | 68.3 (68.9) | 72.1 (72.7) |
| in-turn | 69.6 (70.0) | 63.9 (64.5) | 82.2 (82.5) | 69.2 (69.7) | 75.6 (77.4) | 67.8 (68.9) | 71.4 (72.2) |
Discriminative ability As shown in Figure 4, when the independent-encoder baseline and in-turn PRL have very similar and small MMD losses, there is still a large performance gap between the two models on the A2D and A2W adaptation tasks. These results support our claim that existing methods lack an adequate way to preserve the discriminative ability of the target domain features. In contrast, PRL is able to generate more discriminative features by making use of the learned parameters more efficiently. The results also show that the performance of parameter sharing depends on the similarity between the source and target domains (it performs better on the W2D and D2W tasks); however, fully sharing the parameters may not be a good idea, because not all of the knowledge learned from the source domain is applicable to the target domain. The selective knowledge transfer achieved by the $\ell_1$ loss results in the superior performance of PRL on domains with more diversity.
Influence of evaluation procedures Now that we have seen the results obtained using our proposed evaluation procedure, it is interesting to see how they compare to those obtained by the traditional evaluation procedure. Table 2 lists the evaluation results obtained by using the target domain labels for model selection. We see that the baseline methods are significantly influenced by the evaluation procedure. In contrast, the PRL has more stable performance during training, and is hence expected to be more suitable for real-world applications in which no target domain labels are available.
Results on LandmarkDA The PRL variants have slightly better performance and stability compared to the shared-encoder baseline, and are significantly better than the independent-encoder baseline on the LandmarkDA dataset, as shown in Table 3. The results are generally consistent with those on the Office dataset. Moreover, even the average accuracy from the latest epoch (without model selection) is superior to the state-of-the-art results reported in prior work that also uses AlexNet to extract deep features and MMD to minimize the domain discrepancy.
6 Conclusion
We observe that existing Siamese network-based domain adaptation methods have limited ability to produce target domain features that retain discriminative ability. We therefore propose a novel parameter reference loss that encourages the parameters of the target domain encoder to partially remain close to those of the source domain encoder, which has a direct connection to the classifier. Compared to full parameter sharing, this allows a more flexible and selective use of the parameters learned from the source domain during the adaptation phase. Our experiments show that even the naive approach of using the pre-trained encoder parameters as a fixed reference during domain adaptation improves both the adaptation performance and the training stability. These results agree with our hypothesis that using the learned encoder parameters only for initializing and fine-tuning the target encoder on a new domain/task is an inefficient use of resources.
In addition, we argue that existing UDA evaluation procedures can contradict the requirements that such models must meet in real-world usage. We therefore propose a simple but more reasonable evaluation procedure, with hyper-parameter tuning and model selection performed without access to target domain labels. We argue that the key requirement is to make UDA training more stable, which is what matters most in real applications. Our experimental results indicate that even in this strict setting, our proposed method still achieves stable performance superior to the existing baselines.
References
-  K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, et al. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. arXiv preprint arXiv:1709.07857, 2017.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 95–104, 2017.
-  K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 343–351, 2016.
-  J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature verification using a “siamese” time delay neural network. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 737–744, 1994.
-  S. Chopra, S. Balakrishnan, and R. Gopalan. DLID: Deep learning for domain adaptation by interpolating between domains. In Proc. International Conference on Machine Learning (ICML) Workshop on Challenges in Representation Learning, 2013.
-  G. Csurka. A comprehensive survey on domain adaptation for visual applications. In G. Csurka, editor, Domain Adaptation in Computer Vision Applications, pages 1–35. 2017.
-  G. Csurka, F. Baradel, B. Chidlovskii, and S. Clinchant. Discrepancy-based networks for unsupervised domain adaptation: A comparative study. In Proc. International Conference on Computer Vision (ICCV), pages 2630–2636, 2017.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Proc. International Conference on Machine Learning (ICML), pages 647–655, 2014.
-  B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In Proc. International Conference on Computer Vision (ICCV), pages 2960–2967, 2013.
-  Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In Proc. International Conference on Machine Learning (ICML), pages 1180–1189, 2015.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
-  M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In Proc. European Conference on Computer Vision (ECCV), pages 597–613, 2016.
-  B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2066–2073, 2012.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
-  R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In Proc. International Conference on Computer Vision (ICCV), pages 999–1006, 2011.
-  A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
-  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  M. Long, Y. Cao, J. Wang, and M. Jordan. Learning transferable features with deep adaptation networks. In Proc. International Conference on Machine Learning (ICML), pages 97–105, 2015.
-  M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu. Transfer feature learning with joint distribution adaptation. In Proc. International Conference on Computer Vision (ICCV), pages 2200–2207, 2013.
-  M. Long, J. Wang, and M. I. Jordan. Deep transfer learning with joint adaptation networks. In Proc. International Conference on Machine Learning (ICML), pages 2208–2217, 2017.
-  S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2011.
-  S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
-  P. Russo, F. M. Carlucci, T. Tommasi, and B. Caputo. From source to target and back: symmetric bi-directional adaptive gan. arXiv preprint arXiv:1705.08824, 2017.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 2234–2242, 2016.
-  B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In Proc. AAAI Conference on Artificial Intelligence, pages 2058–2065, 2016.
-  B. Sun and K. Saenko. Deep coral: Correlation alignment for deep domain adaptation. In Proc. European Conference on Computer Vision (ECCV) Workshops, pages 443–450, 2016.
-  C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proc. International Conference on Computer Vision (ICCV), pages 843–852, 2017.
-  A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1521–1528, 2011.
-  E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2962–2971, 2017.
-  E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
-  W. Zellinger, T. Grubinger, E. Lughofer, T. Natschläger, and S. Saminger-Platz. Central moment discrepancy (cmd) for domain-invariant representation learning. In Proc. International Conference on Learning Representations (ICLR), 2017.
-  E. Zhong, W. Fan, Q. Yang, O. Verscheure, and J. Ren. Cross validation framework to choose amongst models and datasets for transfer learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), pages 547–562, 2010.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. International Conference on Computer Vision (ICCV), pages 2223–2232, 2017.
Appendix A Applying PRL to existing methods
(Table 4 columns: A→D, A→W, W→A, W→D, D→A, D→W, Avg)
In the main paper we focused on comparing Parameter Reference Loss (PRL) with different baselines (weight sharing or independent encoders) under the same base model architecture and design options. Here we instead show an example of applying PRL to existing unsupervised domain adaptation (UDA) methods, and compare its performance with several state-of-the-art methods based on the discrepancy loss.
We select one of the most basic deep-network-based UDA methods, Deep Domain Confusion (DDC) , to combine with PRL. Different from the model used in the main paper, we add an extra adaptation layer as a bottleneck after the fc7 layer of AlexNet . The adaptation layer is a fully-connected layer with 256-dimensional output, following . The only difference between our model and DDC is the use of PRL instead of weight sharing between the source and target domain encoders.
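As a concrete illustration, below is a minimal numpy sketch of this setup. The helper names (`init_encoder`, `encode`, `prl`) are ours, and PRL is assumed here to be a squared L2 distance between corresponding source/target encoder parameters; the paper's exact loss may be weighted differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_encoder(in_dim=4096, bottleneck=256):
    """The extra adaptation layer: a 256-d fully-connected bottleneck."""
    return {"W": rng.normal(0.0, 0.01, (in_dim, bottleneck)),
            "b": np.zeros(bottleneck)}

def encode(params, x):
    """Map fc7-like features to the bottleneck space (ReLU activation)."""
    return np.maximum(x @ params["W"] + params["b"], 0.0)

def prl(src, tgt):
    """Parameter Reference Loss: squared L2 distance between
    corresponding parameters of the two encoders."""
    return sum(np.sum((src[k] - tgt[k]) ** 2) for k in src)

source_enc = init_encoder()
# Unlike weight sharing, the target encoder gets its own copy of the
# parameters; PRL merely pulls it toward the source reference.
target_enc = {k: v.copy() for k, v in source_enc.items()}
assert prl(source_enc, target_enc) == 0.0       # identical = weight sharing

x = rng.normal(0.0, 1.0, (8, 4096))             # a batch of fc7-like features
assert encode(target_enc, x).shape == (8, 256)  # 256-d bottleneck outputs

# During adaptation the target parameters can drift, and PRL grows.
target_enc["W"] += rng.normal(0.0, 0.001, target_enc["W"].shape)
assert prl(source_enc, target_enc) > 0.0
```

With identical parameters the PRL term vanishes, recovering weight sharing as a special case; a nonzero value measures how far the target encoder has moved from the source reference.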
To facilitate direct comparisons between our models and the existing methods, we tried to improve the model trained only on the source domain. By using techniques such as freezing layers during the source domain training phase, we obtained a model much better than the one used for comparing the different baselines and PRL variants; however, its performance is still weaker than that reported in previous work.
For all experiments with PRL variants, the training procedure is exactly the same as described in the main paper. In addition, we modified some hyper-parameters to account for the change in features caused by the extra adaptation layer. Specifically, we use a kernel width of 1000 for MMD and a reference weight of 100 for PRL in the following experiments. All these hyper-parameters are selected without access to the target domain labels.
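To show where these hyper-parameters enter, the following sketch computes a biased squared-MMD estimate with a Gaussian kernel of width 1000 on batches of 256-d bottleneck features. The exact estimator and kernel parameterization used by DDC may differ; this is only an illustrative assumption.

```python
import numpy as np

def gaussian_kernel(x, y, width=1000.0):
    """RBF kernel k(a, b) = exp(-||a - b||^2 / width), pairwise over rows."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / width)

def mmd2(xs, xt, width=1000.0):
    """Biased estimate of the squared MMD between two feature batches."""
    return (gaussian_kernel(xs, xs, width).mean()
            + gaussian_kernel(xt, xt, width).mean()
            - 2.0 * gaussian_kernel(xs, xt, width).mean())

rng = np.random.default_rng(0)
src_feat = rng.normal(0.0, 1.0, (32, 256))  # source bottleneck features
tgt_feat = rng.normal(0.5, 1.0, (32, 256))  # target batch, shifted distribution

# Identical batches give zero discrepancy; shifted ones give a positive value.
assert abs(mmd2(src_feat, src_feat)) < 1e-12
assert mmd2(src_feat, tgt_feat) > 0.0

# Schematically, the full objective then combines both weighted terms:
#   loss = classification_loss + mmd2(src_feat, tgt_feat) + 100.0 * prl_term
# where 100.0 is the reference weight for PRL mentioned above.
```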
Table 4 shows the experimental results on the Office dataset. The first four columns are from Deep CORAL . Note that our implementation of the base model performs worse than the one used in previous work ( vs ). The middle rows of results are obtained by selecting the latest snapshot of the model, while the bottom rows of results are from the best snapshot during the training phase, i.e., using model selection with the target domain labels.
Latest models We first compare the latest models of different methods. The results are consistent with the previous experiments: in-turn and warm-up PRL perform better than the single-encoder model (DDC), especially on the A→D and A→W tasks.
Comparison with existing methods We can clearly see that warm-up PRL outperforms DDC even with a weaker CNN base model ( vs ). Considering the difference in the CNN base model, the improvement in average accuracy is () vs (). Again, the advantage of PRL is especially strong on the A→D and A→W tasks, where PRL also outperforms DAN (single-layer multi-kernel version) and is comparable to Deep CORAL. Such results may support the hypothesis that PRL can take advantage of knowledge learned from the source domain more efficiently, particularly when the source domain has a larger amount of data (number of images: Amazon 2,817; DSLR 498; Webcam 795).
Best models The best models naturally perform better than the latest models. What matters here is the difference caused by the model selection method (with or without access to the target domain labels). We found that PRL variants are in general more stable than the single-encoder model (DDC) with respect to the model selection method. These models also achieve state-of-the-art performance on several tasks; however, since this selection requires target domain labels, we consider it improper for evaluating the method and report the results here only for reference.
A.2 A general technique for UDA
The main objective of the above experiments is to show the potential of PRL to improve existing UDA methods. Due to its simplicity, PRL can be applied to almost all Siamese-network-based models (e.g., DAN, Deep CORAL, etc.) with only minor modifications to existing code. In fact, PRL can be used as a basic technique like weight sharing, providing another, more flexible kind of regularization. We are also interested in utilizing PRL in tasks other than domain adaptation, and in further investigating the efficient use of learned parameters.
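To make the "minor modifications" concrete, a hypothetical drop-in PRL penalty could look as follows. The reference weight 100 follows the setting used above; the helper name and gradient form are our own sketch, not code from the paper.

```python
import numpy as np

def prl_penalty(src_params, tgt_params, weight=100.0):
    """PRL term and its gradient w.r.t. the target parameters.

    Adding this to any Siamese model's loss replaces hard weight
    sharing with a soft pull of the target encoder toward the
    (frozen) source encoder."""
    loss = weight * sum(np.sum((tgt_params[k] - src_params[k]) ** 2)
                        for k in tgt_params)
    grads = {k: weight * 2.0 * (tgt_params[k] - src_params[k])
             for k in tgt_params}
    return loss, grads

# Sketch: one gradient step on the PRL term alone moves the target
# parameters toward the source reference.
src = {"W": np.ones((2, 2))}
tgt = {"W": np.zeros((2, 2))}
before, grads = prl_penalty(src, tgt)
tgt = {k: v - 1e-3 * grads[k] for k, v in tgt.items()}
after, _ = prl_penalty(src, tgt)
assert after < before
```

In an existing training loop, one would simply add the returned loss (and its gradient) to the method's own objective for the target encoder, leaving the rest of the code untouched.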