Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation

08/27/2020 ∙ by Taotao Jing, et al. ∙ Indiana University

Unsupervised domain adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain. Conventional UDA concentrates on extracting domain-invariant features through deep adversarial networks. However, most such methods seek to match the feature distributions of the two domains without considering the task-specific decision boundaries across classes. In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD^2CN) to align the source and target data distributions while simultaneously matching task-specific category boundaries. To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment. Moreover, we design two classifiers with different structures to identify the unlabeled target samples under the supervision of the labeled source domain data. Such dual distinct classifiers with different architectures can capture diverse knowledge of the target data structure from different perspectives. Extensive experimental results on several cross-domain visual benchmarks demonstrate the model's effectiveness in comparison with other state-of-the-art UDA methods.


1 Introduction

Deep neural networks (DNNs) have made significant progress with the help of abundant well-labeled training data and achieved remarkable performance improvements on various tasks [18, 33]. However, massive amounts of annotated training data are not always available due to prohibitively expensive data collection and annotation costs. Domain adaptation (DA) has attracted increasing attention because it addresses a frequent real-world issue in which we have no access to massive labeled target domain training data [25, 21, 28]. The mechanism of domain adaptation is to uncover the common latent factors across the source and target domains and to reduce both the marginal and conditional mismatch between the domains in the feature space. Following this, different domain adaptation techniques have been developed, including feature alignment and classifier adaptation [35, 14, 39].

Recent research efforts on domain adaptation have already shown promising performance by seeking an effective domain-invariant feature extractor across two domains, so that source knowledge can be adapted to facilitate the recognition task in the target domain [13, 37, 30, 16, 15]. The idea is to deploy cross-domain matching losses to guide domain-invariant feature learning. First, the discrepancy loss (e.g., maximum mean discrepancy (MMD)) is one of the most widely-used strategies to measure the distribution difference between the source and target domains [1, 11]. Along this line, many DA approaches design a class-wise MMD by incorporating the pseudo labels of target data [14, 28]. Second, the adversarial loss has been successfully applied to eliminate domain shift at the feature or pixel level [7, 32, 20, 42], where one or more domain discriminators are trained against a feature generator in an adversarial manner. Moreover, various reconstruction penalties on target samples have been proposed to capture target-specific structures, e.g., iCAN [41]. However, most existing domain adaptation methods explicitly match the source and target distributions by considering only domain-wise adaptation, while ignoring the alignment of task-specific category boundaries.
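As context for the discrepancy-based strategy mentioned above, a minimal NumPy sketch of the (biased) squared MMD with an RBF kernel might look as follows; the function names and kernel bandwidth are illustrative choices, not the implementation of any cited method:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two batches of feature vectors.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    """Biased estimate of the squared MMD between source features xs
    and target features xt; zero when the two batches coincide."""
    k_ss = gaussian_kernel(xs, xs, sigma).mean()
    k_tt = gaussian_kernel(xt, xt, sigma).mean()
    k_st = gaussian_kernel(xs, xt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Minimizing this quantity over a feature generator pulls the two empirical distributions together, which is the core of the discrepancy-loss family of methods.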

Figure 1: Framework overview of our proposed model, consisting of a domain-invariant embedding feature generator, a fully-connected neural network classifier (solid line), and a prototypical classifier (dashed line). Two alignment losses are explored to reduce the feature and prediction distribution differences across the two domains and the dual classifiers, respectively.

To address this issue, some recent DA works jointly consider task-specific category-level alignment [32, 20, 21]. Along this line, Saito et al. presented Maximum Classifier Discrepancy (MCD), which uses two task-specific classifiers to detect category boundaries and jointly aligns feature distributions and category boundaries across domains [32]. Following this, Lee et al. proposed Sliced Wasserstein Discrepancy (SWD) as a new probability distribution discrepancy measurement to capture a natural notion of dissimilarity between the outputs of task-specific classifiers [20]. Later, [42] introduced Domain-Symmetric Networks (SymNets) together with a two-level (feature-level and category-level) domain confusion scheme that drives the intermediate features to be invariant at the corresponding categories of the two domains. These methods benefit from various strategies to maximize the disparity between the dual classifiers' predictions; however, using classifiers with identical architectures not only limits the feature-distribution knowledge obtained from different perspectives, but also risks the two task-specific classifiers converging to similar class-wise boundaries, especially when the data distribution is imbalanced across categories.

In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (ADCN) with two classifiers of different architectures, i.e., a neural network classifier and a prototypical classifier, to facilitate the alignment of both domain distributions and category decision boundaries (Fig. 1). To the best of our knowledge, this is a pioneering work in exploring dual classifiers of different structures in domain adaptation. The general idea is to perform adversarial training over two classifiers with different architectures on top of the output of a single domain-invariant feature generator. To sum up, we highlight the three-fold contributions of this paper as follows:

  • We train dual task-specific classifiers with different architectures under source supervision to explore the task-specific decision boundaries on the target domain. With the different prediction properties of the dual classifiers, we have a better chance of capturing the ground-truth decision boundaries for the target domain.

  • We propose a novel discriminative cross-domain alignment loss and an Importance Guided Optimization strategy to mitigate cross-domain mismatch. This facilitates the alignment of the domain-invariant embedding feature distributions across domains and eliminates the distraction of misestimated target samples in the early stages of optimization.

  • We adopt a discrepancy loss, optimized adversarially together with the domain-invariant feature generator and dual classifiers, to couple the cross-domain label distributions and improve the prediction performance of the dual classifiers. Thus, the components benefit from each other to boost the target learning task.

2 Related Work

Domain adaptation (DA) has been extensively studied recently; it addresses the setting in which there are no or only limited labels in the target domain, and it has shown very promising performance in different vision applications [42, 21, 26, 38, 20].

With the renaissance of deep neural networks, deep DA methods successfully embed DA into deep learning pipelines by either minimizing an appropriate distribution distance metric [24] or leveraging adversarial techniques to generate domain-invariant representations [32, 3]. The idea is to incorporate domain alignment strategies at the top layers to explicitly address the enlarged domain discrepancy resulting from traditional deep learning models. To name a few, Long et al. proposed the Domain Adaptation Network (DAN) to incorporate multi-kernel MMD distances across domains in the last three task-specific layers [23], and later presented the Joint Adaptation Network (JAN) together with a joint MMD criterion [27]. Another strategy is to leverage generative adversarial networks (GANs) [10] to couple the cross-domain discrepancy in an adversarial manner [7, 32, 41, 42]. Such techniques train a domain discriminator to differentiate source and target samples, while the feature generator tries to deceive the domain discriminator, so that domain-invariant features are produced. Ganin et al. proposed DANN to generate features that are task-specific discriminative yet domain-wise indiscriminative [8]. Tzeng et al. presented ADDA for adversarial adaptation [36].

Both discrepancy-based and adversarial-loss-based methods attempt to match the whole source and target distributions completely; neither considers the target domain data structure or the task-specific decision boundaries. To address this, Saito et al. adopted task-specific category decision boundaries and proposed a model with two classifiers acting as a discriminator to detect the relationship between the source and target domain data (MCD) [32]. By maximizing the discrepancy between the two classifiers' predictions, the framework is able to screen out target samples that lie near the category decision boundaries and far from the source domain support. Following this, Lee et al. extended MCD with a novel Wasserstein metric to capture a natural notion of dissimilarity between the outputs of the two task-specific classifiers [20]. Most recently, Li et al. claimed that label distribution alignment alone is still not enough and presented Joint Adversarial Domain Adaptation (JADA) to simultaneously align domain-wise and class-wise distributions across source and target in a unified adversarial learning process [21]. Unfortunately, existing works seek to maximize the prediction difference between two classifiers of the same architecture to explore different task-specific knowledge, which limits the divergence of the category decision boundaries captured across domains.

Differently, we propose a novel framework with two classifiers of different structures, which assists the model in learning more diverse data distribution patterns and less similar category decision boundaries from different perspectives. Integrating task-specific category boundaries with feature-level cross-domain adaptation, our proposed model is able to narrow the mismatch between the source and target domains in the shared domain-invariant embedding space. Moreover, we explore a cross-domain discriminative distribution alignment under a sample Importance Guided Optimization strategy, which is experimentally shown to reduce the source-target domain shift.

3 The Proposed Method

3.1 Preliminaries and Motivation

We are given a labeled source domain containing labeled samples, as well as an unlabeled target domain of unlabeled samples. The source and target data are drawn from different distributions, while the two domains share an identical label space over a fixed number of categories. Only the source domain ground-truth labels are accessible for training. The goal of domain adaptation is to learn a model that predicts the unlabeled target data under supervision from the source domain.

Recent domain adaptation works apply adversarial networks to generate domain-invariant features of the source and target samples, which makes classifiers trained only on source domain data applicable to the target domain [7, 9, 41]. Most of them aim to match the source and target distributions completely, without considering the task-specific decision boundaries between categories. Most recently, the idea of dual adversarial classifiers [32, 20, 42, 21] has been explored to replace the binary domain discriminator of the original adversarial domain adaptation. However, these works train two classifiers of the same type from scratch on labeled source data. This limits the discriminative ability in target prediction, since same-type classifiers tend to have similar properties. A traditional neural network classifier fits the training data by optimizing its objective, so the learned decision boundaries capture the global structure of the data to maximally separate the classes. Such a decision boundary learned under source supervision cannot be well adapted to target samples drawn from a different distribution. Therefore, two neural network classifiers of the same architecture trained under source supervision struggle to diversify their decision boundaries.

This motivates us to explore two classifiers of different architectures. We therefore propose a novel adversarial dual classifiers network with two classifiers of different structures, a neural network classifier and a prototypical classifier [34], which can capture diverse data distribution patterns and more varied task-specific category boundaries from different perspectives, and also promote the detection of target samples that lie outside the source support. Interestingly, the prototypical classifier exploits the local structure of the data, since prototypes assign labels based on the similarity between samples and each prototype. The competition between two classifiers of different structures is more likely to diversify the decision boundaries, which benefits adversarial training with the domain-invariant generator.

3.2 Adversarial Dual Distinct Classifiers Network

We first present the overall framework of our proposed adversarial dual classifier network in Fig. 1. Given the labeled source and unlabeled target domain data, the domain-invariant embedding features are generated and aligned by the discriminative cross-domain alignment; then the dual classifiers, which consist of two classifiers with different architectures, further refine the task-specific decision boundaries. The generator maps source and target domain data into a shared embedding feature space, in which target samples lie close to the support of the source domain data. The two subsequent classifiers of different structures, a fully-connected neural network classifier and a prototypical classifier, capture diverse task-specific category knowledge on the target domain from different perspectives.

3.2.1 Dual Classifiers Over Source Supervision

Since the source and target domains have different distributions, a domain-invariant feature generator is deployed to capture enriched information across source and target through hierarchical structures, followed by our dual classifiers: a fully-connected neural network classifier and a prototypical classifier. With the feature extracted by the generator as input, we calculate the corresponding probability predictions of the two classifiers.

Specifically, the first is a traditional multi-layer non-linear classifier, while the prototypical classifier is defined by the similarity between a target sample's feature and each category prototype (i.e., class center). For each class, the prototype is the mean of the extracted domain-invariant features assigned to that class, where the assignment uses the network classifier's predictions as pseudo labels for the target samples.
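The prototypical classifier described above can be sketched as follows, assuming cosine similarity followed by a softmax over classes; the function names are illustrative and the exact normalization may differ from the authors' implementation:

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Prototype of each class: mean embedding of its (pseudo-)labeled samples.
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_predict(features, prototypes):
    """Cosine similarity of each sample to each class prototype,
    converted into a probability over classes with a softmax."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T
    e = np.exp(sims - sims.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)
```

Because the prototypes are computed directly from the features, this classifier has no trainable parameters of its own, which is the property exploited in the next subsection.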

In order to obtain task-specific discriminative features from the generator while maintaining classification performance on the source domain, we add source supervision to learn the parameters of the generator and the network classifier. Since the prototypical classifier does not contain any trainable parameters, its supervision signal on source predictions optimizes the generator only. To this end, we minimize the cross-entropy loss between the source labels and the labels predicted by the two classifiers, defined as follows:

(1)

where the loss is the standard cross-entropy, computed between each classifier's probability output and the ground-truth label of each source sample.
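A hedged sketch of the source supervision loss in Eq. (1), assuming it is the sum of the two classifiers' cross-entropy losses on the labeled source samples (the function names are our own):

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    # Mean negative log-likelihood of the true class under the predictions.
    return -np.log(probs[np.arange(len(labels)), labels] + eps).mean()

def source_supervision_loss(p1, p2, ys):
    """Eq. (1) as we read it: cross-entropy of the network classifier's
    output p1 plus that of the prototypical classifier's output p2,
    both against the ground-truth source labels ys."""
    return cross_entropy(p1, ys) + cross_entropy(p2, ys)
```

Gradients of the second term flow only into the generator, since the prototypical classifier itself has no parameters.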

3.2.2 Adversarial Dual Classifiers

The dual classifiers are capable of recognizing target domain samples close to the support of the source domain. For target samples far from the source support, the two classifiers tend to produce different probability outputs. To detect target samples outside the support of the source supervision, we propose to measure the disagreement between the classifiers' predictions with a distribution discrepancy measurement [20, 32].

Existing works diversify the dual classifiers by maximizing the divergence between their predictions. However, classifiers with the same structure and only slightly different random initializations [32, 20] have a weakened ability to capture diverse task-specific knowledge and decision boundaries from different perspectives. In our model, we build two classifiers with different architectures, which are more likely to capture inconsistent information from different perspectives. Adversarial training thus further enhances target prediction performance, and the classifier discrepancy is defined as:

(2)

where the two probability predictions are obtained from the two classifiers for each sample. The discrepancy measurement function captures the geometric information of the distributions to calculate the discrepancy between the probability predictions, and alleviates the gradient vanishing problems that occur in adversarial learning methods.
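One concrete choice of such a discrepancy measurement is the sliced Wasserstein distance adopted by SWD [20]; a simplified NumPy sketch (assuming equal batch sizes and using our own function names) is:

```python
import numpy as np

def sliced_wasserstein_discrepancy(p1, p2, num_projections=64, rng=None):
    """Average 1-D Wasserstein distance between the two classifiers'
    class-probability outputs along random unit projections.
    Assumes p1 and p2 contain the same number of samples."""
    rng = np.random.default_rng(rng)
    k = p1.shape[1]
    theta = rng.normal(size=(num_projections, k))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    # Sorting each projected 1-D sample gives the optimal 1-D transport plan.
    proj1 = np.sort(p1 @ theta.T, axis=0)
    proj2 = np.sort(p2 @ theta.T, axis=0)
    return np.abs(proj1 - proj2).mean()
```

Unlike a pointwise L1 difference, this measure compares the two prediction *distributions*, which is the geometric property referenced above.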

3.2.3 Discriminative Cross-Domain Alignment

So far, our model only aligns cross-domain distributions in terms of the label space; we further exploit feature distribution alignment to boost domain-invariant feature learning. Empirical Maximum Mean Discrepancy (MMD) has been verified as a promising technique to minimize the domain-wise mean difference between two domains, or the class-wise mean difference using pseudo labels of the target [25]. The domain-wise MMD measures the marginal distribution difference between the source and target embeddings [25]. Furthermore, existing works [5] also explore class-wise MMD to align the conditional distribution disparity across domains:

(3)

where the sum runs over all categories, and the class centers are the mean generated embedding representations of the source and target samples belonging to each class.

However, conventional DA algorithms only seek to minimize the distribution difference between source and target domains for samples from the same class. We further propose to explicitly take the information of different categories into account and measure a diff-class divergence across domains, defined as:

(4)

where the diff-class divergence averages the distances between all pairs of class centers from different classes across domains. To sum up, our discriminative cross-domain alignment combines the same-class and diff-class terms.
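A minimal sketch of the discriminative alignment, assuming the loss is the average same-class center distance minus the average diff-class center distance (the signs and uniform weighting are our reading of Eqs. (3)-(4), not the exact formulation):

```python
import numpy as np

def class_centers(feats, labels, classes):
    # Mean embedding per class.
    return {c: feats[labels == c].mean(axis=0) for c in classes}

def discriminative_alignment(fs, ys, ft, yt_pseudo):
    """Same-class term pulls matching source/target class centers together;
    diff-class term pushes centers of different classes apart."""
    classes = sorted(set(ys.tolist()) & set(yt_pseudo.tolist()))
    cs = class_centers(fs, ys, classes)
    ct = class_centers(ft, yt_pseudo, classes)
    same = np.mean([np.sum((cs[c] - ct[c]) ** 2) for c in classes])
    pairs = [(a, b) for a in classes for b in classes if a != b]
    diff = np.mean([np.sum((cs[a] - ct[b]) ** 2) for a, b in pairs]) if pairs else 0.0
    return same - diff
```

Minimizing this quantity makes embeddings of the same class coincide across domains while keeping different classes separated, which is the "discriminative" aspect of the alignment.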

Due to the lack of target domain labels, we assign the predictions of the network classifier as pseudo labels to the target samples. To exploit more effective knowledge transfer iteratively, we propose an Importance Guided Optimization strategy that only considers target samples with high prediction confidence during the cross-domain alignment, since low-confidence samples would mislead the optimization. That is, only samples whose top class probability exceeds a constant threshold are accepted when constructing the cross-domain alignment. It is noteworthy that we do not require the selected samples to cover the whole label space during training, since considering only the classes with high-confidence samples tends to yield effective cross-domain alignment by avoiding too many misclassified target samples, especially in the early training stage.
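The Importance Guided Optimization selection step can be sketched as follows, where the threshold value is a hypothetical parameter:

```python
import numpy as np

def confident_pseudo_labels(probs, threshold=0.9):
    """Keep only target samples whose top class probability exceeds the
    threshold; returns (kept sample indices, their pseudo labels)."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.nonzero(keep)[0], probs[keep].argmax(axis=1)
```

Only the returned subset participates in the discriminative cross-domain alignment; classes with no confident samples simply drop out of that iteration's alignment terms.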

3.3 Overall Objective and Optimization

To eliminate the side effect of uncertainty on unlabeled target prediction, we also explore the entropy minimization regularization [42, 24, 26]:

(5)

where the two terms denote the predictions of each target sample belonging to each class, obtained by the network classifier and the prototypical classifier, respectively.
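Eq. (5) can be sketched as the mean Shannon entropy of the predicted class distributions (computed once per classifier in the full objective):

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    # Mean Shannon entropy of the predicted class distributions; minimizing
    # it pushes target predictions toward confident, near one-hot outputs.
    return -(probs * np.log(probs + eps)).sum(axis=1).mean()
```

The loss is zero for one-hot predictions and maximal (log of the class count) for uniform predictions, so minimizing it sharpens the target outputs.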

To sum up, we integrate adversarial dual classifiers training and cross-domain discriminative alignment together, and propose our overall objective function as:

(6)

where two hyper-parameters balance the contributions of the discriminative alignment and entropy loss terms, respectively.

Similar to existing adversarial network training strategies, we freeze the generator to train the classifiers, then freeze the parameters of the classifiers to update the generator. It is noteworthy that only the network classifier contains trainable parameters, as the prototypical classifier relies only on the embedding features produced by the generator. Meanwhile, inspired by [32], in order to maintain performance on the source domain and detect target samples far from the source support, we train our framework in three steps:

Step A. We first train the feature generator and the network classifier on the source domain only, the same as a standard supervised learning task. Since the prototypical classifier does not have any trainable parameters, only the parameters of the generator and the network classifier are updated. Because our model aims to separate target samples that lie outside the source support from those close to it, maintaining good classification performance on the source domain is crucial. The optimization objective is the source supervision loss.

Step B. We assign pseudo labels to the unlabeled target samples using the classifiers obtained so far. In our experiments, we use the predictions of the prototypical classifier as pseudo labels, which is experimentally shown to achieve better performance; we discuss this in the ablation analysis section. We then fix the feature generator and update the network classifier to maximize the distribution discrepancy between the two classifiers' predictions on the target domain, which detects target samples excluded from the source support.

Step C. We freeze the parameters of the classifier and update the generator to minimize the distribution discrepancy between the two classifiers' predictions on the target domain, so that both classifiers produce more similar and correct predictions on target samples. Furthermore, together with the discriminative cross-domain alignment, the generator couples the source and target domains more closely yet discriminatively in the embedding feature space.

These three steps are repeated once per iteration in our experiments. The generator and the network classifier are initialized and pre-trained on the source domain data.
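The three-step procedure can be sketched in PyTorch as below. This is a simplified illustration: an L1 discrepancy stands in for SWD, the discriminative alignment and entropy terms are omitted, and all function names are our own:

```python
import torch
import torch.nn.functional as F

def discrepancy(p1, p2):
    # L1 distance between probability outputs (stand-in for the SWD measure).
    return (p1 - p2).abs().mean()

def prototype_probs(feat, prototypes):
    # Cosine similarity to each class prototype, softmaxed into probabilities.
    sims = F.normalize(feat, dim=1) @ F.normalize(prototypes, dim=1).t()
    return F.softmax(sims, dim=1)

def train_iteration(G, F1, prototypes, opt_g, opt_f, xs, ys, xt):
    # Step A: supervised training of generator G and network classifier F1
    # on labeled source data (the prototypical classifier has no parameters).
    loss_a = F.cross_entropy(F1(G(xs)), ys)
    opt_g.zero_grad(); opt_f.zero_grad()
    loss_a.backward(); opt_g.step(); opt_f.step()

    # Step B: freeze G, update F1 to keep source accuracy while maximizing
    # the dual-classifier discrepancy on target samples.
    with torch.no_grad():
        ft = G(xt)
    loss_b = F.cross_entropy(F1(G(xs).detach()), ys) \
        - discrepancy(F.softmax(F1(ft), dim=1), prototype_probs(ft, prototypes))
    opt_f.zero_grad(); loss_b.backward(); opt_f.step()

    # Step C: freeze F1, update G to minimize the discrepancy.
    ft = G(xt)
    loss_c = discrepancy(F.softmax(F1(ft), dim=1), prototype_probs(ft, prototypes))
    opt_g.zero_grad(); loss_c.backward(); opt_g.step()
    return loss_a.item(), loss_b.item(), loss_c.item()
```

In the full model, Step C would also include the discriminative cross-domain alignment and entropy terms, and the prototypes would be refreshed from target pseudo labels between iterations.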

4 Experimental Results

Method Ar→Cl Ar→Pr Ar→Rw Cl→Ar Cl→Pr Cl→Rw Pr→Ar Pr→Cl Pr→Rw Rw→Ar Rw→Cl Rw→Pr Avg.
Res-50 [12] 34.9 50.0 58.0 37.4 41.9 46.2 38.5 31.2 60.4 53.9 51.2 59.9 46.1
DAN [23] 43.6 57.0 67.9 45.8 56.5 60.4 44.0 43.6 67.7 63.1 51.5 74.3 56.3
RevGrad [7] 45.6 59.3 70.1 47.0 58.5 60.9 46.1 43.7 68.5 63.2 51.8 76.8 57.6
JAN [27] 45.9 61.2 68.9 50.4 59.7 60.0 45.8 43.4 70.3 63.9 52.4 76.8 58.3
SE [6] 48.8 61.8 72.8 54.1 63.2 65.1 50.6 49.2 72.3 66.1 55.9 78.7 61.5
DSR [2] 53.4 71.6 77.4 57.1 66.8 69.3 56.7 49.2 75.7 68.0 54.0 79.5 64.9
DWT-MEC [31] 50.3 72.1 77.0 59.6 69.3 70.2 58.3 48.1 77.3 69.3 53.6 82.0 65.6
CDAN+E [24] 50.7 70.6 76.0 57.6 70.0 70.0 57.4 50.9 77.3 70.9 56.7 81.6 65.8
MCS [22] 55.9 73.8 79.0 57.5 69.9 71.3 58.4 50.3 78.2 65.9 53.2 82.2 66.3
AFN [38] 52.0 71.7 76.3 64.2 69.9 71.9 63.7 51.4 77.1 70.9 57.1 81.5 67.3
SymNets [42] 47.7 72.9 78.5 64.2 71.3 74.2 64.2 48.8 79.5 74.5 52.6 81.6 67.6
BDG [40] 51.5 73.4 78.7 65.3 71.5 73.7 65.1 49.7 81.1 74.6 55.1 84.8 68.7
Ours 57.4 77.3 80.0 63.4 76.4 76.4 64.2 52.4 80.7 69.6 57.2 83.9 69.9
Table 1: Comparisons of Recognition Rates (%) of Unsupervised Domain Adaptation on Office-Home Dataset (ResNet-50).
Method Res-50 [12] DAN [23] RevGrad [7] JAN [27] MADA [29] CDAN+E [24] AFN [38] SymNets [42] BDG [40] Ours
A→W 68.4±0.2 80.5±0.4 82.0±0.4 86.0±0.4 90.0±0.1 94.1±0.1 90.1±0.1 90.8±0.1 93.6±0.4 93.6±0.3
D→W 96.7±0.1 97.1±0.2 96.9±0.2 96.7±0.3 97.4±0.1 98.6±0.1 98.6±0.2 98.8±0.3 99.0±0.1 98.9±0.2
W→D 99.3±0.1 99.6±0.1 99.1±0.1 99.7±0.1 99.6±0.1 100.0±0.0 99.8±0.0 100.0±0.0 100.0±0.0 99.8±0.0
A→D 68.9±0.2 78.6±0.2 79.7±0.4 85.1±0.4 87.8±0.2 92.9±0.2 90.7±0.5 93.9±0.5 93.6±0.3 95.4±0.3
D→A 62.5±0.3 63.6±0.3 68.2±0.4 69.2±0.3 70.3±0.3 71.0±0.3 73.0±0.2 74.6±0.6 73.2±0.2 74.9±0.3
W→A 60.7±0.3 62.8±0.2 67.4±0.5 70.7±0.5 66.4±0.3 69.3±0.3 70.2±0.3 72.5±0.5 72.0±0.1 75.0±0.5
Avg. 76.1 80.4 82.2 84.6 85.2 87.7 87.1 88.4 88.5 89.6
Table 2: Comparisons of Recognition Rates (%) of Unsupervised Domain Adaptation on Office-31 Dataset (ResNet-50).

4.1 Datasets & Experimental Setup

Office-Home [37] consists of 15,500 images from 65 categories in 4 different domains: Artistic images (Ar), Clip Art (Cl), Product (Pr), and Real-World images (Rw). By choosing any two domains as one task, we can build 12 cross-domain tasks to evaluate our proposed model.

Office-31 contains 4,110 images from 3 domains: Amazon (A), Webcam (W), and DSLR (D), where each domain consists of 31 categories. We evaluate our method on 6 cross-domain tasks to verify the effectiveness of our model.

Comparisons. We compare our proposed method with several state-of-the-art unsupervised domain adaptation models: Deep Adaptation Networks (DAN) [23], Reverse Gradient (RevGrad) [7], Joint Adaptation Networks (JAN) [27], Self-Ensembling (SE) [6], Multi-adversarial Domain Adaptation (MADA) [29], Conditional Adversarial Domain Adaptation Networks (CDAN) [24], Disentangled Semantic Representation (DSR) [2], Domain-specific Whitening Transform & Min-Entropy Consensus (DWT-MEC) [31], Minimum Centroid Shift (MCS) [22], Adaptive Feature Norm Approach (AFN) [38], Domain Symmetric Networks (SymNets) [42], and Bi-Directional Generation (BDG) [40]. All our experiments follow standard unsupervised domain adaptation protocols: all labeled source domain data, together with unlabeled target domain data, are used for training. All comparisons are back-boned with ResNet-50 or use ResNet-50 features [12].

Implementation Details.

We implement our model with PyTorch and adopt ResNet-50 [12] as the backbone. Specifically, a ResNet-50 network is pre-trained on ImageNet [4] and fine-tuned on the source domain, then applied to both source and target domain data (without the last fully-connected layer) to obtain 2,048-dimensional feature representations. The generator is a two-layer fully-connected neural network whose hidden layer has 1,024 units followed by a ReLU activation and dropout with a keep probability of 0.5; the output embedding dimension is 512. The network classifier is a two-layer fully-connected neural network with 512 as the input and hidden layer dimensions, and its output dimension equals the number of categories in the whole label space. Cosine similarity is adopted as the measurement metric in the prototypical classifier. All parameters are updated with the Adam optimizer [17] with a learning rate of 0.001 on the Office-Home and Office-31 datasets. The generator and network classifier are pre-trained and initialized on source domain data only with a learning rate of 0.1 for 2,000 iterations. We deploy the SWD distance [20] as the discrepancy measurement function, and adopt the l-2 norm to evaluate the distribution divergence. The two balancing hyper-parameters are fixed as 0.1 for all tasks, and the confidence threshold is set to 0.03. For the prototypical classifier, we initialize the class prototypes with the source-domain class centers of the features, then iteratively update the prototypes with the target-domain category centroids after obtaining the target pseudo labels, until convergence or a maximum of 3 steps, and return the last-step prediction. All results reported in Tables 1 and 2 are averages of three random runs obtained by the network classifier; we discuss the performances of the two classifiers in the ablation study section.

Figure 2: Ablation experiments on the contribution of various loss terms on the Office-Home dataset (ResNet-50).
Figure 3: Accuracies of the two classifiers on Office-Home. Red and blue results are obtained using the prototypical classifier's predictions as target pseudo labels; the others are based on the network classifier's predictions as pseudo labels.

4.2 Comparison Results

Table 1 and Table 2 report the classification results on target domain data of our proposed model and other comparative methods on the Office-Home and Office-31 datasets, respectively. All comparison results are from their original papers or quoted from [19, 42, 40], as we adopt exactly the same settings. It is noteworthy that our proposed model outperforms state-of-the-art methods on all benchmark datasets in terms of average accuracy, and obtains the best or comparable performance relative to state-of-the-art domain adaptation methods in most cases. Although the Office-Home dataset is more challenging than Office-31 due to its larger number of categories and samples, as well as its significant distribution dissimilarity, our proposed model still improves performance on most tasks, which demonstrates the efficiency and effectiveness of our proposed framework.

DAN and JAN are both MMD-based methods, which seek to eliminate the cross-domain distribution disparity and match the whole source and target domain to a shared domain-invariant feature space. DAN attempts to align feature representations from multiple layers through a multi-kernel variant of MMD. JAN aims to transfer joint distributions of multi-layers’ activation of the networks across domains. With the help of additional domain adaptation terms (e.g., MMD), DAN and JAN lead to a significant performance boost over the source-only-trained model (i.e., ResNet-50) on most adaptation tasks.

Balanced Imbalanced
Y Clock Helmet Knives Bed Couch Folder Marker Pen
74 79 72 39 40 20 20 20
60 69 53 98 64 99 71 99
75.0 71.0 52.8 53.1 67.2 25.3 18.3 51.5
73.3 69.6 49.1 55.1 68.8 28.3 21.1 53.5
Table 3: Class-wise accuracies (%) of the network classifier vs. the prototypical classifier on Office-Home Ar→Cl.
Method A→W D→W W→D A→D D→A W→A Avg.
MCD [32] 88.6 98.5 100.0 92.2 69.5 69.7 86.5
SWD [20] 90.4 98.7 100.0 94.7 70.3 70.5 87.4
Ours (same) 93.3 98.8 100.0 94.7 72.4 73.6 88.8
Ours 93.6 98.9 99.8 95.4 74.9 75.0 89.6
Table 4: Comparisons of the Influence of Dual Classifier Structure on Recognition Rates (%) of Unsupervised Domain Adaptation on Office-31 Dataset (ResNet-50).

RevGrad implements adversarial networks and applies gradient reversal layer to train a domain discriminator. CDAN and MADA both exploit multiplicative interactions between feature representations and category predictions as high-order features to promote the adversarial training. SE explores the use of self-ensembling for visual domain adaptation. DSR assumes that the data generation process is controlled by the semantic latent variables and domain latent variables independently, so employs a variational auto-encoder in order to reconstruct them. MCS designs a unified framework without accessing the source domain data and iteratively assigns pseudo labels to the target samples by an alternating minimization scheme.

DWT-MEC proposes domain alignment layers with feature whitening to match source and target domain distributions and leverages the unlabeled target data by Min-Entropy Consensus loss. AFN proposes a novel Adaptive Feature Norm approach to progressively adapting the feature norms of the two domains to a large range of values. SymNets exploits a novel adversarial classifiers networks and a two-level domain confusion scheme driving the learning of categories invariant intermediate features across domains. BDG bridges source and target domain through consistent classifiers interpolating two intermediate domains.

4.3 Ablation Analysis

In this section, we analyze the contribution and influence of several important terms and hyper-parameters sensitivity in our proposed model.

Figure 4: Ten samples from Office-Home Ar→Cl. One row denotes the ground-truth labels, the other shows the mis-classified labels, and a check mark indicates a correct prediction.
Figure 5: t-SNE visualization of source and target sample features before (left column) and after (right column) domain adaptation through our proposed model. (a) shows the task Ar→Cl from Office-Home and (b) reports the task A→W from Office-31.
Figure 6: Parameter sensitivity analysis on 4 different tasks from the Office-Home dataset for the two balancing hyper-parameters, (a) and (b).

First, we discuss the influence of each component in our framework. By removing one of the discriminative alignment, discrepancy, and entropy losses while keeping the other terms the same as in the original ADCN, we obtain three variants. From Fig. 2, we notice that all three components contribute to improving domain adaptation performance, while our proposed discriminative cross-domain alignment plays a more crucial role than the others, i.e., the discrepancy and entropy minimization losses.

Secondly, we compare the performance obtained when accepting either classifier's predictions as target-domain pseudo labels. From the results in Fig. 3, we observe that the prototype-based classifier yields better pseudo labels than the source-trained classifier in most cases. Compared to the classifier trained on the source domain, the prototype-based classifier relies on the target prototypes and maintains better performance even at the early training stage. Fig. 4 shows several test samples that the prototype-based classifier classifies correctly while the source-trained one cannot handle, which emphasizes its superiority.
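The prototype-based pseudo-labeling discussed above can be sketched as nearest-class-prototype assignment: a prototype is the mean feature vector of a class, and each target sample is pseudo-labeled by its nearest prototype. A simplified numpy sketch (the paper's actual prototype update and distance metric may differ):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """One prototype per class: the mean feature vector of that class."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def prototype_pseudo_labels(target_feats, prototypes):
    """Assign each target sample to its nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(target_feats[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy example: two well-separated classes in a 2-D feature space.
feats = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels, num_classes=2)

target = np.array([[0.1, 0.2], [4.8, 5.2]])
pseudo = prototype_pseudo_labels(target, protos)  # each sample gets its nearest class
```

Because the prototypes live in the (adapted) target feature space, this assignment can outperform a purely source-trained classifier early in training, matching the observation above.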

Thirdly, we discuss the necessity and effectiveness of the two different types of classifiers in our framework. Table 3 shows the selective target-domain class-wise recognition accuracy on the Office-Home Ar→Cl case produced by the two classifiers in our proposed model, as well as the number of samples in each class from the source and target domains. From the results we notice that for the categories with sufficient well-labeled source samples and balanced target-domain samples for training, one classifier performs better, while for the categories with imbalanced distributions across domains and insufficient labeled source samples, the other classifier consistently performs better. This observation proves that on imbalanced datasets, the two classifiers specialize in different categories with various cross-domain distributions. Moreover, we show the comparison results of MCD [32], SWD [20], and our proposed model on the Office-31 dataset in Table 4. MCD and SWD are two dual-classifier adversarial frameworks for domain adaptation, but both use two classifiers with an identical network structure. We also replace the two distinct classifiers in our proposed model with two identically-structured classifiers and report the results as Ours (same). It is noteworthy that our proposed model achieves the best performance in most cases, as well as the best average accuracy, compared to the same-structure-classifier methods, which proves the effectiveness and necessity of applying two classifiers with distinct architectures.
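For reference, the classifier-discrepancy idea underlying MCD (and, with a sliced Wasserstein distance, SWD) measures how much the two classifiers disagree on target samples; MCD uses the mean absolute difference between the classifiers' probability outputs. A minimal sketch of that discrepancy (not our model's exact objective):

```python
import numpy as np

def classifier_discrepancy(p1, p2):
    """MCD-style discrepancy: mean absolute difference of two probability outputs."""
    return float(np.mean(np.abs(p1 - p2)))

p_a = np.array([[0.9, 0.1], [0.2, 0.8]])
p_b = np.array([[0.9, 0.1], [0.2, 0.8]])
p_c = np.array([[0.1, 0.9], [0.8, 0.2]])

assert classifier_discrepancy(p_a, p_b) == 0.0   # identical classifiers agree
assert classifier_discrepancy(p_a, p_c) > 0.5    # strongly disagreeing classifiers
```

Adversarial training alternates between maximizing this discrepancy (to expose target samples near the boundary) and minimizing it through the feature generator; using two classifiers with distinct architectures, as in our model, encourages the disagreement to reflect genuinely different views of the target structure.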

Fourthly, we visualize the t-SNE embeddings (Fig. 5) of the feature representations produced by the feature generator before and after domain adaptation through our proposed model, in which each category forms a cluster and different colors denote the different domains. Before adaptation, the source and target domains are totally mismatched, while our method shows a promising ability to keep inter-class features separated and intra-class features tightly clustered.

Finally, we analyze the sensitivity of the two hyper-parameters (Fig. 6 (a) and Fig. 6 (b)) on four tasks from the Office-Home dataset (Ar→Cl, Cl→Pr, Pr→Rw, Rw→Ar). Specifically, we vary each hyper-parameter over the range 0.001 to 0.2 while fixing the other at 0.1. From the results, we notice that the accuracy curves are almost flat and stable, which indicates that our proposed model is not sensitive to the value of either hyper-parameter.

5 Conclusion

We presented a novel Adversarial Dual Distinct Classifiers Network (AD^2CN) for unsupervised domain adaptation, which aligns the source and target domain distributions as well as the task-specific category boundaries. Specifically, we designed two classifiers with different architectures to detect target samples outside the support of the source domain by aligning the task-specific decision boundaries obtained by the two classifiers. Meanwhile, a domain-invariant feature generator was proposed to embed the source and target domain data into a shared feature space under the guidance of discriminative cross-domain alignment. We evaluated our proposed model on two cross-domain visual benchmarks and obtained better performance than state-of-the-art methods, proving the effectiveness of our method.

References

  • [1] K. M. Borgwardt, A. Gretton, M. J. Rasch, H. Kriegel, B. Schölkopf, and A. J. Smola (2006) Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 22 (14), pp. e49–e57. Cited by: §1.
  • [2] R. Cai, Z. Li, P. Wei, J. Qiao, K. Zhang, and Z. Hao (2019) Learning disentangled semantic representation for domain adaptation. In IJCAI, pp. 2060–2066. Cited by: §4.1, Table 1.
  • [3] X. Chen, S. Wang, M. Long, and J. Wang (2019) Transferability vs. discriminability: batch spectral penalization for adversarial domain adaptation. In ICML, pp. 1081–1090. Cited by: §2.
  • [4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, pp. 248–255. Cited by: §4.1.
  • [5] Z. Ding, S. Li, M. Shao, and Y. Fu (2018) Graph adaptive knowledge transfer for unsupervised domain adaptation. In ECCV, pp. 37–52. Cited by: §3.2.3.
  • [6] G. French, M. Mackiewicz, and M. Fisher (2018) Self-ensembling for visual domain adaptation. In ICLR, Cited by: §4.1, Table 1.
  • [7] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180–1189. Cited by: §1, §2, §3.1, §4.1, Table 1, Table 2.
  • [8] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky (2016) Domain-adversarial training of neural networks. JMLR 17 (1), pp. 2096–2030. Cited by: §2.
  • [9] M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li (2016) Deep reconstruction-classification networks for unsupervised domain adaptation. In ECCV, pp. 597–613. Cited by: §3.1.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680. Cited by: §2.
  • [11] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola (2007) A kernel method for the two-sample-problem. In NIPS, pp. 513–520. Cited by: §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §4.1, §4.1, Table 1, Table 2.
  • [13] S. Herath, M. Harandi, and F. Porikli (2017) Learning an invariant hilbert space for domain adaptation. In CVPR, pp. 3956–3965. Cited by: §1.
  • [14] C. Hou, Y. H. Tsai, Y. Yeh, and Y. F. Wang (2016) Unsupervised domain adaptation with label and structural consistency. TIP 25 (12), pp. 5552–5562. Cited by: §1, §1.
  • [15] H. Hsu, C. Yao, Y. Tsai, W. Hung, H. Tseng, M. Singh, and M. Yang (2020) Progressive domain adaptation for object detection. In WACV, pp. 749–757. Cited by: §1.
  • [16] J. Iqbal and M. Ali (2020) MLSL: multi-level self-supervised learning for domain adaptation with spatially independent and semantically consistent labeling. In WACV, pp. 1864–1873. Cited by: §1.
  • [17] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. In ICLR. Cited by: §4.1.
  • [18] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105. Cited by: §1.
  • [19] V. K. Kurmi, S. Kumar, and V. P. Namboodiri (2019) Attending to discriminative certainty for domain adaptation. In CVPR, pp. 491–500. Cited by: §4.2.
  • [20] C. Lee, T. Batra, M. H. Baig, and D. Ulbricht (2019) Sliced wasserstein discrepancy for unsupervised domain adaptation. In CVPR, pp. 10285–10295. Cited by: §1, §1, §2, §2, §3.1, §3.2.2, §3.2.2, §4.1, §4.3, Table 4.
  • [21] S. Li, C. H. Liu, B. Xie, L. Su, Z. Ding, and G. Huang (2019) Joint adversarial domain adaptation. In ACM MM, pp. 729–737. Cited by: §1, §1, §2, §2, §3.1.
  • [22] J. Liang, R. He, Z. Sun, and T. Tan (2019) Distant supervised centroid shift: a simple and efficient approach to visual domain adaptation. In CVPR, Cited by: §4.1, Table 1.
  • [23] M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. In ICML, pp. 97–105. Cited by: §2, §4.1, Table 1, Table 2.
  • [24] M. Long, Z. Cao, J. Wang, and M. I. Jordan (2018) Conditional adversarial domain adaptation. In NIPS, pp. 1640–1650. Cited by: §2, §3.3, §4.1, Table 1, Table 2.
  • [25] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu (2013) Transfer feature learning with joint distribution adaptation. In ICCV, pp. 2200–2207. Cited by: §1, §3.2.3.
  • [26] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2016) Unsupervised domain adaptation with residual transfer networks. In NIPS, pp. 136–144. Cited by: §2, §3.3.
  • [27] M. Long, H. Zhu, J. Wang, and M. I. Jordan (2017) Deep transfer learning with joint adaptation networks. In ICML, pp. 2208–2217. Cited by: §2, §4.1, Table 1, Table 2.
  • [28] P. Morerio, R. Volpi, R. Ragonesi, and V. Murino (2020) Generative pseudo-label refinement for unsupervised domain adaptation. In WACV, pp. 3130–3139. Cited by: §1, §1.
  • [29] Z. Pei, Z. Cao, M. Long, and J. Wang (2018) Multi-adversarial domain adaptation. In AAAI, Cited by: §4.1, Table 2.
  • [30] F. Pizzati, R. d. Charette, M. Zaccaria, and P. Cerri (2020) Domain bridge for unpaired image-to-image translation and unsupervised domain adaptation. In WACV, pp. 2990–2998. Cited by: §1.
  • [31] S. Roy, A. Siarohin, E. Sangineto, S. R. Bulo, N. Sebe, and E. Ricci (2019) Unsupervised domain adaptation using feature-whitening and consensus loss. In CVPR, pp. 9471–9480. Cited by: §4.1, Table 1.
  • [32] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada (2018) Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, pp. 3723–3732. Cited by: §1, §1, §2, §2, §3.1, §3.2.2, §3.2.2, §3.3, §4.3, Table 4.
  • [33] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §1.
  • [34] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In NIPS, pp. 4077–4087. Cited by: §3.1.
  • [35] Y. H. Tsai, C. Hou, W. Chen, Y. Yeh, and Y. F. Wang (2016) Domain-constraint transfer coding for imbalanced unsupervised domain adaptation. In AAAI, pp. 3597–3603. Cited by: §1.
  • [36] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In CVPR, pp. 7167–7176. Cited by: §2.
  • [37] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan (2017) Deep hashing network for unsupervised domain adaptation. In CVPR, pp. 5018–5027. Cited by: §1, §4.1.
  • [38] R. Xu, G. Li, J. Yang, and L. Lin (2019) Larger norm more transferable: an adaptive feature norm approach for unsupervised domain adaptation. In ICCV, pp. 1426–1435. Cited by: §2, §4.1, Table 1, Table 2.
  • [39] H. Yan, Y. Ding, P. Li, Q. Wang, Y. Xu, and W. Zuo (2017) Mind the class weight bias: weighted maximum mean discrepancy for unsupervised domain adaptation. In CVPR, pp. 2272–2281. Cited by: §1.
  • [40] G. Yang, H. Xia, M. Ding, and Z. Ding (2020) Bi-directional generation for unsupervised domain adaptation. In AAAI, pp. 6615–6622. Cited by: §4.1, §4.2, Table 1, Table 2.
  • [41] W. Zhang, W. Ouyang, W. Li, and D. Xu (2018) Collaborative and adversarial network for unsupervised domain adaptation. In CVPR, pp. 3801–3809. Cited by: §1, §2, §3.1.
  • [42] Y. Zhang, H. Tang, K. Jia, and M. Tan (2019) Domain-symmetric networks for adversarial domain adaptation. In CVPR, pp. 5031–5040. Cited by: §1, §1, §2, §2, §3.1, §3.3, §4.1, §4.2, Table 1, Table 2.