Deep Neural Networks (DNNs) have achieved promising performance in various multimedia applications with the help of sufficient well-labeled training data, which is not always available and is expensive to collect and annotate (Simonyan and Zisserman, 2014; Yan et al., 2016; He et al., 2016; Ding et al., 2018). Domain Adaptation (DA) has made significant progress in the common, real-world situation where massive amounts of well-labeled training data for the target domain are not accessible (Jiang et al., 2017; Li et al., 2019b; Zhuo et al., 2017; Wang et al., 2018; Yao et al., 2019; Li et al., 2019a; Xia and Ding, 2020). The philosophy of domain adaptation is to transfer the knowledge of a related, well-labeled source domain to the unlabeled target domain by aligning the marginal and conditional distributions and thereby mitigating the distribution disparity across domains. Towards this goal, plenty of DA techniques have been successfully applied to various multimedia tasks such as multimodal learning (Rasiwasia et al., 2010; Shu et al., 2015; Wang et al.), visual object recognition (Li et al., 2019b, 2020b), and text categorization (Blitzer et al., 2006; Dai et al., 2008).
Recent domain adaptation efforts seek to capture domain-invariant yet task-discriminative feature representations in a feature space shared by the two domains through cross-domain distribution alignment. Discrepancy loss is one of the most commonly used strategies to evaluate the cross-domain distribution difference, e.g., maximum mean discrepancy (MMD) (Borgwardt et al., 2006). A number of domain adaptation efforts design various MMD loss functions to align the source and target marginal and conditional distributions by incorporating pseudo labels of the target domain (Long et al., 2015, 2016). Besides, the adversarial loss is another well-explored scheme to eliminate domain shift by training one or more domain discriminators against the feature generator in an adversarial manner (Bousmalis et al., 2017; Tzeng et al., 2015, 2017; Hoffman et al., 2017; Luo et al., 2017). Moreover, the latest DA works jointly consider both domain-wise alignment and task-specific category-level alignment (Saito et al., 2018b; Lee et al., 2019a; Zhang et al., 2019), or propose various reconstruction penalties to capture target-specific structures (Zhang et al., 2018b). However, all conventional domain adaptation solutions assume that the source and target domains have identical label spaces, which is not always satisfied in real life (Saenko et al., 2010).
Partial domain adaptation (PDA) focuses on the common and challenging situation where the source domain label space subsumes the target domain label space (Cao et al., 2018b, a; Zhang et al., 2018a). Along this line, Cao et al. propose Partial Adversarial Domain Adaptation (PADA), which simultaneously eliminates negative transfer by down-weighting the source outlier categories while training the classifier and domain adversary, and promotes cross-domain distribution alignment in the shared label space (Cao et al., 2018b). Cao et al. also present Selective Adversarial Networks (SAN), which incorporate instance-level and category-level weighting mechanisms with multi-discriminator domain adversarial networks to not only down-weight the source outlier classes but also align each target sample to its most relevant classes, promoting positive transfer for each instance (Cao et al., 2018a). On the other hand, Zhang et al. present Importance Weighted Adversarial Nets (IWAN) to alleviate the distraction of source outlier classes by assigning each source sample an importance score obtained from a two-domain-classifier strategy (Zhang et al., 2018a). Similarly, Cao et al. propose the Example Transfer Network (ETN) to quantify the transferability of source samples and evaluate each point's contribution to both the classifier and the domain discriminator (Cao et al., 2019). Unfortunately, even though most of the aforementioned PDA efforts explore re-weighting mechanisms to reduce the negative transfer of outlier source categories, adapting the cross-domain distribution over the whole source and target data and label spaces remains vulnerable to outlier source categories and misclassified samples. Besides, most existing PDA methods explicitly match the source and target distributions by considering only domain-wise adaptation while ignoring class-wise distribution alignment.
In this paper, we propose an Adaptively-Accumulated Knowledge Transfer scheme (AKT) to address partial domain adaptation by simultaneously promoting positive transfer in the shared label space and alleviating the negative transfer caused by outlier source categories. The general idea is to gradually filter out confident, task-relevant target samples and their corresponding categories to optimize both domain-wise distribution adaptation and class-wise distribution alignment. To sum up, the contributions of this paper are highlighted as follows:
First of all, we propose an adaptively-accumulated knowledge transfer strategy to iteratively weigh and filter out confident task-relevant target samples and corresponding categories under the guidance of the source domain data for effective cross-domain alignment.
Secondly, we explore two different types of task-specific classifiers to capture and transfer intrinsic distribution knowledge across domains from various perspectives.
Thirdly, we propose a cross-domain alignment loss function which is able to align the class-level discrimination across domains, and compact the sample-level distribution within the same class.
2. Related Work
2.1. Domain Adaptation
The cross-domain data distribution discrepancy, known as domain shift, is the main challenge of domain adaptation. In recent years, plenty of works have exploited the potential of deep neural networks to capture explanatory attributes and domain-invariant features, which is conducive to mitigating domain shift while transferring underlying knowledge across domains (Bengio et al., 2013; Donahue et al., 2014; Yosinski et al., 2014). Compared to traditional machine-learning-based domain adaptation solutions, introducing deep architectures into domain adaptation dramatically improves the generalization of the frameworks (Hoffman et al., 2014; Oquab et al., 2014). Some researchers integrate high-order statistical measures of the difference between domains, such as maximum mean discrepancy (MMD), into a unified framework to align the data distributions across domains, which successfully eliminates domain shift and achieves promising classification performance on the target domain (Long et al., 2015, 2016). By virtue of generative adversarial techniques, some works introduce a domain discriminator that is trained to distinguish which domain a sample belongs to, optimizing the generator and discriminator in an adversarial manner (Ganin et al., 2016; Tzeng et al., 2015; Li et al., 2019b). Moreover, the latest works rethink the domain adaptation problem from various perspectives and propose dual-classifier-based frameworks that seek to align not only domain-wise data distributions but also class-specific decision boundaries (Saito et al., 2018a; Lee et al., 2019b; Zhang et al., 2019).
2.2. Partial Domain Adaptation
Unfortunately, realistic application scenarios hardly satisfy the standard domain adaptation assumption that the source and target domains share an identical label space. A more common situation is that the source domain subsumes the target domain label space, i.e., the source domain includes samples from additional categories beyond those shared with the target domain. This challenge, named partial domain adaptation (PDA), has attracted substantial attention in transfer learning and inspired many works on the topic. The Selective Adversarial Network (SAN) explores multiple adversarial networks to identify source samples from outlier categories and down-weight their transfer (Cao et al., 2018a). Partial Adversarial Domain Adaptation (PADA) extends SAN and pays more attention to class-level transferability weighting on the source classifier (Cao et al., 2018b). Similarly, Importance Weighted Adversarial Nets (IWAN) treat the sigmoid output of an auxiliary domain classifier as an indicator of the probability that each source sample comes from the target domain (Zhang et al., 2018a). The Example Transfer Network (ETN) further interprets this discriminative information as a transferability quantification of the source samples, through which irrelevant examples from outlier categories are down-weighted for both the task-specific classifier and the domain discriminator (Cao et al., 2019). All these pioneering efforts achieve impressive performance improvements over conventional domain adaptation approaches on PDA tasks.
However, although most existing PDA solutions seek to mitigate the negative transfer caused by outlier source classes by re-weighting samples' importance, they still train and predict over the entire source domain label space, which dilutes the contribution of discriminative information within the categories shared across domains. Besides, some of them treat the predictions on target samples as pseudo labels to align the cross-domain conditional distribution, which can introduce severe classification errors and mislead the optimization of the model, especially at the initial stage of training when the classifier cannot yet handle the differently distributed unlabeled target samples.
Unlike previous efforts, our proposed Adaptively-Accumulated Knowledge Transfer framework (AKT) simultaneously aligns the data distribution in an inter-class center-wise and intra-class sample-wise manner, both within and across domains. Exploiting a prototype classifier together with an adaptive optimization strategy helps eliminate the distraction triggered by misclassified target domain samples.
3. The Proposed Method
3.1. Preliminaries and Motivation
Given a well-labeled source domain $\mathcal{D}_s=\{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ and an unlabeled target domain $\mathcal{D}_t=\{x_j^t\}_{j=1}^{n_t}$, where $x \in \mathbb{R}^d$ is a $d$-dimensional source/target sample and $y_i^s$ is the known label corresponding to the source sample $x_i^s$. $\mathcal{D}_s$ and $\mathcal{D}_t$ are drawn from distributions $p_s$ and $p_t$ respectively, while $p_s \neq p_t$. Since the source domain label space $\mathcal{Y}_s$ subsumes the target domain label space $\mathcal{Y}_t$, i.e., $\mathcal{Y}_t \subset \mathcal{Y}_s$, partial domain adaptation attempts to predict the unlabeled target samples with the relevant portion of the knowledge in the entire well-labeled source domain.
To eliminate the influence of irrelevant source categories, existing partial domain adaptation models mainly design a weighting strategy that selects the relevant source categories for effective cross-domain alignment with a discrepancy loss (Zhang et al., 2018a) or an adversarial loss (Cao et al., 2018a). To mitigate the conditional distribution mismatch across the two domains, most of them rely on pseudo labels of target samples assigned by a source-supervised neural network classifier. Due to the cross-domain distribution gap, such pseudo labels are not reliable, which can further hurt the cross-domain alignment, since the neural network classifier fits the source distribution well but not the target distribution.
To address these issues, we not only detect the irrelevant source categories to eliminate their negative influence but also select the most confident target samples during cross-domain alignment. Thus, our proposed model can adaptively select a subset of target samples that are highly affiliated with the source domain and its corresponding categories to align across domains. Moreover, the prototype classifier (Snell et al., 2017) is adopted to annotate the target samples via source prototypes, since it can capture the intrinsic structure and semantic knowledge across the source and target domains. Exploring a dual-classifier architecture consisting of two different types of classifiers, a prototype classifier and a multilayer perceptron classifier, extends the ability of the proposed model to reveal task-specific knowledge from various perspectives.
3.2. Adaptively-Accumulated Knowledge Transfer
The proposed framework, which is shown in Figure 1, consists of three modules: 1) a domain-invariant feature generator $G$, 2) a fully-connected multilayer perceptron classifier $F$, and 3) a prototype classifier $C$. $G$ takes the source and target data as input and maps them into a shared embedding space. The extracted features are denoted as $Z_s$ and $Z_t$ for the source and target domain, respectively.
3.2.1. Building Diverse Source-Supervised Classifiers
With the embedding features as input, $F$ and $C$ assign labels from different perspectives, denoted as $\hat{y}^F$ and $\hat{y}^C$, respectively. $F$ is a fully-connected multilayer perceptron classifier, while the prototype classifier $C$ measures the similarity between every target sample and each source-domain class center, followed by a Softmax function to assign a probability prediction, that is, $\hat{y}^C = s(G(x), \{\mu_k\})$, where $s(\cdot,\cdot)$ is the similarity measurement function followed by Softmax and $\mu_k$ denotes the class center (prototype) of class $k$.
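The prototype classifier above can be sketched as follows. This is a minimal illustration, assuming class-mean embeddings as prototypes and cosine similarity as the measurement function; the helper names (`class_prototypes`, `prototype_predict`) are ours, not from the paper's implementation.

```python
import numpy as np

def class_prototypes(Z_s, y_s, num_classes):
    """Mean source embedding per class: the prototypes."""
    return np.stack([Z_s[y_s == k].mean(axis=0) for k in range(num_classes)])

def prototype_predict(Z, prototypes):
    """Cosine similarity to every prototype, followed by a Softmax."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    Pn = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = Zn @ Pn.T                                # (n, K) cosine similarities
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # per-sample probabilities
```

Because the prototypes are just class means in the shared embedding space, the classifier itself carries no trainable parameters, which matches the parameter-free property used below.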
In order to maintain the performance on the source domain, we keep the supervision from the source and minimize the cross-entropy loss between the ground truth and the labels predicted by $F$ as:

$$\mathcal{L}_{ce} = -\frac{1}{n_s}\sum_{i=1}^{n_s}\sum_{k=1}^{|\mathcal{Y}_s|} \mathbb{1}[y_i^s = k]\,\log F_k(G(x_i^s)).$$
As $C$ is a parameter-free classifier, we do not need to add source-domain supervision to $C$.
3.2.2. Adaptively Accumulating Cross-Domain Knowledge
Empirical Maximum Mean Discrepancy (MMD) has been verified as a promising technique to minimize the cross-domain marginal distribution difference (Long et al., 2017). Some recent works also adopt pseudo labels for target domain data in order to match the conditional distribution across domains by minimizing the distance between the source and target class-wise embeddings from the same category (Tzeng et al., 2017). However, aligning all target categories with the predicted label information is not effective, since pseudo labels are unreliable, especially under the PDA setting.
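For reference, the empirical MMD with a linear kernel reduces to the squared distance between the two domain means in the embedding space; a minimal sketch (the function name `mmd_linear` is illustrative):

```python
import numpy as np

def mmd_linear(Zs, Zt):
    """Empirical MMD with a linear kernel: squared distance
    between the source and target mean embeddings."""
    d = Zs.mean(axis=0) - Zt.mean(axis=0)
    return float(d @ d)
```

With richer kernels (e.g., Gaussian), MMD compares higher-order moments as well, which is what the multi-kernel variants cited above exploit.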
To alleviate the negative impact of misclassified pseudo labels on target samples, as well as that of the outlier categories in the source label space, we propose the Adaptively-Accumulated Knowledge Transfer strategy to discard target samples with low prediction confidence. That is, only samples in

$$\hat{\mathcal{X}}_t = \big\{x_j^t \mid p_C(\hat{y}_j^t \mid x_j^t) \geq \tau\big\}$$

are accepted to update the cross-domain alignment, where $\hat{y}_j^t$ is the pseudo label of $x_j^t$, $p_C(\hat{y}_j^t \mid x_j^t)$ is the probability confidence from the prototype classifier that sample $x_j^t$ belongs to class $\hat{y}_j^t$, and $\tau$ is the threshold. It is noteworthy that we do not need another hyper-parameter to tune: since the probability confidence measures the similarity between a target sample and the source domain, the model can set $\tau$ adaptively as the average initial probability that the prototype classifier assigns source samples to their ground-truth classes, i.e., $\tau = \frac{1}{n_s}\sum_{i=1}^{n_s} p_C(y_i^s \mid x_i^s)$, where $y_i^s$ is the ground-truth label of source sample $x_i^s$. Only highly-confident target samples enter the cross-domain alignment. In other words, the selected target samples may not cover the whole label space, which is reasonable and acceptable.
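The adaptive selection rule can be sketched as below: the threshold is the mean prototype-classifier confidence of source samples on their ground-truth classes, and a target sample passes only if its top confidence reaches that threshold. A simplified illustration with hypothetical helper names, operating on precomputed probability matrices:

```python
import numpy as np

def adaptive_threshold(P_s, y_s):
    """tau = average probability the prototype classifier assigns to each
    source sample's ground-truth class (no extra hyper-parameter needed)."""
    return float(P_s[np.arange(len(y_s)), y_s].mean())

def select_confident_targets(P_t, tau):
    """Keep target samples whose top prediction confidence reaches tau;
    return their indices and pseudo labels."""
    conf = P_t.max(axis=1)
    pseudo = P_t.argmax(axis=1)
    keep = conf >= tau
    return np.where(keep)[0], pseudo[keep]
```

Because the threshold is derived from the source predictions themselves, the accepted subset grows as the embedding space improves over training, which is the "accumulation" behavior discussed in the ablation study.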
3.2.3. Preserving Inter-class Discrimination
We treat the class-wise embeddings in a different way. Instead of matching the source and target mean embeddings from the same category, we seek to enlarge the distance between the source and target mean embeddings from different classes. Specifically, we adopt the squared Euclidean distance to measure the distribution difference between two class-center embeddings from classes $k, l$ and domains $D_1, D_2$:

$$d\big(\mu_k^{D_1}, \mu_l^{D_2}\big) = \big\|\mu_k^{D_1} - \mu_l^{D_2}\big\|_2^2,$$

where $\mu_k^{D}$ denotes the class center of the embedding features (drawn from $Z_s$ and $Z_t$) of category $k$ in domain $D$.
It is noteworthy that $D_1$ and $D_2$ can be the same, because we also seek to maximize the class-wise distance between different categories within the same domain. On the contrary, $k$ and $l$ are always different. The integrated inter-class discriminative alignment loss includes two parts: (1) alignment within the source/target domain and (2) alignment across domains, as shown in Eq. (4):

$$\mathcal{L}_{inter} = -\beta \sum_{D\in\{s,t\}} \sum_{k \neq l} d\big(\mu_k^{D}, \mu_l^{D}\big) \;-\; \sum_{k \neq l} d\big(\mu_k^{s}, \mu_l^{t}\big), \quad (4)$$

where $\beta$ is a hyper-parameter balancing the contributions of the within-domain and between-domain terms in $\mathcal{L}_{inter}$. Note that the classes $k, l$ range over the whole source label space only when we align the inter-class discriminative distribution within the source domain ($D_1 = D_2 = s$). In the other situations, they range over the categories of the filtered target subset, which may be smaller than the whole source label space, due to the proposed Adaptively-Accumulated Knowledge Transfer strategy that keeps only target samples with high prediction confidence.
3.2.4. Pursuing Intra-class Compactness
Besides maximizing the inter-class distribution distance within/across domains, we also seek to pursue more intra-class compactness. Specifically, we develop an effective loss term that reduces the intra-class variation by minimizing the distance between every two samples belonging to the same category from either domain:

$$\mathcal{L}_{intra}^{(k)} = \frac{1}{n_k}\sum_{z_i, z_j \in \mathcal{Z}_k} \big\|z_i - z_j\big\|_2^2,$$

where $n_k$ is the total number of samples belonging to class $k$ among the source domain and the selected target samples, and $\mathcal{Z}_k$ collects their embeddings. Thus, we further define the total loss over all intra-class sample-wise distances as:

$$\mathcal{L}_{intra} = \alpha \sum_{k=1}^{|\mathcal{Y}_s|} \mathcal{L}_{intra}^{(k)},$$

where $|\mathcal{Y}_s|$ is the number of categories in the source label space. It is noteworthy that for the target domain we still align only those samples selected with high confidence, to reduce the distraction of misclassification, while samples from the source domain are always aligned over the whole label space. $\alpha$ is a hyper-parameter balancing the contribution of $\mathcal{L}_{intra}$.
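The intra-class term can be sketched as an average pairwise squared distance within each class, computed over the source samples plus the confidently selected target samples. An illustrative, non-optimized implementation (the function name is ours):

```python
import numpy as np

def intra_class_loss(Z, y):
    """Average squared distance between every pair of same-class samples.
    Z stacks source embeddings and selected target embeddings; y stacks
    source labels and target pseudo labels."""
    total, pairs = 0.0, 0
    for k in np.unique(y):
        Zk = Z[y == k]
        for i in range(len(Zk)):
            for j in range(i + 1, len(Zk)):
                d = Zk[i] - Zk[j]
                total += d @ d
                pairs += 1
    return float(total / max(pairs, 1))
```

In practice the double loop would be vectorized, but the quadratic pairwise form makes the compactness objective explicit.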
| DAN (Long et al., 2015) | 59.32±0.49 | 61.78±0.56 | 67.64±0.29 | 90.45±0.36 | 74.95±0.67 | 73.90±0.38 | 71.34±0.46 |
| DANN (Ganin et al., 2016) | 73.56±0.15 | 81.53±0.23 | 86.12±0.15 | 98.73±0.20 | 82.78±0.18 | 96.27±0.26 | 86.50±0.20 |
| ADDA (Tzeng et al., 2017) | 75.67±0.17 | 83.41±0.17 | 84.25±0.13 | 99.85±0.12 | 83.62±0.14 | 95.38±0.23 | 87.03±0.16 |
| RTN (Long et al., 2016) | 78.98±0.55 | 77.07±0.49 | 89.46±0.37 | 85.35±0.47 | 89.25±0.39 | 93.22±0.52 | 85.56±0.47 |
| IWAN (Zhang et al., 2018a) | 89.15±0.37 | 90.45±0.36 | 94.26±0.25 | 99.36±0.24 | 95.62±0.29 | 99.32±0.32 | 94.69±0.31 |
| SAN (Cao et al., 2018a) | 90.90±0.45 | 94.27±0.28 | 88.73±0.44 | 99.36±0.12 | 94.15±0.36 | 99.32±0.52 | 94.96±0.36 |
| PADA (Cao et al., 2018b) | 96.54±0.31 | 82.17±0.37 | 95.41±0.33 | 100.0±0.00 | 92.69±0.29 | 99.32±0.45 | 92.69±0.29 |
| DRCN (Li et al., 2020a) | 90.80 | 94.30 | 94.80 | 100.00 | 95.20 | 100.00 | 95.90 |
| ETN (Cao et al., 2019) | 94.52±0.20 | 95.03±0.22 | 94.64±0.24 | 100.0±0.00 | 96.21±0.27 | 100.0±0.00 | 96.73±0.16 |
| DAN (Long et al., 2015) | 58.78±0.43 | 54.76±0.44 | 67.29±0.20 | 92.78±0.28 | 55.42±0.56 | 85.86±0.32 | 69.15±0.37 |
| DANN (Ganin et al., 2016) | 50.85±0.12 | 57.96±0.20 | 62.32±0.12 | 94.27±0.16 | 51.77±0.14 | 95.23±0.24 | 68.73±0.16 |
| ADDA (Tzeng et al., 2017) | 53.28±0.15 | 58.78±0.12 | 63.34±0.08 | 95.36±0.08 | 50.24±0.10 | 94.33±0.18 | 69.22±0.12 |
| RTN (Long et al., 2016) | 69.35±0.42 | 75.43±0.38 | 82.98±0.36 | 99.59±0.32 | 81.45±0.32 | 98.42±0.48 | 84.54±0.38 |
| IWAN (Zhang et al., 2018a) | 82.90±0.31 | 90.95±0.33 | 93.36±0.22 | 88.53±0.16 | 89.57±0.24 | 79.75±0.26 | 87.51±0.25 |
| SAN (Cao et al., 2018a) | 83.39±0.36 | 90.70±0.20 | 91.85±0.35 | 100.0±0.00 | 87.16±0.23 | 99.32±0.45 | 92.07±0.27 |
| PADA (Cao et al., 2018b) | 86.05±0.36 | 81.73±0.34 | 95.26±0.27 | 100.0±0.00 | 93.00±0.24 | 99.42±0.24 | 92.54±0.24 |
| ETN (Cao et al., 2019) | 85.66±0.16 | 89.43±0.17 | 92.28±0.20 | 100.0±0.00 | 95.93±0.23 | 100.0±0.00 | 93.88±0.13 |
3.3. Overall Objective and Optimization
Entropy minimization regularization is adopted to reduce the side effects caused by classifier uncertainty due to the large domain shift and hard-to-transfer samples. Especially during the early training stage, target samples are easily assigned to wrong categories, which may deteriorate the optimization procedure. We therefore employ the entropy minimization regularizer:

$$\mathcal{L}_{ent} = -\frac{1}{n_t}\sum_{j=1}^{n_t}\sum_{k=1}^{|\mathcal{Y}_s|} F_k(G(x_j^t))\,\log F_k(G(x_j^t)),$$

where $|\mathcal{Y}_s|$ is the number of categories in the source label space and $n_t$ is the number of samples from the target domain.
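The entropy regularizer is standard; a minimal sketch over a precomputed matrix of target prediction probabilities (one row per sample):

```python
import numpy as np

def entropy_regularizer(P_t, eps=1e-12):
    """Mean Shannon entropy of the target predictions; minimizing it
    pushes the classifier toward confident target outputs."""
    return float(-(P_t * np.log(P_t + eps)).sum(axis=1).mean())
```

A one-hot prediction contributes (nearly) zero entropy, while a uniform prediction contributes the maximum, so minimizing this term discourages indecisive target outputs.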
To sum up, we propose the overall objective function as:

$$\min_{G,\,F}\; \mathcal{L} = \mathcal{L}_{ce} + \mathcal{L}_{ent} + \mathcal{L}_{inter} + \mathcal{L}_{intra}. \quad (8)$$
The whole framework consists of a feature generator $G$, a multilayer perceptron classifier $F$, and a prototype classifier $C$. As $C$ is non-parametric, only $G$ and $F$ are optimized with the objective in Eq. (8). Specifically, $\mathcal{L}_{ce}$ is calculated on the source domain data, while $\mathcal{L}_{ent}$ is based on the whole target domain. However, $\mathcal{L}_{inter}$ and $\mathcal{L}_{intra}$ are based only on the filtered target data, together with the source data from the same categories as the filtered target samples' pseudo labels.
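To tie the pieces together, one simplified AKT-style step on fixed embeddings might look as follows. The helper structure and the parts we compute here (prototype predictions, adaptive threshold, confident-target selection, entropy term) are our illustrative assumptions, not the paper's exact training loop:

```python
import numpy as np

def softmax(S):
    e = np.exp(S - S.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def akt_iteration(Zs, ys, Zt, num_classes):
    """One (simplified) AKT step on fixed embeddings: prototype-classifier
    predictions, adaptive target selection, and the entropy term."""
    # Prototypes and cosine-similarity prototype-classifier probabilities.
    protos = np.stack([Zs[ys == k].mean(axis=0) for k in range(num_classes)])
    def probs(Z):
        Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
        Pn = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        return softmax(Zn @ Pn.T)
    Ps, Pt = probs(Zs), probs(Zt)
    # Adaptive threshold = mean source confidence on ground-truth classes.
    tau = Ps[np.arange(len(ys)), ys].mean()
    keep = Pt.max(axis=1) >= tau
    pseudo = Pt.argmax(axis=1)[keep]
    # Entropy regularizer over all target samples.
    l_ent = float(-(Pt * np.log(Pt + 1e-12)).sum(axis=1).mean())
    return {"tau": float(tau), "kept": int(keep.sum()),
            "pseudo": pseudo, "l_ent": l_ent}
```

In the full method, the inter-class and intra-class terms from Section 3.2.3 and 3.2.4 would be computed on the `kept` subset and the loss in Eq. (8) back-propagated through $G$ and $F$.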
4.1. Datasets & Implementation Details
Office-31 (Saenko et al., 2010) consists of more than 4,000 images of common office objects from 31 categories. The dataset includes 3 different domains: Amazon, Webcam, and DSLR. Following the protocol of (Cao et al., 2018a), 6 different partial domain adaptation tasks are explored. For each target domain, we select the 10 categories shared between Office-31 and the Caltech-256 (Griffin et al., 2007) dataset, denoted as A10, W10, and D10. The source domain uses the whole domain data space, denoted as A31, W31, and D31.
Office-Home (Venkateswara et al., 2017) is a much larger benchmark containing images of 65 different classes from 4 domains: Ar (Art), Cl (Clipart), Pr (Product), and Rw (RealWorld). Following the existing evaluation settings (Cao et al., 2018b, 2019), we have 12 partial domain adaptation tasks. From each target domain, we only select the first 25 categories in alphabetical order, while the source domain utilizes all 65 classes.
| DAN (Long et al., 2015) | 43.76 | 67.90 | 77.47 | 63.73 | 58.99 | 67.59 | 56.84 | 37.07 | 76.37 | 69.15 | 44.30 | 77.48 | 61.72 |
| DANN (Ganin et al., 2016) | 45.23 | 68.79 | 79.21 | 64.56 | 60.01 | 68.29 | 57.56 | 38.89 | 77.45 | 70.28 | 45.23 | 78.32 | 62.82 |
| ADDA (Tzeng et al., 2017) | 45.23 | 68.79 | 79.21 | 64.56 | 60.01 | 68.29 | 57.56 | 38.89 | 77.45 | 70.28 | 45.23 | 78.32 | 62.82 |
| RTN (Long et al., 2016) | 49.31 | 57.70 | 80.07 | 63.54 | 63.47 | 73.38 | 65.11 | 41.73 | 75.32 | 63.18 | 43.57 | 80.50 | 63.07 |
| IWAN (Zhang et al., 2018a) | 53.94 | 54.45 | 78.12 | 61.31 | 47.95 | 63.32 | 54.17 | 52.02 | 81.28 | 76.46 | 56.75 | 82.90 | 63.56 |
| SAN (Cao et al., 2018a) | 44.42 | 68.68 | 74.60 | 67.49 | 64.99 | 77.80 | 59.78 | 44.72 | 80.07 | 72.18 | 50.21 | 78.66 | 65.30 |
| PADA (Cao et al., 2018b) | 51.95 | 67.00 | 78.74 | 52.16 | 53.78 | 59.03 | 52.61 | 43.22 | 78.79 | 73.73 | 56.60 | 77.09 | 62.06 |
| DRCN (Li et al., 2020a) | 54.00 | 76.40 | 83.00 | 62.10 | 64.50 | 71.00 | 70.80 | 49.80 | 80.50 | 77.50 | 59.10 | 79.90 | 69.00 |
| ETN (Cao et al., 2019) | 59.24 | 77.03 | 79.54 | 62.92 | 65.73 | 75.01 | 68.29 | 55.37 | 84.37 | 75.72 | 57.66 | 84.54 | 70.45 |
Comparisons: We compare the performance of our proposed method with several domain adaptation and state-of-the-art partial DA methods: Deep Adaptation Network (DAN) (Long et al., 2015), Adversarial Discriminative Domain Adaptation (ADDA) (Tzeng et al., 2017), Residual Transfer Network (RTN) (Long et al., 2016), Importance Weighted Adversarial Nets (IWAN) (Zhang et al., 2018a), Selective Adversarial Network (SAN) (Cao et al., 2018a), Partial Adversarial Domain Adaptation (PADA) (Cao et al., 2018b), Example Transfer Network (ETN) (Cao et al., 2019), and Adaptive Feature Norm (AFN) (Xu et al., 2019). Specifically, DAN applies multi-kernel MMD to match the source and target distributions and learn transferable features across domains. ADDA combines adversarial training with untied weight sharing to generate domain-invariant features. RTN jointly adapts feature distributions as well as the source and target classifiers via a deep residual learning framework. IWAN and SAN select or re-weight outlier categories in the source label space to alleviate the negative influence of classes absent from the target label space. PADA, ETN, and AFN are state-of-the-art partial domain adaptation models. By down-weighting source data from outlier categories, PADA reduces the negative transfer those classes cause. ETN proposes a progressive weighting scheme to quantify the transferability of source examples. AFN proposes a parameter-free approach that progressively adapts the source and target feature norms to a large range of values, which results in significant transfer gains.
Implementation Details: For each source-target pair, we finetune ImageNet-pretrained convolutional neural networks on the source domain and remove the last fully-connected layer to obtain the backbone network. The backbone output of all source and target data is then fed into two dense layers with a 1,024-dimensional hidden output, followed by ReLU activation and 0.1 dropout probability, as the feature extractor. We adopt the ResNet-50 network (He et al., 2016) as the backbone on Office-Home and Office-31, and also explore the VGG network (Simonyan and Zisserman, 2014) as the backbone on Office-31. The output dimension of the generator $G$, i.e., of the embedding features, is 512. The multilayer perceptron classifier $F$ is a two-layer fully-connected neural network whose hidden layer output dimension is 512 and whose output size is the number of source categories. For the prototype classifier $C$, we take cosine similarity as the measurement function, and we directly take the source-domain class centers as the prototypes; since the feature generator is updated every epoch, the prototypes are updated along with training as well. All experiments are implemented in PyTorch. We train the model for 100 epochs with the Adam optimizer at a learning rate of 0.0001, and report the last-epoch results, rounded to two decimal places. The hyper-parameters $\alpha$ and $\beta$ are set per dataset (Office-31 and Office-Home); we analyze their sensitivity in Section 4.3.
4.2. Comparison Results
In this section, we comprehensively evaluate our proposed model against several baselines on the Office-31 and Office-Home benchmarks in terms of target-sample label prediction accuracy to manifest the effectiveness of our model.
Specifically, we observe that PDA methods (IWAN, SAN, PADA, DRCN, and ETN) achieve better performance than standard DA efforts such as DAN, DANN, ADDA, and RTN. ETN achieves much greater improvement because it introduces a method to quantify the source samples transferability. Our proposed method can still outperform all compared baselines on most partial domain adaptation tasks and obtain the best average performance.
Table 1 reports the classification accuracy on the Office-31 dataset obtained by all baselines and our model with ResNet-50 as the feature extractor backbone. It is noteworthy that the prototype classifier $C$ consistently generates better performance than the conventional multilayer perceptron classifier $F$. From the results, the prototype classifier achieves the best performance on 5 out of 6 tasks compared to all the other baselines. In particular, it obtains the best average classification accuracy, and reaches the highest accuracy on W31→D10 and D31→W10.
Moreover, we also explore the VGG network as the feature extractor backbone on the Office-31 dataset and report the results in Table 2. Our proposed model achieves the best average performance among all baselines. Specifically, compared to the best baseline on task A31→W10, PADA, $F$ and $C$ improve the accuracy by over 2% to 88.44% and over 4% to 90.48%, respectively. It is noteworthy that the improvement with the VGG backbone is more significant than with ResNet-50, because ResNet-50 is a more advanced deep convolutional neural network that already generates more task-specific discriminative features than VGG.
Experimental results on the Office-Home dataset are reported in Table 3. Both $F$ and $C$ obtain better performance than the other baselines, with significant improvements in average classification accuracy. Moreover, our proposed method achieves a considerable accuracy increase over the state-of-the-art baseline on several tasks, e.g., Ar→Pr and Cl→Pr.
4.3. Ablation Analysis
First, we visualize the generator output features before and after the domain adaptation process on task Ar→Cl of Office-Home and task A→W of Office-31 in Fig. 2 (a) and (b). From the results, we observe that our proposed method aligns the source and target samples with respect to categories and tightens the compactness of the embedding features around each class center.
Secondly, we evaluate the contribution of every loss term in Eq. (8) by removing each specific term while keeping the others as in the original framework. The results are shown in Fig. 3. It is noteworthy that both $\mathcal{L}_{inter}$ and $\mathcal{L}_{intra}$ make crucial contributions to the PDA tasks, because these two terms align the data distribution inter-class and intra-class. $\mathcal{L}_{ce}$ keeps the model performance on the source domain stable; it has limited contribution to the PDA process but cannot be ignored. $\mathcal{L}_{ent}$ helps mitigate the negative transfer influence of the multilayer perceptron classifier $F$, especially at the beginning of the training stage.
Then, we monitor the training and optimization process of our model. Fig. 4 illustrates the adaptively-accumulated knowledge transfer process. We choose case A31→W10 of the Office-31 dataset and show how the set of high-prediction-confidence categories used to align the data distribution across domains changes over training. In the beginning, high-confidence target samples spread over only 6 classes, but more and more categories become involved, and the number finally reaches 11, while the total number of target domain categories is 10. Although one incorrect outlier class is involved, the adaptive optimization strategy still significantly narrows the range of the target domain label space.
Moreover, we run several ablation experiments on the Office-Home dataset with different training settings to explore the contributions of our proposed model and optimization strategy; the results are reported in Table 4. "No Adaptive" denotes the results without the adaptively-accumulated knowledge transfer and target sample filtering process. Compared to the complete AKT results, these numbers show how important the adaptive accumulation strategy is. "$F$ Guide" denotes the results when we use the probabilistic prediction of $F$, instead of $C$, to filter out high-confidence target samples for domain alignment; the threshold is decided in the same way as for $C$. The results prove that the multilayer perceptron classifier and the prototype classifier have different classification philosophies, and that using $C$'s probability prediction to accumulate knowledge boosts the performance significantly. Finally, we examine the motivation for adopting two different types of classifiers by setting both $F$ and $C$ to multilayer perceptron classifiers of the same structure, with all other settings and training strategies unchanged; the results are reported as "Same $F$". For some cases, two identical multilayer perceptron classifiers obtain slightly better performance than our model, e.g., Ar→Rw and Cl→Rw. However, on most cases and on average, our model with two different types of classifiers performs considerably better. All the results with different training strategies in Table 4 demonstrate the effectiveness and the motivation of our model and optimization strategies.
We present the parameter sensitivity analysis in Fig. 5. We vary $\alpha$ from 0.0001 to 0.05 and $\beta$ from 1 to 3 on four cases of the Office-Home dataset (Ar→Pr, Ar→Rw, Pr→Ar, Rw→Cl) to analyze whether the model is sensitive to changes in the hyper-parameters. The results in Fig. 5 show that our model is stable across cases with respect to the two parameters $\alpha$ and $\beta$.
Finally, we select several representative target samples from the Pr→Rw task on the Office-Home dataset and show the predictions of the two classifiers in Fig. 6. We notice that some cases can be handled by only one of the two classifiers, and some can be predicted correctly by neither, which demonstrates the motivation of combining two classifiers of different types in our proposed model. Besides, we perform an image retrieval task by giving specific labels to retrieve target samples. The 5 target images with the highest prediction confidence and the 5 with the lowest among the retrieved images are shown in Fig. 7. The different samples retrieved by the two classifiers demonstrate the motivation of integrating various classifiers.
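The confidence-ranked retrieval behind Fig. 7 can be sketched as follows, assuming each classifier outputs per-class probabilities for the target samples; `retrieve_by_label` is a hypothetical helper written for illustration, not the paper's code:

```python
import numpy as np

def retrieve_by_label(probs, label, k=5):
    """For a query class label, rank the target samples predicted as that
    class by their confidence and return the indices of the k most
    confident and k least confident retrievals."""
    conf = probs[:, label]                        # confidence for the query class
    predicted = probs.argmax(axis=1) == label     # samples predicted as `label`
    candidates = np.flatnonzero(predicted)
    order = candidates[np.argsort(-conf[candidates])]  # most confident first
    return order[:k], order[-k:][::-1]            # top-k, bottom-k (least first)
```

Running this once per classifier and comparing the two ranked lists shows directly where the classifiers disagree, which is the comparison visualized in Fig. 7.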
This paper presented a novel domain-invariant feature learning framework for partial domain adaptation. With the help of the Adaptively-Accumulated Knowledge Transfer optimization strategy, high-confidence target samples and task-relevant source categories are selected adaptively. By maximizing the inter-class center-wise discrepancy and minimizing the intra-class sample-wise compactness, more domain-invariant and task-specific discriminative representations are extracted. Extensive experiments on several partial domain adaptation benchmarks demonstrate the superiority of our algorithm over previous works.
- Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828. Cited by: §2.1.
- Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pp. 120–128. Cited by: §1.
- Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 22 (14), pp. e49–e57. Cited by: §1.
- Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3722–3731. Cited by: §1.
- Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2724–2732. Cited by: §1, §2.2, §3.1, Table 1, Table 2, §4.1, Table 3.
- Partial adversarial domain adaptation. In Proceedings of the European Conference on Computer Vision, pp. 135–150. Cited by: §1, §2.2, Table 1, Table 2, §4.1, Table 3.
- Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2985–2994. Cited by: §1, §2.2, Table 1, Table 2, §4.1, Table 3.
- Translated learning: transfer learning across different feature spaces. In Advances in neural information processing systems, pp. 353–360. Cited by: §1.
- Robust multi-view representation: a unified perspective from multi-view learning to domain adaption.. In IJCAI, pp. 5434–5440. Cited by: §1.
- Decaf: a deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pp. 647–655. Cited by: §2.1.
- Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030. Cited by: §2.1, Table 1, Table 2, Table 3.
- Caltech-256 object category dataset. Cited by: §4.1.
- Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §4.1.
- LSDA: large scale detection through adaptation. In Advances in neural information processing systems, pp. 3536–3544. Cited by: §2.1.
- Cycada: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213. Cited by: §1.
- Deep low-rank sparse collective factorization for cross-domain recommendation. In Proceedings of the 25th ACM international conference on Multimedia, pp. 163–171. Cited by: §1.
- Sliced wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 10285–10295. Cited by: §1, §2.1.
- Cycle-consistent conditional adversarial transfer networks. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 747–755. Cited by: §1.
- Deep residual correction network for partial domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1. Cited by: Table 1, Table 3.
- Joint adversarial domain adaptation. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 729–737. Cited by: §1, §2.1.
- Domain conditioned adaptation network. In Thirty-Fourth AAAI Conference on Artificial Intelligence. Cited by: §1.
- Learning transferable features with deep adaptation networks. In International conference on machine learning, pp. 97–105. Cited by: §1, §2.1, Table 1, Table 2, §4.1, Table 3.
- Unsupervised domain adaptation with residual transfer networks. In Advances in neural information processing systems, pp. 136–144. Cited by: §1, §2.1, Table 1, §4.1, Table 3.
- Deep transfer learning with joint adaptation networks. In International conference on machine learning, pp. 2208–2217. Cited by: §3.2.2.
- Label efficient learning of transferable representations across domains and tasks. In Advances in neural information processing systems, pp. 165–177. Cited by: §1.
- Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717–1724. Cited by: §2.1.
- A new approach to cross-modal multimedia retrieval. In Proceedings of the 18th ACM international conference on Multimedia, pp. 251–260. Cited by: §1.
- Adapting visual category models to new domains. In Proceedings of the European Conference on Computer Vision, pp. 213–226. Cited by: §1, §4.1.
- Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3723–3732. Cited by: §1, §2.1.
- Weakly-shared deep transfer networks for heterogeneous-domain knowledge propagation. In Proceedings of the 23rd ACM international conference on Multimedia, pp. 35–44. Cited by: §1.
- Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §1, §4.1.
- Prototypical networks for few-shot learning. In Advances in neural information processing systems, pp. 4077–4087. Cited by: §3.1.
- Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4068–4076. Cited by: §1, §2.1.
- Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, Vol. 1, pp. 4. Cited by: §1, §3.2.2, Table 1, Table 2, §4.1, Table 3.
- Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5018–5027. Cited by: §4.1.
- Visual domain adaptation with manifold embedded distribution alignment. In Proceedings of the 26th ACM international conference on Multimedia, pp. 402–410. Cited by: §1.
- EV-action: electromyography-vision multi-modal action dataset. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pp. 129–136. Cited by: §1.
- Structure preserving generative cross-domain learning. In Proceedings of the IEEE conference on computer vision and pattern recognition. Cited by: §1.
- Larger norm more transferable: an adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1426–1435. Cited by: §4.1.
- Image classification by cross-media active learning with privileged information. IEEE Transactions on Multimedia 18 (12), pp. 2494–2502. Cited by: §1.
- Heterogeneous domain adaptation via soft transfer network. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 1578–1586. Cited by: §1.
- How transferable are features in deep neural networks?. In Advances in neural information processing systems, pp. 3320–3328. Cited by: §2.1.
- Importance weighted adversarial nets for partial domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8156–8164. Cited by: §1, §2.2, §3.1, Table 1, Table 2, §4.1, Table 3.
- Collaborative and adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3801–3809. Cited by: §1.
- Domain-symmetric networks for adversarial domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5031–5040. Cited by: §1, §2.1.
- Deep unsupervised convolutional domain adaptation. In Proceedings of the 25th ACM international conference on Multimedia, pp. 261–269. Cited by: §1.