I Introduction
In many real-world applications, data are collected under different conditions and thus rarely satisfy the identical-distribution assumption that underlies statistical learning theory. As a result, a classifier trained on a well-annotated source domain cannot be applied directly to a related but different target domain. To surmount this issue, considerable effort has been devoted to domain adaptation, an important branch of transfer learning
[1]. To date, domain adaptation has become a fundamental technique for cross-domain knowledge discovery and has been applied to various tasks, such as object recognition [2, 3, 4, 5] and person re-identification [6]. The major issue in domain adaptation is how to reduce the difference between the source and target distributions [7]. Most recent works seek a common feature space in which the distribution difference across domains is minimized [8, 9, 10, 11, 12]. To this end, various metrics have been proposed to measure the distribution discrepancy, among which the Maximum Mean Discrepancy (MMD) [13]
is probably the most widely used. The typical procedure of MMD-based methods includes three key steps in each iteration: 1) projecting the original source and target data into a common feature space; 2) training a standard supervised learning algorithm on the projected source domain; 3) assigning pseudo-labels to target data with the source classifier. This procedure predicts labels for target samples independently, ignoring the data distribution structure of the two domains, which can be crucial to the pseudo-label assignment of target data.
To illustrate this more explicitly, a toy example is shown in Fig. 1
. The red line is a discriminant hyperplane trained on source data in the projected feature space. As shown, the hyperplane tends to misclassify the target data due to the distribution discrepancy between the two domains. In such a case, the misclassified samples will seriously mislead the learning of the common feature space in subsequent iterations, ultimately causing a significant performance drop. However, from the perspective of the sample distributions in the two domains, the class centroids in the target domain can be readily matched to their corresponding class centroids in the source domain. Motivated by this insight, instead of labeling target samples individually, we introduce a novel approach that assigns pseudo-labels to target samples under the guidance of the class centroids of the two domains, so that the data distribution structure of both source and target domains is emphasized.
To achieve this goal, the first key issue is how to determine the class centroids of the target domain when labels are absent. For this problem, we resort to the classical k-means clustering algorithm [14], which is widely used to partition unlabeled data into groups in which similar samples are represented by a specific cluster prototype. Intuitively, the cluster prototypes obtained by the k-means algorithm can be regarded as a good approximation of the target-domain class centroids. After obtaining the cluster prototypes of the target data, the distribution discrepancy minimization problem in domain adaptation can be reformulated as a class centroid matching problem, which can be solved efficiently by nearest neighbor search.
Clearly, in the process of cluster prototype learning on target data, the quality of the cluster prototypes is vital to the performance of our approach. It has been shown that clustering performance can be significantly enhanced if the local manifold structure is exploited [15, 16]. Nevertheless, most existing manifold learning methods depend heavily on a predefined similarity matrix built in the original feature space [17, 18], and thus may fail to capture the inherent local structure of high-dimensional data due to the curse of dimensionality. To tackle this problem, inspired by the recently proposed adaptive neighbors learning method [19], we introduce a local structure self-learning strategy into our proposal. Specifically, we learn the data similarity matrix according to the local connectivity in the projected low-dimensional feature space rather than the original high-dimensional space, so that the intrinsic local manifold structure of the target data is captured adaptively.

Based on the above analysis, we propose a novel domain adaptation method that exploits the data distribution structure by joint class Centroid Matching and local Manifold Self-learning (CMMS). It is noteworthy that the need to tackle the semi-supervised domain adaptation (SDA) problem is growing, as a few labeled target samples may be available in practice [20, 21, 22, 23, 24]. While unsupervised domain adaptation (UDA) methods are well established, most of them cannot be naturally applied to the semi-supervised scenario. Notably, the proposed CMMS can be extended to SDA, including both homogeneous and heterogeneous settings, in a direct but elegant way. The flowchart of the proposed CMMS is shown in Fig. 2. The main contributions of this paper are summarized as follows:

We propose a novel domain adaptation method called CMMS, which thoroughly explores the structure information of the data distribution via joint class centroid matching and local manifold self-learning.

We present an efficient optimization algorithm to solve the objective function of our proposal, with a theoretical convergence guarantee.

In addition to unsupervised domain adaptation, we further extend our approach to the semi-supervised scenario, including both homogeneous and heterogeneous settings.

We conduct an extensive evaluation of our method on five benchmark datasets, which validates its superior performance in both unsupervised and semi-supervised settings.
The rest of this paper is organized as follows. Section II reviews related literature. Section III presents our proposed method, the optimization algorithm, and the convergence and complexity analysis. We describe our semi-supervised extension in Section IV. Extensive experimental results are reported in Section V. Finally, we conclude this paper in Section VI.
II Related Work
In this section, we review previous works closely related to this paper. First, we briefly review unsupervised domain adaptation methods. Next, related studies on semi-supervised domain adaptation are reviewed. Finally, we introduce some local manifold learning techniques.
II-A Unsupervised Domain Adaptation
Unsupervised domain adaptation handles the scenario where labeled samples are available only from the source domain and the distributions of the source and target domains differ. Over the past decades, numerous methods have been proposed to overcome the distribution discrepancy.
Existing UDA methods can be classified into: 1) instance reweighting [25, 26], 2) classifier adaptation [7, 27], and 3) feature adaptation [8, 9, 10, 11]. We refer interested readers to [28] for an excellent survey. Our proposal falls into the third category, feature adaptation, which addresses domain shift by either searching for intermediate subspaces to achieve domain transfer [10, 11] or learning a common feature space where the source and target domains have similar distributions [8, 9]. In this paper, we focus on the latter line. Among existing works, TCA [8] is a pioneering approach that learns a transformation matrix to align the marginal distributions of the two domains via MMD. Later, JDA [9] additionally considers conditional distribution alignment by forcing the class means to be close to each other. Subsequent works further exploit discriminative information to improve classification performance. For instance, Li et al. [29] utilize the discriminative information of the source and target domains by encouraging intra-class compactness and inter-class dispersion. Liang et al. [30] achieve this goal by promoting class clustering. Despite their promising performance, all the above methods classify target samples independently, which may cause misclassification since the structure information of the data distribution is ignored.
To tackle this issue, several recent works attempt to exploit the data distribution structure via clustering. For example, Liang et al. [31] propose to seek a subspace where the target centroids are forced to approach those of the source domain. Inspired by the fact that target samples are well clustered in the deep feature space, Wang et al. [32] propose a selective pseudo-labeling approach based on structured prediction. Note that the basic framework of our proposal is completely different from theirs. Among these methods, the recently proposed SPL [32] is the most relevant to ours. Nevertheless, our proposal differs from it significantly. First, subspace learning and clustering structure discovery are treated as two separate steps in SPL, so the projection matrix may not be optimal for clustering. Second, SPL ignores the local manifold structure, which is crucial for exploring the target data structure. By contrast, we integrate projection matrix learning, k-means clustering in the projected space, class centroid matching, and local manifold structure self-learning for target data into a unified optimization objective, so that the data distribution structure can be exploited more thoroughly.

II-B Semi-supervised Domain Adaptation
Unlike unsupervised domain adaptation, where no target labels are available, a more common scenario in practice is that the target domain contains a few labeled samples. This scenario leads to a promising research direction referred to as semi-supervised domain adaptation.
According to the properties of the sample features, SDA algorithms are developed in two different settings: 1) the homogeneous setting, where the source and target data are sampled from the same feature space; and 2) the heterogeneous setting, where the source and target data often have different feature dimensions. In the homogeneous setting, the labeled target samples are used in various ways. For example, Hoffman et al. [20] jointly learn the transformation matrix and the classifier parameters, forcing source and target samples with identical labels to have high similarity. Similarly, Herath et al. [21] learn the structure of a Hilbert space to reduce the dissimilarity between labeled samples and further match the source and target domains via second-order statistics. Recently, based on the Fredholm integral, Wang et al. [22] propose to learn a cross-domain kernel classifier that classifies the labeled target data correctly using the square loss or hinge loss. In the heterogeneous setting, relieving the feature discrepancy and reducing the distribution divergence are two inevitable issues [33]. For the first issue, one remarkably simple approach [23] is to use the original features or zeros to augment each transformed sample to the same size. Another natural approach [24] is to learn two projection matrices, one per domain, to derive a domain-invariant feature subspace. Recently, after employing two matrices to project the source and target data into a common feature space, Li et al. [33] use a shared codebook to match the new feature representations on the same bases. For the second issue, one popular solution is to minimize the MMD distance between the source and target domains [24, 33]. Additionally, Tsai et al. [34] propose a representative landmark selection approach, which is similar to instance reweighting in the UDA scenario. When a limited number of labeled target samples are available, manifold regularization, an effective strategy for semi-supervised learning, has also been employed by several previous works [35, 36].

In contrast to these SDA methods, our semi-supervised extension is quite simple and intuitive. Specifically, the labeled target data are used to improve the cluster prototype learning of the unlabeled target data. Besides, connections between the labeled and unlabeled target data are built, which is a common strategy for developing semi-supervised models. Unlike the homogeneous setting, where a unified projection is learned, we learn two projection matrices in the heterogeneous setting as in [24]. Notably, the resulting optimization problems in the two settings share the same standard form and can be solved by the same algorithm as in the UDA scenario with only very minor modifications.
II-C Local Manifold Learning
The goal of local manifold learning is to capture the underlying local manifold structure of the given data in the original high-dimensional space and preserve it in a low-dimensional embedding. Generally, local manifold learning methods contain three main steps: 1) selecting neighbors; 2) computing an affinity matrix; 3) calculating the low-dimensional embedding [37]. Locally linear embedding [17] and Laplacian eigenmaps [18] are two typical methods. In locally linear embedding, the local manifold structure is captured by linearly reconstructing each sample from its neighbors in the original space, and the reconstruction coefficients are preserved in the low-dimensional space. In Laplacian eigenmaps, the adjacency matrix of the given data is computed in the original feature space using a Gaussian function. However, capturing the local manifold structure with predefined pairwise distances under a heat kernel yields a relatively weak representation, since the properties of local neighborhoods are ignored [37]. Recently, to learn a more reliable adjacency matrix, Nie et al. [19] propose to assign the neighbors of each sample adaptively based on the Euclidean distances in the low-dimensional space. This strategy has been widely utilized in clustering [38, 39] and feature representation learning [40].

In domain adaptation, several works have borrowed the advantages of local manifold learning. For example, Long et al. [7] and Wang et al. [27] employ manifold regularization to maintain the manifold consistency underlying the marginal distributions of the two domains. Hou et al. [41] and Li et al. [42] use label propagation to predict target labels. However, they all compute the adjacency matrix in the original high-dimensional space with a predefined distance measure, which is unreliable due to the curse of dimensionality. By contrast, our proposal captures and employs the inherent local manifold structure of the target data adaptively, thus leading to superior performance.
III Proposed Method
In this section, we first introduce the notations and basic concepts used throughout this paper. Then, the details of our approach are described. Next, an efficient algorithm is designed to solve the optimization problem of our proposal. Finally, the convergence and complexity analysis of the optimization algorithm are given.
III-A Notations
A domain consists of a feature space and a marginal probability distribution over it. For a specific domain, a task consists of a label space and a labeling function [1]. For simplicity, we use subscripts s and t to denote the source and target domains, respectively.
We denote the source domain data as , where is a source sample and is the corresponding label. Similarly, we denote the target domain data as , where . For clarity, we show the key notations used in this paper and the corresponding descriptions in Table I.
Notation  Description 

source/target original data  
number of source/target samples  
projection matrix  
target cluster centroids  
target label matrix  
target adjacency matrix  
centering matrix  
identity matrix with dimension 

dimension of original features  
dimension of projected features  
number of shared classes  
number of source samples in class  
a matrix of size with all elements as  
a column vector of size with all elements as 
III-B Problem Formulation
The core idea of our CMMS is to emphasize the data distribution structure by class centroid matching of the two domains and local manifold structure self-learning for the target data. The overall framework of CMMS can be stated as the following formula:
(1) 
The first term matches the class centroids. The second is the clustering term for the target data in the projected space. The third captures the data structure information, and the last is a regularization term to avoid overfitting. Hyperparameters are employed to balance the influence of the different terms. Next, we introduce these terms in detail.
III-B1 Clustering for Target Data
In our CMMS, we borrow the idea of clustering to obtain cluster prototypes, which can be regarded as pseudo class centroids. In this way, the sample distribution structure of the target data can be acquired. Various existing clustering algorithms are candidates for this purpose; without loss of generality and for simplicity, we adopt the classical k-means algorithm to obtain the cluster prototypes in this paper. Thus, we have the following formula:
(2) 
where is the projection matrix, is the cluster centroids of target data, is the cluster indicator matrix of target data which is defined as if the cluster label of is , and otherwise.
III-B2 Class Centroid Matching of Two Domains
Once the cluster prototypes of the target data are obtained, we can reformulate the distribution discrepancy minimization problem in domain adaptation as a class centroid matching problem. Note that the class centroids of the source data can be computed exactly as the mean of the sample features within each class. In this paper, we solve the class centroid matching problem by nearest neighbor search, since it is simple and efficient. Specifically, we search for the nearest source class centroid for each target cluster centroid and minimize the sum of distances over all matched centroid pairs. The class centroid matching of the two domains is thus formulated as:
(3) 
where is a constant matrix used to calculate the class centroids of source data in the projected space with each element if , and otherwise.
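The matching step can be sketched as a plain nearest-neighbor search between the two sets of centroids (the helper name `match_centroids` is our own; the paper's Eq. (3) folds this search into the joint objective rather than running it standalone):

```python
import numpy as np

def match_centroids(src_centroids, tgt_centroids):
    """For each target cluster centroid, find the nearest source class
    centroid and accumulate the matching distance (the quantity that
    the centroid-matching term seeks to minimize)."""
    d = np.linalg.norm(tgt_centroids[:, None, :] - src_centroids[None, :, :], axis=2)
    match = d.argmin(axis=1)                        # nearest source centroid per target centroid
    total = d[np.arange(len(tgt_centroids)), match].sum()
    return match, total

src = np.array([[0.0, 0.0], [10.0, 10.0]])   # source class centroids
tgt = np.array([[0.5, 0.0], [9.0, 10.0]])    # target cluster centroids
match, total = match_centroids(src, tgt)
```

Here each target centroid pairs with the source centroid of the same underlying class, and the summed distance is what shrinks as the projection aligns the domains.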
III-B3 Local Manifold Structure Self-learning for Target Data
In CMMS, the cluster prototypes of the target samples approximate their corresponding class centroids, so the quality of the cluster prototypes plays an important role in the final performance. Existing works have shown that clustering performance can be significantly improved by exploiting the local manifold structure. Nevertheless, most of them depend heavily on an adjacency matrix predefined in the original feature space, and thus fail to capture the inherent local manifold structure of high-dimensional data due to the curse of dimensionality. To address this issue, inspired by the recent work [19], we introduce a local manifold self-learning strategy into CMMS. Instead of predefining the adjacency matrix in the original high-dimensional space, we adaptively learn the data similarity according to the local connectivity in the projected low-dimensional space, so that the intrinsic local manifold structure of the target data is captured. The local manifold self-learning is formulated as follows:
(4) 
where the first variable is the adjacency matrix of the target domain and the second is a hyperparameter. The corresponding graph Laplacian matrix is computed as the difference between the diagonal degree matrix, whose entries are the row sums of the adjacency matrix, and the adjacency matrix itself.
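A graph Laplacian of this kind can be sketched in NumPy as follows (the symmetrization step is our own conventional choice for a possibly asymmetric learned adjacency, not necessarily the paper's exact construction):

```python
import numpy as np

def graph_laplacian(S):
    """Graph Laplacian L = D - W, where W symmetrizes the learned
    adjacency S and D is the diagonal degree matrix of W."""
    W = (S + S.T) / 2.0          # symmetrize the learned adjacency
    D = np.diag(W.sum(axis=1))   # degree matrix
    return D - W

S = np.array([[0.0, 0.7, 0.3],
              [0.6, 0.0, 0.4],
              [0.5, 0.5, 0.0]])
L = graph_laplacian(S)
```

By construction the rows of L sum to zero and L is symmetric positive semi-definite, which is what makes the trace-form manifold regularizer well behaved.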
The above descriptions highlight the main components of CMMS. Intuitively, a reasonable hypothesis for the source data is that samples in the same class should be as close as possible in the projected space, so that the discriminative structure of the source domain is preserved. As a simple but effective trick, inspired by [29], we formulate this idea as follows:
(5)  
where is the trace operator and
The coefficient is used to remove the effects of different class sizes [43].
For simplicity, we denote and . By combining Eq.(4) and Eq.(5), we obtain a general term which can capture the diverse structure information of both source and target data:
(6) 
Besides, to avoid overfitting and improve the generalization capacity, we further add a norm regularization term on the projection matrix:
(7) 
So far, by combining Eq.(2), (3), (6) and (7), we arrive at our final CMMS formulation:
(8) 
where the identity matrix and the centering matrix are as defined in Table I. The first constraint in (8) is inspired by principal component analysis and aims to maximize the variance of the projected data [9]. For simplicity of presentation, we reformulate the objective function in (8) as the following standard form:
(9)
where , , , and .
III-C Optimization Procedure
According to the objective function of CMMS in Eq.(9), four variables need to be optimized. Since the problem is not jointly convex in all variables, we update each of them alternately while keeping the others fixed. Each subproblem is solved as follows:
1. subproblem: When , and are fixed, the optimization problem (9) becomes:
(10) 
By setting the derivative of (10) with respect to as 0, we obtain:
(11) 
2. subproblem: Substituting Eq.(11) into Eq.(9) to replace , we can get the following subproblem:
(12) 
where the coefficient matrix follows from the substitution above. This problem can be transformed into a generalized eigenvalue problem as follows:
(13) 
where
is a diagonal matrix whose elements are Lagrange multipliers. The optimal solution is then obtained by computing the eigenvectors of Eq.(13) corresponding to the smallest eigenvalues.

3. subproblem: Only the cluster-indicator variable needs to be updated here. With the other variables fixed, the optimization problem reduces to minimizing Eq.(2). As in k-means clustering, we solve it by assigning each target sample to its nearest cluster centroid. To this end, we have:
(14) 
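The assignment step of Eq. (14) can be sketched as follows (the helper name `update_indicator` and the variable names are our own illustrative choices):

```python
import numpy as np

def update_indicator(Z_t, centroids):
    """Assignment step: each projected target sample joins its nearest
    cluster centroid; returns a one-hot cluster-indicator matrix."""
    dists = ((Z_t[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    G = np.zeros((len(Z_t), len(centroids)))
    G[np.arange(len(Z_t)), nearest] = 1.0
    return G

Z_t = np.array([[0.1, 0.0], [9.8, 10.1], [0.0, 0.2]])  # projected target samples
C = np.array([[0.0, 0.0], [10.0, 10.0]])               # current cluster centroids
G = update_indicator(Z_t, C)
```

Each row of the indicator matrix contains exactly one 1, marking the centroid (and hence the pseudo-label) assigned to that sample.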
4. subproblem: With the other variables fixed, the optimization problem with respect to the adjacency matrix reduces to minimizing Eq.(4). We can divide it into independent subproblems, each formulated as:
(15) 
where the optimization variable is a single row of the adjacency matrix. By defining the corresponding distance vector, the above problem can be written as:
(16) 
The corresponding Lagrangian function is:
(17) 
where and are the Lagrangian multipliers.
To exploit the data locality and reduce computation time, we prefer to learn a sparse adjacency matrix, i.e., only the nearest neighbors of each sample are kept locally connected. Based on the KKT conditions, Eq.(17) has a closed-form solution:
(18) 
where the elements are obtained by sorting the entries of each row of the distance matrix in ascending order. Following [19], we set the value of the regularization parameter as:
(19) 
Similar to , we also define as the element of matrix which is obtained by sorting the entries for each row of from small to large.
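The closed-form update of Eqs. (18)–(19) follows the adaptive-neighbors construction of [19]. A minimal sketch under the usual conventions (squared Euclidean distances, k neighbors per row; function name and the uniform fallback for degenerate ties are ours):

```python
import numpy as np

def adaptive_neighbors(Z, k=5):
    """Row-wise closed-form adjacency of the adaptive-neighbors model:
    s_ij = (d_{i,k+1} - d_ij) / (k * d_{i,k+1} - sum_{h<=k} d_{i,h})
    for the k nearest neighbors of sample i, and 0 elsewhere."""
    n = len(Z)
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)  # squared distances
    np.fill_diagonal(D, np.inf)                             # exclude self-loops
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[: k + 1]   # (k+1) smallest distances for row i
        d = D[i, idx]
        denom = k * d[k] - d[:k].sum()
        if denom > 1e-12:
            S[i, idx[:k]] = (d[k] - d[:k]) / denom
        else:                              # degenerate ties: uniform weights
            S[i, idx[:k]] = 1.0 / k
    return S

rng = np.random.default_rng(0)
S = adaptive_neighbors(rng.normal(size=(12, 3)), k=4)
```

Each row is non-negative, sums to one, and has at most k nonzero entries, which yields the sparse, probability-like adjacency the self-learning term relies on.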
We use a linear SVM classifier (LIBLINEAR, https://www.csie.ntu.edu.tw/~cjlin/liblinear/) to initialize the target label matrix. The initial adjacency matrix is obtained by solving each subproblem of the form (16) in the original space. The detailed optimization steps of CMMS are summarized in Algorithm 1.
III-D Convergence and Complexity Analysis
III-D1 Convergence Analysis
We can prove the convergence of the proposed Algorithm 1 via the following proposition:
Proposition 1. The objective function value of problem (9) generated by Algorithm 1 is non-increasing over the iterations; hence Algorithm 1 converges.
Proof.
Assume that at the t-th iteration we obtain the current values of the four variables, and denote the corresponding value of the objective function in (9) accordingly. In Algorithm 1, we divide problem (9) into four subproblems (10), (12), (14) and (15), each of which is convex with respect to its own variable. By solving the subproblems alternately, the algorithm finds the optimal solution of each subproblem. Therefore, as the combination of the four subproblems, the objective function value of (9) at the (t+1)-th iteration satisfies:
(20) 
In light of this, the proof is completed, and the algorithm converges to at least a local solution. ∎
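The same monotonicity argument can be checked numerically on a toy instance of the clustering subproblem: each alternating update (closed-form centroid mean, then nearest-centroid assignment) can only decrease the k-means objective, so the recorded values form a non-increasing sequence. This is an illustrative stand-in for the full CMMS objective, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
C = X[rng.choice(200, size=3, replace=False)].copy()

def kmeans_objective(X, C, labels):
    # sum of squared distances from each sample to its assigned centroid
    return float(((X - C[labels]) ** 2).sum())

labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
values = []
for _ in range(10):
    values.append(kmeans_objective(X, C, labels))
    for j in range(3):                     # subproblem 1: centroid = mean (closed form)
        if np.any(labels == j):
            C[j] = X[labels == j].mean(axis=0)
    # subproblem 2: reassign each sample to its nearest centroid
    labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
```

Since each subproblem is solved optimally with the others fixed, the objective sequence is non-increasing and bounded below, mirroring the proof above.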
III-D2 Complexity Analysis
The optimization Algorithm 1 of our CMMS comprises four subproblems, whose complexities are derived as follows. First, we consider the cost of initializing the adjacency matrix, and ignore the time to initialize the target labels since the base classifier is very fast. Then, in each iteration, we account for the complexity of constructing and solving the generalized eigenvalue problem (12), of obtaining the target cluster centroids, of updating the target label matrix, and of updating the adjacency matrix. The overall computational complexity is the sum of these per-iteration terms multiplied by the number of iterations.
IV Semi-supervised Extension
In this section, we further extend CMMS to semi-supervised domain adaptation, including both homogeneous and heterogeneous settings.
IV-1 Homogeneous Setting
We partition the target data into a labeled part, with its corresponding labels, and an unlabeled part. In the SDA scenario, besides the class centroids of the source data, the few but precisely labeled target samples provide additional valuable reference for determining the cluster centroids of the unlabeled data. We provide a simple but effective strategy to adaptively combine these two kinds of information. Specifically, our proposed semi-supervised extension is formulated as:
(21) 
where the balancing factors satisfy a sum constraint and the new matrix has the same definition as before. Eq.(21) can be transformed into the same standard form as Eq.(9):
(22) 
where the augmented matrices are defined accordingly. Apart from the two balancing factors, the remaining variables in Eq.(22) can be readily solved with Algorithm 1. Since the objective function is convex with respect to the two balancing factors, they admit simple closed-form solutions.
IV-2 Heterogeneous Setting
In the heterogeneous setting, the source and target data usually have different feature dimensions. Our proposed Eq.(21) can be naturally extended to the heterogeneous case simply by replacing the single projection matrix with two separate ones [24]:
(23)  
where the two projections map the two domains to new feature representations of the same dimension. With appropriate block definitions of the variables, Eq.(23) can be transformed into the same standard form as Eq.(22), and thus solved with the same algorithm.
V Experiments
Dataset  Subsets (Abbr.)  Samples  Feature (Dim)  Classes  
Office31  Amazon (A)  2,817  AlexnetFC(4,096)  31  
DSLR (D)  498  
Webcam (W)  795  
OfficeCaltech10  Amazon (A)  958  SURF(800), DeCAF(4,096)  10  
Caltech (C)  1,123  
DSLR (D)  157  
Webcam (W)  295  
MSRCVOC2007  MSRC (M)  1,269  Pixel(256)  6  
VOC2007 (V)  1,530  
OfficeHome  Art (Ar)  2,421  Resnet50(4,096)  65  
Clipart (Cl)  4,379  
Product (Pr)  4,428  
RealWorld (Re)  4,357  

Multilingual Reuters  English  18,758  BoW(1,131)  6  
French  26,648  BoW(1,230)  
German  29,953  BoW(1,417)  
Italian  24,039  BoW(1,041)  
Spanish  12,342  BoW(807) 
In this section, we first describe all involved datasets. Next, we give the details of the experimental setup, including the comparison methods in the UDA and SDA scenarios, the training protocol and the parameter settings. Then, the experimental results in the UDA scenario, an ablation study, and the parameter sensitivity and convergence analyses are presented. Finally, we report the results in the SDA scenario. The source code of this paper is available at https://github.com/LeiTianqj/CMMS/tree/master.
V-A Datasets and Descriptions
We apply our method to five benchmark datasets widely used in domain adaptation. These datasets are represented by different kinds of features, including AlexNet-FC, SURF, DeCAF, pixel, ResNet-50 and BoW features. Table II gives an overview of these datasets, and we introduce them in detail below.
Task  1NN  SVM  GFK  JDA  CORAL  DICD  JGSA  DICE  MEDA  SPL  MSC  CMMS 

A→D  59.8  58.8  61.8  65.5  65.7  66.5  69.5  67.5  69.5  69.1  71.9  72.9 
A→W  56.4  58.9  58.9  70.6  64.3  73.3  70.4  71.9  69.9  69.9  75.1  74.7 
D→A  38.1  48.8  45.7  53.7  48.5  56.3  56.6  57.8  58.0  62.5  58.8  60.7 
D→W  94.7  95.7  96.4  98.2  96.1  96.9  98.2  97.2  94.0  97.7  96.7  97.6 
W→A  39.8  47.0  45.5  52.1  48.2  55.9  54.2  60.0  56.0  58.2  57.2  60.3 
W→D  98.4  98.2  99.6  99.2  99.8  99.4  99.2  100.0  96.8  99.6  99.4  99.6 
Average  64.5  67.9  68.0  73.4  70.4  74.7  74.7  75.7  74.0  76.2  76.5  77.6 
Office31 [44] contains 4,110 images of office objects in 31 categories from three domains: Amazon (A), DSLR (D) and Webcam (W). Amazon images are downloaded from online merchants, while images from the DSLR and Webcam domains are captured by a digital SLR camera and a web camera, respectively. We adopt the AlexNet-FC features (https://github.com/VisionLearningGroup/CORAL/tree/master/dataset) fine-tuned on the source domain. Following [30], we build 6 cross-domain tasks, i.e., A→D, A→W, …, W→D.
OfficeCaltech10 [10] includes 2,533 images of objects in the 10 classes shared between the Office31 dataset and the Caltech256 (C) dataset, a widely used benchmark for object recognition. We use the 800-dim SURF features (http://boqinggong.info/assets/GFK.zip) and the 4,096-dim DeCAF features (https://github.com/jindongwang/transferlearning/blob/master/data/) [45]. Following [9], we construct 12 cross-domain tasks, i.e., A→C, A→D, …, W→D.
MSRCVOC2007 [25] consists of two subsets, MSRC (M) and VOC2007 (V), constructed by selecting 1,269 images from MSRC and 1,530 images from VOC2007 that share 6 semantic categories: aeroplane, bicycle, bird, car, cow and sheep. We utilize the 256-dim pixel features (http://ise.thss.tsinghua.edu.cn/~mlong/). This yields 2 tasks, M→V and V→M.
OfficeHome [46] involves 15,585 images of daily objects in 65 shared classes from four domains: Art (artistic depictions of objects, Ar), Clipart (clipart images, Cl), Product (images of objects without background, Pr) and RealWorld (images captured with a regular camera, Re). We use the 4,096-dim ResNet-50 features (https://github.com/hellowangqian/domain-adaptation-capls) released by [32]. Similarly, we obtain 12 tasks, i.e., Ar→Cl, Ar→Pr, …, Re→Pr.
Multilingual Reuters Collection [47] is a cross-lingual text dataset with about 11,000 articles from six common classes in five languages: English, French, German, Italian and Spanish. All articles are represented by BoW features with TF-IDF (http://archive.ics.uci.edu/ml/datasets/Reuters+RCV1+RCV2+Multilingual,+Multiview+Text+Categorization+Test+collection) and then processed by PCA for dimensionality reduction; the reduced dimensionalities for English, French, German, Italian and Spanish are 1,131, 1,230, 1,417, 1,041 and 807, respectively. We take Spanish as the target and each of the other languages as the source in turn, yielding four tasks.
V-B Experimental Setup
V-B1 Comparison Methods in UDA Scenario
V-B2 Comparison Methods in SDA Scenario
V-B3 Training Protocol
For the UDA scenario, all source samples are utilized for training as in [29], and we apply score standardization [10] to all kinds of features. For the SDA scenario, in the homogeneous setting, we use the OfficeCaltech10 and MSRCVOC2007 datasets following the same protocol as [22]. Specifically, for the OfficeCaltech10 dataset, we randomly choose 20 samples per category for the Amazon domain and 8 for the others as the source; three labeled target samples per class are selected for training, with the rest for testing. For fairness, we use the train/test splits released by [20]. For the MSRCVOC2007 dataset, all source samples are used for training, and 2 or 4 labeled target samples per category are randomly selected for training, leaving the remainder to be recognized. In the heterogeneous setting, we employ the OfficeCaltech10 and Multilingual Reuters Collection datasets with the experimental setting of [33]. For the OfficeCaltech10 dataset, the SURF and DeCAF features serve as the source and target, respectively; the source domain contains 20 instances per class, and 3 labeled target instances per category are selected for training, with the rest for testing. For the Multilingual Reuters Collection dataset, Spanish is chosen as the target and each remaining language as the source in turn; 100 articles per category are randomly selected to build the source domain, and 10 labeled target articles per category are selected for training, with 500 articles per class from the remainder to be classified.
Task  1NN  SVM  GFK  JDA  CORAL  DICD  JGSA  DICE  MEDA  SPL  MSC  CMMS 

AC  26.0  35.6  41.0  39.4  45.1  42.4  41.5  42.7  43.9  41.2  44.1  39.4 
AD  25.5  36.3  40.7  39.5  39.5  38.9  47.1  49.7  45.9  44.6  55.4  53.5 
AW  29.8  31.9  41.4  38.0  44.4  45.1  45.8  52.2  53.2  58.0  40.3  56.3 
CA  23.7  42.9  40.2  44.8  54.3  47.3  51.5  50.2  56.5  53.3  53.9  61.0 
CD  25.5  33.8  40.0  45.2  36.3  49.7  45.9  51.0  50.3  41.4  46.5  51.0 
CW  25.8  34.6  36.3  41.7  38.6  46.4  45.4  48.1  53.9  61.7  54.2  61.7 
DA  28.5  34.3  30.7  33.1  37.7  34.5  38.0  41.1  41.2  35.3  38.3  46.7 
DC  26.3  32.1  31.8  31.5  33.8  34.6  29.9  33.7  34.9  25.9  31.6  31.9 
DW  63.4  78.0  87.9  89.5  84.7  91.2  91.9  84.1  87.5  82.7  85.4  86.1 
WA  23.0  37.5  30.1  32.8  35.9  34.1  39.9  37.5  42.7  41.1  37.3  40.1 
WC  19.9  33.9  32.0  31.2  33.7  33.6  33.2  37.8  34.0  37.8  33.8  35.8 
WD  59.2  80.9  84.4  89.2  86.6  89.8  90.5  87.3  88.5  83.4  80.9  89.2 
Average  31.4  42.6  44.7  46.3  47.6  49.0  50.0  51.3  52.7  50.5  50.1  54.4 
Task  1NN  SVM  GFK  JDA  CORAL  DICD  JGSA  DICE  MEDA  SPL  MSC  CMMS 

AC  71.7  84.4  77.3  83.2  83.2  86.0  84.9  85.9  87.4  87.4  88.3  88.8 
AD  73.9  83.4  84.7  86.6  84.1  83.4  88.5  89.8  88.1  89.2  91.7  95.5 
AW  68.1  76.9  81.0  80.3  74.6  81.4  81.0  86.4  88.1  95.3  91.5  92.2 
CA  87.3  91.3  88.5  88.7  92.0  91.0  91.4  92.3  93.4  92.7  93.5  94.1 
CD  79.6  85.4  86.0  91.1  84.7  93.6  93.6  93.6  91.1  98.7  90.4  95.5 
CW  72.5  77.3  80.3  87.8  80.0  92.2  86.8  93.6  95.6  93.2  85.1  91.9 
DA  49.9  86.5  85.8  91.8  85.5  92.2  92.0  92.5  93.2  92.9  93.5  93.4 
DC  42.0  77.1  76.0  85.5  76.8  86.1  86.2  87.4  87.5  88.6  89.0  89.3 
DW  91.5  99.3  97.3  99.3  99.3  99.0  99.7  90.0  97.6  98.6  99.3  99.3 
WA  62.5  80.7  81.8  90.2  81.2  89.7  90.7  90.7  99.4  92.0  93.4  93.8 
WC  55.3  72.5  73.9  84.2  75.5  84.0  85.0  85.3  93.2  87.0  88.3  89.0 
WD  98.1  99.4  100.0  100.0  100.0  100.0  100.0  100.0  99.4  100.0  100.0  100.0 
Average  71.0  84.5  83.2  89.1  84.7  89.9  90.0  91.4  92.8  93.0  92.0  93.6 
Task  1NN  SVM  GFK  JDA  CORAL  DICD  JGSA  DICE  MEDA  SPL  MSC  CMMS 

MV  35.5  35.6  34.7  30.4  38.4  32.4  35.2  33.1  35.3  34.7  31.8  31.8 
VM  47.2  51.8  48.9  44.8  54.9  47.8  47.5  46.3  60.1  63.8  66.5  79.1 
Average  41.3  43.7  41.8  37.6  46.7  40.1  41.3  39.7  47.7  49.3  49.2  55.4 
Task  1NN  SVM  GFK  JDA  CORAL  DICD  JGSA  DICE  MEDA  SPL  MSC  CMMS 

ArCl  37.9  42.4  38.7  45.8  47.3  53.0  51.3  49.1  52.1  54.5  53.8  56.2 
ArPr  54.4  61.2  57.7  63.6  69.3  73.6  72.9  70.7  75.3  77.8  78.4  80.8 
ArRe  61.6  69.9  63.0  67.5  74.6  75.7  78.5  73.9  77.6  81.9  78.8  82.8 
ClAr  40.7  42.6  43.3  53.3  54.2  59.7  58.1  51.4  61.0  65.1  64.0  65.9 
ClPr  52.7  56.2  54.6  62.2  67.2  70.3  72.4  65.9  76.5  78.0  75.0  78.7 
ClRe  52.5  57.7  54.2  62.9  67.8  70.6  73.4  65.9  76.8  81.1  78.9  82.2 
PrAr  47.1  48.7  48.0  56.0  55.7  60.9  62.3  60.0  61.8  66.0  64.8  67.7 
PrCl  41.1  41.5  41.6  47.1  43.0  49.4  50.3  48.6  53.4  53.1  52.3  54.5 
PrRe  66.7  70.6  66.8  72.9  73.9  77.7  79.4  76.2  79.5  82.8  79.9  82.9 
ReAr  57.1  61.6  58.1  61.8  64.2  67.9  67.9  65.4  68.1  69.9  67.0  69.5 
ReCl  45.1  45.7  45.0  50.5  49.2  56.2  53.4  53.5  55.1  55.3  55.8  57.1 
RePr  72.9  76.1  72.8  75.2  78.0  79.7  80.4  78.8  82.5  86.0  80.3  85.2 
Average  52.5  56.2  53.6  59.9  62.0  66.2  66.7  63.3  68.3  71.0  69.1  72.0 
Dataset  JDA  CMMS  CMMS  CMMS  CMMS  CMMS 

Office31  73.4  74.6  75.8  76.2  74.8  77.6 
OfficeCaltech10(SURF)  46.3  51.4  52.4  53.2  51.8  54.4 
OfficeCaltech10(DeCAF)  89.1  92.8  92.9  93.2  92.3  93.6 
MSRCVOC2007  37.6  53.2  53.5  55.1  54.9  55.4 
OfficeHome  61.9  70.2  70.7  71.2  70.6  72.0 
Average  61.7  68.4  69.1  69.8  68.9  70.6 
VB4 Parameter Setting
In both UDA and SDA scenarios, we do not have massive labeled target samples, so we cannot perform a standard cross-validation procedure to obtain the optimal parameters. For a fair comparison, we cite the results from the original papers or run the code provided by the authors. Following [29], we grid-search the hyper-parameter space and report the best results. For GFK, JDA, DICD, JGSA, DICE and MEDA, the optimal reduced dimension and the best value of the regularization parameter for projection are searched over candidate grids. For the two recent methods, SPL and MSC, we adopt the default parameters used in their public codes or follow the tuning procedures described in the corresponding original papers. For our method, we fix two parameters and leave the others tunable, obtaining their optimal values by grid search.
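The grid search described above amounts to evaluating every hyper-parameter combination and keeping the best-scoring one. A minimal sketch, where the candidate grids and the `evaluate` function are placeholders (the paper's actual search ranges are not reproduced here):

```python
import itertools

# Placeholder candidate grids; the paper's actual ranges are elided.
dims = [10, 20, 50, 100]
lambdas = [0.001, 0.01, 0.1, 1.0, 10.0]

def evaluate(dim, lam):
    # Stand-in for: project to `dim` dimensions, train, return target accuracy.
    return 1.0 / (1.0 + abs(dim - 50) + abs(lam - 0.1))

best_acc, best_cfg = -1.0, None
for dim, lam in itertools.product(dims, lambdas):
    acc = evaluate(dim, lam)
    if acc > best_acc:
        best_acc, best_cfg = acc, (dim, lam)
print(best_cfg)  # (50, 0.1)
```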
VC Unsupervised Domain Adaptation
VC1 Experimental results on UDA
Results on Office31 dataset. Table III summarizes the classification results on the Office31 dataset, where the highest accuracy of each cross-domain task is boldfaced. We observe that CMMS achieves the best average performance, with a 1.1 improvement over the strongest competitor, MSC. CMMS achieves the highest results on 2 out of 6 tasks, while MSC only wins task AW, with just 0.4 higher accuracy than CMMS. Generally, SPL, MSC and CMMS perform better than the methods that classify target samples independently, which demonstrates that exploiting the structure information of the data distribution can facilitate classification. Compared with SPL and MSC, however, CMMS further mines the inherent local manifold structure of target data to promote cluster prototype learning, and thus achieves better performance.
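A minimal sketch of the class-centroid-matching idea discussed here (not the paper's exact objective): cluster the target data, match the resulting cluster centroids to the source class centroids, and label each target sample by its matched cluster. The toy data, the k-means clustering and the Hungarian matching are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy 2-class source/target data with a small domain shift.
src = np.vstack([rng.normal([0, 0], 0.3, (30, 2)),
                 rng.normal([4, 4], 0.3, (30, 2))])
y_src = np.repeat([0, 1], 30)
tgt = np.vstack([rng.normal([0.5, 0.5], 0.3, (30, 2)),
                 rng.normal([4.5, 4.5], 0.3, (30, 2))])

# Source class centroids.
cents_src = np.vstack([src[y_src == c].mean(0) for c in (0, 1)])

# Cluster the target data, then match cluster centroids to source
# class centroids via the Hungarian algorithm on pairwise distances.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tgt)
cost = cdist(cents_src, km.cluster_centers_)
rows, cols = linear_sum_assignment(cost)
cluster_to_class = {c: k for k, c in zip(rows, cols)}

# Pseudo-labels for target samples follow the matched clusters.
y_tgt_pseudo = np.array([cluster_to_class[c] for c in km.labels_])
print(y_tgt_pseudo[:5])
```

On this well-separated toy example, the first 30 target samples receive class 0 and the last 30 class 1, illustrating how matching centroids (rather than labeling samples one by one) respects the cluster structure of the target domain.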
Results on OfficeCaltech10 dataset. The results on the OfficeCaltech10 dataset with SURF features are listed in Table IV. Regarding the average accuracy, CMMS shows a clear advantage, improving by 1.7 over the second best method, MEDA. CMMS is the best method on 4 out of 12 tasks, while MEDA only wins two. On CA, DA and CW, CMMS leads MEDA by a margin of over 4.5. Following [32], we also employ the DeCAF features; the classification results are shown in Table V. CMMS is superior to all competitors with regard to the average accuracy and ranks best or second best on all tasks except CW. Carefully comparing the results on SURF and DeCAF features, we find that SPL and MSC favor deep features, whereas CMMS shows no such preference, which indicates that CMMS has better generalization capacity.
Results on MSRCVOC2007 dataset. The experimental results on the MSRCVOC2007 dataset are reported in Table VI. The average classification accuracy of CMMS is 55.4, which is significantly higher than those of all competitors. In particular, on task VM, CMMS gains a large performance improvement of 15.3 over the second best method, SPL, which verifies the effectiveness of our proposal.
Results on OfficeHome dataset. For fairness, we employ the deep features recently released by [32], which are extracted using the ResNet50 model pretrained on ImageNet. Table VII summarizes the classification accuracies. CMMS outperforms the second best method SPL in average performance, and achieves the best results on 10 out of all 12 tasks, while SPL only wins task RePr, with just a 0.8 advantage over CMMS. This shows that even if the target samples are well clustered in the deep feature space, exploiting the inherent local manifold structure is still crucial to improving classification performance.
VC2 Ablation Study
To understand CMMS more deeply, we evaluate four variants: a) a variant that only considers class Centroid Matching between the two domains, i.e., the combination of Eq.(2), Eq.(3) and Eq.(7); b) a variant that does not utilize the local manifold structure of target data, i.e., Eq.(4) is removed from the objective function Eq.(8); c) a variant that models the local manifold structure of target data with a Predefined Adjacency matrix in the original feature space, i.e., Eq.(4) is replaced with the Laplacian regularization; d) a variant that exploits the Discriminative Structure information of the target domain by assigning pseudo-labels to target data and then minimizing the intra-class scatter in the projected space, as in Eq.(5). Table VIII shows the results of CMMS and all variants, together with those of the classical JDA. Based on this table, we analyze our approach in more detail below.
Effectiveness of class centroid matching. The centroid-matching-only variant consistently outperforms JDA on all five datasets, which confirms the superiority of our proposal over the pioneering MMD-based approach. Through the class centroid matching strategy, we make full use of the structure information of the data distribution, so target samples are expected to exhibit a favorable cluster distribution. For a clear illustration, Fig. 3 displays the t-SNE [50] visualization of the target features in the projected space on task VM of the MSRCVOC2007 dataset. We observe that JDA features are mixed together while CMMS features are well separated with a clear cluster structure, which verifies the effectiveness of our class centroid matching strategy.
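The visualization in Fig. 3 can be reproduced in spirit with scikit-learn's t-SNE; the feature matrix below is a synthetic placeholder for the projected target features, and the perplexity value is just a common default:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder for projected target features: 3 loose clusters in 50-D.
feats = np.vstack([rng.normal(c, 1.0, (40, 50)) for c in (0.0, 5.0, 10.0)])

# 2-D embedding for plotting; well-separated clusters should remain
# well-separated in the embedding.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)
print(emb.shape)  # (120, 2)
```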
Task  SVM  SVM  MMDT  DTMKLf  CDLS  ILS  TFMKLS  TFMKLH  CMMS 

AC  31.1  42.4  36.4  40.5  37.1  43.6  43.8  43.5  43.0 
AD  56.9  47.7  56.7  45.9  61.9  49.8  62.0  57.3  61.8 
AW  62.8  50.1  64.6  47.9  69.3  59.7  70.9  71.4  69.2 
CA  44.7  47.2  49.4  47.3  52.5  55.1  54.2  54.4  57.7 
CD  56.3  52.0  56.5  52.2  59.8  56.2  60.1  60.4  59.1 
CW  60.0  54.5  63.8  54.4  68.7  62.9  68.1  68.6  67.2 
DA  44.7  44.3  46.9  41.6  51.8  55.0  53.1  50.8  55.2 
DC  31.3  36.8  34.1  36.0  36.9  41.0  38.9  37.9  39.5 
DW  62.0  80.6  74.1  77.6  70.7  80.1  79.1  76.7  82.5 
WA  45.4  45.2  47.7  45.3  52.3  54.3  54.4  54.0  53.4 
WC  29.7  36.1  32.2  36.3  35.1  38.6  36.2  34.9  37.7 
WD  56.5  71.2  67.0  69.6  61.3  70.8  69.1  69.3  72.9 
Average  48.4  50.7  52.5  49.6  54.8  55.6  57.5  56.6  58.3 
Effectiveness of local manifold selflearning strategy for target data. The variants that exploit the local manifold structure of target samples perform better than the variant without it on all datasets, which indicates that this structure helps to classify target samples more successfully, even when the captured manifold is not fully reliable. Moreover, capturing it more faithfully yields superior performance, which is verified by comparing the selflearning model with the predefined-adjacency variant. For a better understanding, we show the visualization of the target adjacency matrix on task AD (SURF) in Fig. 4. These matrices are obtained by either the selflearned distance or the predefined distances, which include the Euclidean distance, the heat-kernel distance with kernel width 1.0, and the cosine distance. As shown in Fig. 4, all predefined distances tend to incorrectly connect unrelated samples and hardly capture the inherent local manifold structure of target data, whereas the selflearned distance adaptively builds connections between intrinsically similar samples and thus improves classification performance. The discriminative-structure variant performs much worse than CMMS, which verifies that utilizing the discriminative information of the target domain by assigning pseudolabels to target samples independently is far from enough to achieve satisfactory results: the pseudolabels may be inaccurate and cause error accumulation during learning, degrading performance dramatically. In summary, our local manifold selflearning strategy effectively enhances the utilization of the structure information contained in target data.
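The predefined affinities compared in Fig. 4 can be sketched as follows; CMMS's selflearned affinity is optimized jointly with the projection and is not reproduced here, so only the fixed alternatives are shown (the toy data are arbitrary; the kernel width 1.0 matches the value quoted above):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))  # toy target samples

# Euclidean distance matrix.
d_euc = cdist(X, X, metric="euclidean")

# Heat-kernel affinity with kernel width 1.0.
A_heat = np.exp(-d_euc ** 2 / 1.0)

# Cosine similarity (1 minus the cosine distance).
A_cos = 1.0 - cdist(X, X, metric="cosine")

print(A_heat.shape, bool(np.allclose(np.diag(A_heat), 1.0)))  # (8, 8) True
```

Each matrix connects every pair of samples with a fixed, data-independent rule, which is exactly why they can wire unrelated samples together when the original feature space is noisy.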
Task  SVM  SVM  MMDT  DTMKLf  CDLS  ILS  TFMKLS  TFMKLH  CMMS  

n = 2  MV  28.9  38.5  35.1  36.8  30.2  34.2  38.2  36.6  35.8 
VM  55.9  55.5  59.9  64.1  55.8  49.9  68.5  71.0  76.8  
Average  42.4  47.0  47.5  50.5  43.0  42.1  53.4  53.8  56.3  
n = 4  MV  30.2  39.0  36.0  36.9  31.7  35.4  38.4  36.8  36.2 
VM  64.4  56.6  62.1  65.0  57.3  50.2  70.4  71.4  77.7  
Average  47.3  47.8  49.1  51.0  44.5  42.8  54.4  54.1  57.0 
VC3 Parameters Sensitivity and Convergence Analysis
There are two tunable parameters in CMMS. We have conducted extensive parameter sensitivity analysis on all datasets over a wide range of values, varying one parameter at a time while fixing the others at their optimal values. The results on CW (SURF), VM, AD (Alexnet) and ArPr are reported in Fig. 5 (a)-(b). Meanwhile, to demonstrate the effectiveness of CMMS, we also display the results of the best competitor as dashed lines.
First, we run CMMS with the first parameter varying from 0.001 to 10.0. From Fig. 5 (a), we observe that when its value is very small, it contributes little to the improvement of performance. However, as it increases appropriately, the clustering process of target data is emphasized, and CMMS can exploit the cluster structure information more effectively. We find that when this parameter lies within a wide range, our proposal achieves consistently optimal performance. Then, we evaluate the influence of the second parameter by varying its value from 0.001 to 10.0. It is infeasible to determine its optimal value in general, since it highly depends on prior knowledge of the datasets. However, we empirically find that when it lies within a moderate range, CMMS obtains better classification results than the most competitive competitor. We also display the convergence analysis in Fig. 5 (c): CMMS converges quickly, within 10 iterations.
VD Semisupervised Domain Adaptation
VD1 Results in Homogeneous Setting
The averaged classification results of all methods on the OfficeCaltech10 dataset over 20 random splits are shown in Table IX. Regarding the total average accuracy, CMMS obtains an improvement over the second best method, TFMKLS. The results on the MSRCVOC2007 dataset over 5 random splits are shown in Table X, where some results are cited from [22]. Compared with the most competitive competitors, CMMS achieves improvements when the number of labeled target samples per class is set to 2 and 4, respectively.
Task  SVM  MMDT  SHFA  CDLS  Li et al. [33]  CMMS 

AC  79.8  78.1  79.5  83.8  84.3  88.6 
AW  90.5  89.4  90.1  93.6  93.2  93.2 
CA  89.0  87.5  88.6  90.7  92.6  93.2 
CW  90.5  88.9  89.6  92.5  92.1  93.2 
WA  89.0  88.3  88.7  90.5  93.9  93.3 
WC  79.8  78.6  79.7  82.1  84.9  88.5 
Average  86.4  85.1  86.0  88.9  90.2  91.7 
Source  SVM  MMDT  SHFA  CDLS  Li et al. [33]  CMMS 

English  67.4  67.8  68.9  70.8  71.1  74.7 
French  68.3  69.1  71.2  71.2  74.4  
German  67.7  68.3  71.0  70.9  74.7  
Italian  66.5  67.5  71.7  71.5  74.6  
Average  67.4  67.6  68.5  71.2  71.2  74.6 
VD2 Results in Heterogeneous Setting
The results on the OfficeCaltech10 and Multilingual Reuters Collection datasets are listed in Table XI and Table XII, where some results are cited from [33]. We observe that CMMS achieves the best average accuracy on both datasets, with clear improvements over the best competitors. In particular, CMMS works the best on 8 out of all 10 tasks across the two datasets, which confirms the strong generalization capacity of CMMS in the heterogeneous setting.
Vi Conclusions and Future Work
In this paper, a novel domain adaptation method named CMMS is proposed. Unlike most existing methods, which generally assign pseudolabels to target data independently, CMMS predicts labels for target samples through class centroid matching between the source and target domains, so that the data distribution structure of the two domains can be exploited. To explore the structure information of target data more thoroughly, a local manifold selflearning strategy is further introduced into CMMS, which captures the inherent local manifold structure of target data by adaptively learning the data similarity in the projected space. The CMMS optimization problem is not jointly convex in all variables, so an iterative optimization algorithm is designed to solve it, whose computational complexity and convergence are carefully analyzed. We further extend CMMS to the semisupervised scenario, covering both homogeneous and heterogeneous settings. Extensive experimental results on five datasets reveal that CMMS significantly outperforms the baselines and several stateoftheart methods in both unsupervised and semisupervised scenarios.
Future research will include the following: 1) considering the computational bottleneck of CMMS optimization, we will design a more efficient algorithm for local manifold selflearning; 2) besides class centroids, additional measures such as the covariance could be introduced to represent the structure of the data distribution; 3) in this paper, CMMS is extended to the semisupervised scenario in a direct but effective way; more elaborate designs of semisupervised methods are worth further exploration.
Acknowledgment
The authors are thankful for the financial support by the National Natural Science Foundation of China (61432008, 61472423, U1636220 and 61772524).
References
 [1] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 13451359, Oct. 2010.
 [2] Z. Guo and Z. Wang, “Crossdomain object recognition via inputoutput kernel analysis,” IEEE Trans. Image Process., vol. 22, no. 8, pp. 31083119, Aug. 2013.
 [3] A. Rozantsev, M. Salzmann, and P. Fua, “Beyond sharing weights for deep domain adaptation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 4, pp. 801814, Apr. 2019.
 [4] C.X. Ren, D.Q. Dai, K.K. Huang, and Z.R. Lai, “Transfer learning of structured representation for face recognition,” IEEE Trans. Image Process., vol. 23, no. 12, pp. 54405454, Dec. 2014.
 [5] Q. Qiu and R. Chellappa, “Compositional dictionaries for domain adaptive face recognition,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 51525165, Dec. 2015.
 [6] A. J. Ma, J. Li, P. C. Yuen, and P. Li, “Crossdomain person reidentification using domain adaptation ranking SVMs,” IEEE Trans. Image Process., vol. 24, no. 5, pp. 15991613, May 2015.
 [7] M. Long, J. Wang, G. Ding, S. J. Pan, and P. S. Yu, “Adaptation regularization: A general framework for transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 5, pp. 10761089, May 2014.
 [8] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation via transfer component analysis,” IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 199210, Feb. 2011.

 [9] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, “Transfer feature learning with joint distribution adaptation,” in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 22002207.
 [10] B. Gong, Y. Shi, F. Sha, and K. Grauman, “Geodesic flow kernel for unsupervised domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2012, pp. 20662073.
 [11] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, “Unsupervised visual domain adaptation using subspace alignment,” in Proc. Int. Conf. Comput. Vis., Dec. 2013, pp. 29602967.
 [12] B. Sun, J. Feng, and K. Saenko, “Return of frustratingly easy domain adaptation,” in Proc. Amer. Assoc. Artif. Intell. Conf., 2016, pp. 20582065.
 [13] A. Gretton, K. M. Borgwardt, M. Rasch, B. Scholkopf, and A. J. Smola, “A kernel method for the twosampleproblem,” in Proc. Adv. in Neural Inf. Process. Syst., 2007, pp. 513520.
 [14] J. Macqueen, “Some methods for classification and analysis of multivariate observations,” in Proc. 5th Berkeley Symp. Math. Statist. Probab., 1967, pp. 281297.
 [15] A. Goh and R. Vidal, “Segmenting motions of different types by unsupervised manifold clustering,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2007, pp. 16.
 [16] A. Goh and R. Vidal, “Clustering and dimensionality reduction on Riemannian manifolds,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 17.

 [17] L. K. Saul and S. T. Roweis, “Think globally, fit locally: unsupervised learning of low dimensional manifolds,” J. Mach. Learn. Res., vol. 4, pp. 119–155, Dec. 2003.
 [18] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in Proc. Adv. Neural Inf. Process. Syst., Dec. 2001, pp. 585591.
 [19] F. Nie, X. Wang, and H. Huang, “Clustering and projected clustering with adaptive neighbors,” in Proc. 20th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2014, pp. 977986.
 [20] J. Hoffman, E. Rodner, J. Donahue, B. Kulis, and K. Saenko, “Asymmetric and category invariant feature transformations for domain adaptation,” Int. J. Comput. Vis., vol. 41, nos. 12, pp. 2841, 2014.
 [21] S. Herath, M. Harandi, and F. Porikli, “Learning an invariant hilbert space for domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2017, pp. 38453854.
 [22] W. Wang, H. Wang, Z. X. Zhang, C. Zhang, and Y. Gao, “Semisupervised domain adaptation via Fredholm integral based kernel methods,” Pattern Recognit., vol. 85, pp. 185197, Jan. 2019.
 [23] W. Li, L. Duan, D. Xu, and I. W. Tsang, “Learning with augmented features for supervised and semisupervised heterogeneous domain adaptation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 6, pp. 11341148, Jun. 2014.
 [24] Y.T. Hsieh, S.Y. Tao, Y.H. H. Tsai, Y.R. Yeh, and Y.C. F. Wang, “Recognizing heterogeneous crossdomain data via generalized joint distribution adaptation,” in Proc. IEEE Int. Conf. Multimedia Expo., Jul. 2016, pp. 16.
 [25] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, “Transfer joint matching for unsupervised domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 14101417.
 [26] S. Chen, F. Zhou, and Q. Liao, “Visual domain adaptation using weighted subspace alignment,” in Proc. SPIE Int. Conf. Vis. Commun. Image Process., Nov. 2016, pp. 14.
 [27] J. Wang, W. Feng, Y. Chen, H. Yu, M. Huang, and P. S. Yu, “Visual domain adaptation with manifold embedded distribution alignment,” in Proc. ACM Multimedia Conf., 2018, pp. 402410.
 [28] L. Zhang. (2019). “Transfer adaptation learning: a decade survey.” [Online]. Available: https://arxiv.xilesou.top/abs/1903.04687
 [29] S. Li, S. Song, G. Huang, and Z. Ding, “Domain invariant and class discriminative feature learning for visual domain adaptation,” IEEE Trans. Image Process., vol. 27, no. 9, pp. 42604273, Sept. 2018.
 [30] J. Liang, R. He, and T. Tan, “Aggregating randomized clusteringpromoting invariant projections for domain adaptation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 5, pp. 10271042, May 2019.
 [31] J. Liang, R. He, Z. Sun and T. Tan, “Distant Supervised Centroid Shift: A Simple and Efficient Approach to Visual Domain Adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2019, pp. 29752984.
 [32] Q. Wang and T. P. Breckon. (2019). “Unsupervised domain adaptation via structured prediction based selective pseudolabeling.” [Online]. Available: https://arxiv.xilesou.top/abs/1911.07982
 [33] J. Li, K. Lu, Z. Huang, L. Zhu, and H. Shen, “Heterogeneous domain adaptation through progressive alignment,” IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 5, pp. 13811391, May 2019.
 [34] Y.H. H. Tsai, Y.R. Yeh, and Y.C. F. Wang, “Learning crossdomain landmarks for heterogeneous domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 50815090.
 [35] M. Xiao and Y. Guo, “Feature space independent semisupervised domain adaptation via kernel matching,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 1, pp. 5466, Jan. 2015.
 [36] T. Yao, Y. Pan, C.W. Ngo, H. Li, and T. Mei, “Semisupervised domain adaptation with subspace learning for visual recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 21422150.
 [37] D. Hong, N. Yokoya, and X. Zhu, “Learning a robust local manifold representation for hyperspectral dimensionality reduction,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 10, no. 6, pp. 29602975, Jun. 2017.
 [38] K. Zhan, F. Nie, J. Wang, and Y. Yang, “Multiview consensus graph clustering,” IEEE Trans. Image Process., vol. 28, no. 3, pp. 12611270, Mar. 2019.
 [39] C. Hou, F. Nie, H. Tao, and D. Yi, “Multiview unsupervised feature selection with adaptive similarity and view weight,” IEEE Trans. Knowl. Data Eng., vol. 29, no. 9, pp. 19982011, Sept. 2017.
 [40] W. Wang, Y. Yan, F. Nie, S. Yan, and N. Sebe, “Flexible manifold learning with optimal graph for image and video representation,” IEEE Trans. Image Process., vol. 27, no. 6, pp. 26642675, Jun. 2018.
 [41] C.A. Hou, Y.H. H. Tsai, Y.R. Yeh, and Y.C. F. Wang, “Unsupervised domain adaptation with label and structural consistency,” IEEE Trans. Image Process., vol. 25, no. 12, pp. 55525562, Dec. 2016.
 [42] J. Li, M. Jing, K. Lu, L. Zhu and H. Shen, “Locality preserving joint transfer for domain adaptation,” IEEE Trans. Image Process., vol. 28, no. 12, pp. 61036115, Dec. 2019.
 [43] S. Wang, J. Lu, X. Gu, H. Du, and J. Yang, “Semisupervised linear discriminant analysis for dimension reduction and classification,” Pattern Recognit., vol. 57, pp. 179189, Sept. 2016.
 [44] K. Saenko, B. Kulis, M. Fritz, and T. Darrell, “Adapting visual category models to new domains,” in Proc. Eur. Conf. Comput. Vis., 2010, pp. 213226.
 [45] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “DeCAF: A deep convolutional activation feature for generic visual recognition,” in Proc. Int. Conf. Mach. Learn., 2014, pp. 647655.
 [46] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, “Deep hashing network for unsupervised domain adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2017, pp. 50185027.
 [47] M.R. Amini, N. Usunier, and C. Goutte, “Learning from multiple partially observed viewsan application to multilingual text categorization,” in Proc. Adv. in Neural Inf. Process. Syst., Dec. 2009, pp. 2836.
 [48] J. Zhang, W. Li, and P. Ogunbona, “Joint geometrical and statistical alignment for visual domain adaptation,” in Proc. IEEE Conf.Comput. Vis. Pattern Recognit., Jun. 2017, pp. 18591867.
 [49] L. Duan, I. W. Tsang, and D. Xu, “Domain transfer multiple kernel learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 3, pp. 465479, Mar. 2012.
 [50] L. van der Maaten and G. Hinton, “Visualizing data using tSNE,” J. Mach. Learn. Res., vol. 9, pp. 25792605, Nov. 2008.