Multi-source transfer learning has been proven effective when within-target labeled data is scarce. Previous work focuses primarily on exploiting domain similarities and assumes that source domains are richly or at least comparably labeled. Since this strong assumption rarely holds in practice, this paper relaxes it and addresses the challenges posed by sources with diverse labeling volume and diverse reliability. The first challenge is to combine domain similarity and source reliability; we address it with a new transfer learning method that exploits both source-target similarities and inter-source relationships. The second challenge is pool-based active learning when the oracle is available only in the source domains; we address it with an integrated active transfer learning framework that combines distribution matching and uncertainty sampling. Extensive experiments on a synthetic and two real-world datasets clearly demonstrate the superiority of our proposed methods over several baselines, including state-of-the-art transfer learning methods.
Traditional supervised machine learning methods share the common assumption that training data and test data are drawn from the same underlying distribution. Typically, they also require sufficient labeled training instances to construct accurate models. In practice, however, one can often obtain only limited labeled training instances. Inspired by human beings' ability to transfer previously learned knowledge to a related task, transfer learning addresses the challenge of data scarcity in the target domain by utilizing labeled data from other related source domain(s). Plenty of research has been done on the single-source setting [25, 22, 35], and it has been shown that transfer learning has a wide range of applications in areas such as sentiment analysis [3, 21, 15], cross-lingual natural language processing, and urban computing.
In recent years, new studies have contributed to a more realistic transfer learning setting with multiple source domains, both for classification problems [33, 16, 7, 28] and regression problems [24, 31]. These approaches try to capture the diverse source-target similarities, either by measuring the distribution difference between each source and the target or by re-weighting source instances, so that only the source knowledge likely to be relevant to the target domain is transferred. However, these works assume that all sources are equally reliable; in other words, that all sources have labeled data of the same or comparable quantity and quality. In real-world applications, this assumption rarely holds. For instance, recent sources typically contain less labeled data than long-established ones, and narrower sources contain less data than broader ones. Also, some sources may contain more labeling noise than others. It is therefore common for source tasks to exhibit diverse reliabilities. However, to the best of our knowledge, little work has been reported in the literature to compensate for source reliability divergences. Ideally, we strive to find sources that are relevant and reliable at the same time, as a compromise in either aspect can hurt performance. When this is infeasible, one has to include some less reliable sources and carefully weigh the trade-off. There are two reasons why including a source that is not richly labeled is desirable. First, such a source may be very informative because it is closely related to the target domain. For example, in low-resource language processing, a language that is very similar to the target language is usually also relatively low-resource, yet it would still be used as an important source due to its proximity. Second, while labeled data may be scarce, unlabeled data is often easier to obtain, and one can acquire labels for unlabeled data from domain experts via active learning, assuming a budget is available.
Active learning addresses the problem of data scarcity differently than transfer learning: it queries new labels from an oracle for the most informative unlabeled data. A fruitful line of prior work has developed various active learning strategies [20, 14, 18]. Recently, the idea of combining transfer learning and active learning has attracted increasing attention [6, 30, 13]. Most existing work assumes that there is plenty of labeled data in the source domain(s) and that one can query labels for the target domain from an oracle. However, this assumption is not always true. Since transfer learning is mainly applied in target domains with restricted access to data, such as sensitive private domains or novel domains that lack the human knowledge needed to generate sufficient labeled data, there is often little or no readily available oracle in the target domain. For instance, in an influenza diagnosis task, transfer learning can be applied to assist in diagnosing a new flu strain (target domain) by exploiting historical data of related flu strains (source domains), but there may be little expertise regarding the new strain. It is, however, often possible to query labels for a selected set of unlabeled data in the source domains, as these may represent previous, better-studied flu strains. Task similarity, corresponding to flu-strain diagnosis or treatment in this instance, may be established via commonalities in symptoms exhibited by patients with the new or old flu strains, or via analysis of the various influenza genomes. To the best of our knowledge, active learning for multi-source transfer learning has not been deeply studied.
In this paper, we focus on a novel research problem: transfer learning with multiple sources exhibiting diverse reliabilities and different relations to the target task. We study two related tasks. The first is how to construct a robust model that effectively transfers knowledge from unevenly reliable sources, including those with little labeled data. The second is how to apply active learning on multiple sources to stimulate better transfer, especially in a scenario where sources have diverse data quantities and qualities. Notice that these two tasks are related but can also arise independently. For instance, in the low-resource language problem mentioned above, the first task is applicable but not the second, because we may have an oracle for neither the source nor the target, as both may be of extremely limited resources. On the other hand, both tasks are relevant to the influenza example.
For the first task, we propose a peer-weighted multi-source transfer learning method (PW-MSTL) that jointly measures source proximity and reliability. The algorithm utilizes inter-source relationships, which have received little prior study, in two ways: (i) it combines proximity coefficients, based on distribution differences, with reliability coefficients measured using peers (we define the peers of a source as the other sources, weighted by inter-source relationships) to learn the relevance of each source to the target model, and (ii) when a source is not confident in its prediction for a test instance, it is allowed to query its peers. By doing so, the proposed method allows knowledge transfer among sources and effectively utilizes label-scarce sources. For the second task, we propose an active learning framework, called adaptive multi-source active transfer (AMSAT), that builds on the ideas of Kernel Mean Matching (KMM) [11, 8] and uncertainty sampling to select unlabeled instances that are the most representative while avoiding information redundancy. Experimental results, shown later, demonstrate that the combination of PW-MSTL and AMSAT significantly outperforms other competing methods. The proposed methods are generic and can be generalized to other learning tasks.
Transfer learning: Transfer learning aims at utilizing source-domain knowledge to construct a model with low generalization error in the target domain. Transfer learning with multiple sources is challenging because of distribution discrepancies and complex inter-source dependencies. Most existing work focuses on how to capture the diverse domain similarities. Owing to their stability and their ability to measure fine-grained similarities explicitly, ensemble approaches are widely used. Adaptive SVM (A-SVM) adapts several trained source SVMs and learns model importance from meta-level features based on distributions. In contrast, other work uses Maximum Mean Discrepancy (MMD) as source weights, while adding a manifold regularization based on a smoothness assumption on the target classifier. A more sophisticated two-stage weighting methodology, based on distribution matching and conditional probability differences, has also been presented; however, that method does not utilize the valuable unlabeled data. Moreover, all of these methods assume that sources are equally reliable, and they share the limitation that inter-source relationships are ignored.
Multi-task learning: Unlike multi-source transfer learning, which focuses on finding a single hypothesis for the target task, multi-task learning tries to improve performance on all tasks, finding a hypothesis for each task by adaptively leveraging related tasks. Learning task relationships is crucial in multi-task learning problems, whether they are used as priors or learned adaptively. While this is similar to the inter-source relationships we utilize in this paper, multi-task learning involves neither measuring the proximity between a source and the target nor trading off proximity against reliability.
Active learning: At the other end of the spectrum, active learning has been proven effective both empirically and theoretically. A new definition of sample complexity has been proposed to show that active learning is asymptotically strictly better than passive learning. In recent years, there has been increasing interest in combining transfer learning and active learning to jointly address the issue of insufficient labeled data. A popular representative of this family is the work of Chattopadhyay et al., which proposes JO-TAL, a unified framework that jointly performs active learning and transfer learning by matching Hilbert space embeddings of distributions. A key benefit of these methods is that they address the cold-start problem of active learning by introducing a version-space prior learned from transfer learning. However, unlike our approach, these methods all assume that source data are richly labeled, and they perform active learning only in the target domain.
For simplicity, we consider a binary classification problem for each domain, but the methods generalize to multi-class problems and are also applicable to regression tasks. We consider the following multi-source transfer learning setting, common in the real world when one studies a novel target domain with extremely limited resources. We denote by $[N]$ the consecutive integers ranging from 1 to $N$. Suppose we are given $N$ auxiliary source domains, each containing both labeled and unlabeled data. Moreover, these sources typically have varying amounts of labeled data (and thus diverse reliabilities). Specifically, we denote by $D_i = L_i \cup U_i$ the dataset associated with the $i$-th source domain, $i \in [N]$. Here, $L_i = \{(x^i_j, y^i_j)\}_{j=1}^{m_i}$ is the labeled set of size $m_i$, where $x^i_j$ is the $j$-th labeled instance of the $i$-th source and $y^i_j$ is its corresponding true label. Similarly, $U_i$ is the set of unlabeled data, of size $u_i$. For the target domain, we assume that no labeled data is available but that we have sufficient unlabeled data, denoted $U_T$, whose instances lie in the same feature space as the sources. Furthermore, for the purpose of active learning, we assume that a uniform-cost oracle is available only in the source domains, and that the conditional probabilities $P(y \mid x)$ are at least somewhat similar across the source and target domains.
Definition 1. For a hypothesis space $\mathcal{H}$ over instance space $\mathcal{X}$, the symmetric difference hypothesis space $\mathcal{H}\Delta\mathcal{H}$ is the set of hypotheses defined as
$$\mathcal{H}\Delta\mathcal{H} = \{\, h(x) \oplus h'(x) : h, h' \in \mathcal{H} \,\},$$
where $\oplus$ is the XOR function. The $\mathcal{H}\Delta\mathcal{H}$-divergence between any two distributions $\mathcal{D}$ and $\mathcal{D}'$ is then defined as
$$d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}, \mathcal{D}') = 2 \sup_{A \in \mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}} \bigl|\, \Pr_{\mathcal{D}}(A) - \Pr_{\mathcal{D}'}(A) \,\bigr|,$$
where $\mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}$ is the set of measurable subsets of $\mathcal{X}$ that are the support of some hypothesis in $\mathcal{H}\Delta\mathcal{H}$.
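In practice this divergence cannot be computed exactly. A commonly used finite-sample proxy (a standard construction in the domain adaptation literature, not specific to this paper) trains a classifier to separate samples of the two distributions and converts its error $\hat{\epsilon}$ into a distance:

```latex
\hat{d}_{\mathcal{H}\Delta\mathcal{H}}\bigl(\hat{\mathcal{D}}, \hat{\mathcal{D}}'\bigr)
  \;\approx\; 2\,\bigl(1 - 2\,\hat{\epsilon}\bigr),
```

so that perfectly separable samples ($\hat{\epsilon} = 0$) give the maximal distance 2, while indistinguishable samples ($\hat{\epsilon} = 1/2$) give distance 0.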
Now, if we assume that all domains have the same amount of unlabeled data (a reasonable assumption, since unlabeled data is usually cheap to obtain), i.e. $u_1 = \cdots = u_N = u_T = u$, then we can show the following risk bound on the target domain.
Theorem 1. Let $\mathcal{H}$ be a hypothesis space of VC-dimension $d$, and let $\hat{d}_{\mathcal{H}\Delta\mathcal{H}}(U_i, U_T)$ be the empirical distributional distance between the $i$-th source and the target domain, induced by the symmetric difference hypothesis space. Then, for any $\delta \in (0,1)$ and any $h = \sum_{i \in [N]} \alpha_i h_i$ with $h_i \in \mathcal{H}$ and $\alpha \in \Delta^N$, the following holds with probability at least $1 - \delta$,
where $\epsilon(h)$ denotes the expected risk of $h$ in the corresponding domain, $m = \sum_{i \in [N]} m_i$ is the total number of labeled instances across all sources, $r_i = m_i / m$ is the ratio of labeled data in the $i$-th source, and $\lambda$ is the risk of the ideal multi-source hypothesis weighted by $\alpha$ and $r$.
The proof can be found in the supplementary material. By introducing a concentration factor $\gamma$, we replace the size of the labeled data of each individual source with the total size $m$ in the third line of Eq. (3), resulting in a tighter bound. Suppose that the hypothesis $h_i$ is learned using data from the $i$-th source; the bound then suggests using peers to evaluate the reliability of $h_i$, while the optimal weights should consider both the proximity and the reliability of each source. This inspired the following method.
In this paper, we propose formulating all source domains jointly, in a framework similar to multi-task learning. Analogously to the task relationships in multi-task learning, we learn inter-source relationships for the multi-source transfer learning problem by training a source relationship matrix $R$ as follows:
where $h_i$ is the classifier trained on the $i$-th source, $\epsilon_j(h_i)$ is the empirical error of $h_i$ measured on the $j$-th source, and $\sigma$ is a parameter controlling the spread of the error. In other words, $R$ is a matrix such that all entries on the diagonal are 0 and the $j$-th row, namely $R_j \in \Delta^N$, is a distribution over all sources, where $\Delta^N$ denotes the probability simplex. Note that we do not require $R$ to be symmetric, due to the asymmetric nature of information transferability: while classifiers trained on a more reliable source can be transferred well to a less reliable source, the reverse is not guaranteed.
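The construction above can be sketched as follows. This is a minimal illustration, assuming an exponential weighting of cross-source errors with row normalization; the function name and the exact kernel of the error are our assumptions, not the paper's exact formula.

```python
import numpy as np

def source_relationship_matrix(errors, sigma=1.0):
    """Build an inter-source relationship matrix R (illustrative sketch).

    errors[i, j] holds the empirical error of the classifier trained on
    source i, evaluated on the labeled data of source j. Off-diagonal
    entries are weighted as exp(-error / sigma), the diagonal is zeroed
    (a source is not its own peer), and each row is normalized into a
    distribution over the remaining sources.
    """
    R = np.exp(-np.asarray(errors, dtype=float) / sigma)
    np.fill_diagonal(R, 0.0)            # zero diagonal: no self-peering
    R /= R.sum(axis=1, keepdims=True)   # each row lies on the simplex
    return R
```

Note that `R` is generally asymmetric, mirroring the asymmetry of transferability discussed above: a low error of $h_i$ on source $j$ does not imply a low error of $h_j$ on source $i$.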
A key benefit of the matrix $R$ is that it measures source reliabilities directly. The bound in Eq. (3) suggests that this direct measurement of reliability gives the algorithm extra confidence toward a lower generalization error. Intuitively, if a classifier $h_i$ trained on the $i$-th source has a low empirical error on the $j$-th source, and the distributional divergence (such as the $\mathcal{H}\Delta\mathcal{H}$-divergence) between the $j$-th source and the target is small, then $h_i$ should have a low error on the target domain as well. This is shown by the second line of Eq. (3).
We therefore parametrize the source importance weights, considering both source proximity and reliability, as
$$w = \bigl((1-\gamma)\, I + \gamma\, R^{\top}\bigr)\, p,$$
where $I$ is the identity matrix, $p$ is a vector measuring pairwise source-target proximities, $\gamma \in [0,1]$ is a scalar, and $R$ is the source relationship matrix. The concentration factor $\gamma$ introduces an additional degree of freedom for quantifying the trade-off between proximity and reliability; setting $\gamma = 0$ amounts to weighting sources based on proximity only. In the next section, we give an effective heuristic for specifying $\gamma$ in our experiments. The proximity vector $p$ may be manually specified according to domain knowledge or estimated from data. In this paper, we adapt the Maximum Mean Discrepancy (MMD) statistic [11, 8] to estimate the distribution discrepancy. It measures the difference between the means of two distributions after mapping them into a Reproducing Kernel Hilbert Space (RKHS), avoiding complex density estimation. Specifically, for the $i$-th source, the re-weighted MMD can be written as:
where $\beta^i$ are the weights of the $i$-th source's aggregate data and $\phi(\cdot)$ is a feature map into the RKHS. By applying the kernel trick, the optimization in Eq. (6) can be solved efficiently as a quadratic programming problem using interior-point methods.
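The kernelized optimization can be sketched as below. This is a standard Kernel Mean Matching formulation (following [11, 8]), not the paper's exact objective; for compactness it uses SciPy's SLSQP solver in place of an interior-point QP solver, and the bandwidth and box bound `B` are illustrative defaults.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, bw):
    """Gaussian (RBF) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def kmm_weights(X_src, X_tgt, bw=1.0, B=10.0):
    """Re-weight source points so their kernel mean matches the target's.

    Minimizes 0.5 b'Kb - kappa'b subject to 0 <= b <= B and sum(b) = n,
    where K is the source Gram matrix and kappa aggregates source-target
    kernel evaluations (the standard KMM quadratic program).
    """
    n, m = len(X_src), len(X_tgt)
    K = rbf_kernel(X_src, X_src, bw)
    kappa = (n / m) * rbf_kernel(X_src, X_tgt, bw).sum(axis=1)
    obj = lambda b: 0.5 * b @ K @ b - kappa @ b
    grad = lambda b: K @ b - kappa
    cons = ({'type': 'eq', 'fun': lambda b: b.sum() - n},)
    res = minimize(obj, np.ones(n), jac=grad, method='SLSQP',
                   bounds=[(0.0, B)] * n, constraints=cons)
    return res.x
```

Intuitively, source points lying in regions where the target density is relatively high receive larger weights, which is exactly the representativeness signal reused later in the active learning stage.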
For label-scarce sources, the ensemble approach often fails to utilize their information effectively. This unsatisfactory performance leads us to propose using confidence and source relationships to transfer knowledge. For many standard classification methods, such as SVMs and perceptrons, we can use the distance to the decision boundary to measure the confidence of a classifier on an example, as in previous work. If the confidence is low, we allow the classifier to query its peers on this specific example, exploiting the source relationship matrix $R$. Algorithm 1 summarizes our proposed method.
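The peer-query step at prediction time might look like the following sketch. The confidence threshold `tau` and the exact peer-vote rule are our assumptions for illustration; the paper's Algorithm 1 may combine margins and peers differently.

```python
import numpy as np

def peer_weighted_predict(x, classifiers, weights, R, tau=0.5):
    """Ensemble prediction with peer querying (illustrative sketch).

    classifiers: list of margin functions f_i(x), e.g. SVM decision values.
    weights:     combined source importance weights (Eq. (5)).
    R:           source relationship matrix; row i weights source i's peers.
    When |f_i(x)| < tau, source i's vote is replaced by a peer vote
    aggregated through row i of R.
    """
    margins = np.array([f(x) for f in classifiers])
    own_votes = np.sign(margins)
    peer_votes = np.sign(R @ own_votes)          # ask the peers
    votes = np.where(np.abs(margins) >= tau, own_votes, peer_votes)
    return np.sign(weights @ votes)
```

This way a label-scarce source still contributes: when its own classifier is unsure, it forwards the opinion of the sources it is most related to.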
When an oracle is available in the source domains, we can acquire more labels for source data, especially for less reliable but target-relevant sources. This differs from traditional active learning, which tries to improve accuracy within the target domain. The previously proposed TLAS algorithm performs active learning on the source domain in single-source transfer learning by solving a biconvex optimization problem. However, its solution requires solving two quadratic programming problems alternately at each iteration and is therefore computationally expensive, especially for large-scale multi-source problems. In this paper, we propose an efficient two-stage active learning framework. The pseudo-code is given in Algorithm 2.
In the first stage, the algorithm selects the source domain to query in an adaptive manner. The idea is that while we want to explore label-scarce sources for a high marginal gain in performance, we also want to exploit label-rich sources for reliable queries. Eq. (3) suggests that the learner should prefer a uniform labeled ratio across sources. Therefore, we draw a Bernoulli random variable $z$ whose probability is set by the Kullback-Leibler (KL) divergence between the current ratios of source labeled data and the uniform distribution. If $z = 1$, the algorithm explores sources that contain less labeled data. If $z = 0$, the algorithm exploits sources, as demonstrated in line 12 of Algorithm 2. Notice that the combined weights proposed in Eq. (5) play a crucial role here by providing a measurement over sources with unequal reliabilities.
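The first stage can be sketched as follows. The mapping from the KL divergence to a probability (clipping to $[0,1]$) and the explore rule (picking the source with the smallest labeled ratio) are our assumptions; Algorithm 2 may use a different scaling.

```python
import numpy as np

def pick_source(labeled_counts, weights, rng):
    """Stage-1 source selection (illustrative sketch of AMSAT).

    labeled_counts: current number of labeled examples per source.
    weights:        combined proximity/reliability weights (Eq. (5)).
    Draw z ~ Bernoulli(p), where p is the KL divergence between the
    normalized labeled counts and the uniform distribution, clipped
    to [0, 1]. Explore (z = 1) the most label-scarce source, else
    exploit the source with the largest combined weight.
    """
    counts = np.asarray(labeled_counts, dtype=float)
    ratios = counts / counts.sum()
    N = len(ratios)
    kl = float(np.sum(ratios * np.log(np.clip(ratios, 1e-12, None) * N)))
    p = min(kl, 1.0)
    if rng.random() < p:                 # explore: uneven labeling
        return int(np.argmin(ratios))
    return int(np.argmax(weights))       # exploit: most reliable source
```

When the labeled ratios are already uniform, the KL term vanishes and the rule always exploits, which matches the bound's preference for balanced labeled ratios.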
In the second stage, the algorithm queries the most informative instance within the selected source domain. Nguyen and Smeulders propose a Density Weighted Uncertainty Sampling criterion, which picks an instance that is close to the decision boundary and lies in a denser neighborhood. In our setting, we propose to combine the distribution-matching weights of Eq. (6) with uncertainty sampling to form the following selection criterion:
This criterion selects instances that are representative in both the source and target domains while also taking uncertainty into consideration. Notice that we solve Eq. (6) only once and store the values of $\beta$. As shown in prior work, such an approach is as efficient as the base informativeness measure, i.e. uncertainty sampling. Therefore, the proposed method achieves efficiency, representativeness, and minimal information overlap.
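The second stage can be sketched as below. The exact functional form of the uncertainty term is our assumption (Eq. (8) may scale the two factors differently); the precomputed KMM weights `beta` play the representativeness role described above.

```python
import numpy as np

def select_query(margins, beta):
    """Stage-2 instance selection (illustrative sketch of Eq. (8)).

    margins: classifier decision values on the candidate pool (distance
             to the boundary; 0 = maximally uncertain).
    beta:    precomputed KMM weights from Eq. (6), measuring how
             representative each candidate is of source and target.
    Returns the index of the highest-scoring candidate.
    """
    uncertainty = 1.0 / (1.0 + np.abs(np.asarray(margins)))
    return int(np.argmax(np.asarray(beta) * uncertainty))
```

Because `beta` is computed once and stored, each query costs no more than plain uncertainty sampling over the pool.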
The following experimental study has two goals: (i) to evaluate the performance of PW-MSTL with sources of diverse reliabilities, and (ii) to evaluate the effectiveness of AMSAT in constructing more reliable classifiers via active learning. Due to space limitations, not all results are presented, but the omitted results show similar patterns. Unless otherwise specified, all model parameters are chosen via 5-fold cross validation, and all results are averaged over 30 random repetitions.
Synthetic dataset. We generate synthetic data for 5 source domains and 1 target domain. The samples of the $k$-th source are drawn from a Gaussian distribution with mean $\mu_T + t_k \nu_k$, where $\mu_T$ is the mean of the target domain, $\nu_k$ is a random fluctuation vector, and $t_k$ is a variable controlling the proximity between the $k$-th source and the target (higher $t_k$ indicates lower proximity). We then consider a labeling function $y = \operatorname{sign}(\langle w_b + \eta_k, x \rangle + \varepsilon)$, where $w_b$ is a fixed base vector, $\eta_k$ is a random fluctuation vector, and $\varepsilon$ is a zero-mean Gaussian noise term. We set the fluctuations to small values, as we assume the labeling functions are similar. Using different $t_k$ values, we generate 50 positive points and 50 negative points as training data for each source domain, and an additional 100 balanced testing samples for the target domain.
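A generation procedure in this spirit can be sketched as follows. All scales (the proximity values, fluctuation magnitudes, and noise level) are illustrative guesses, not the paper's exact settings, and for brevity the sketch samples labels directly rather than enforcing an exact 50/50 class split.

```python
import numpy as np

def make_synthetic(n_src=5, dim=10, n_per_domain=100, seed=0):
    """Generate Gaussian source domains around a target mean (sketch).

    Source k is centered at mu_T + t_k * nu_k, with t_k controlling
    proximity (larger t_k = farther from the target). Labels come from
    a shared base vector w_b plus a small per-domain fluctuation eta_k,
    with additive Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    mu_T = rng.normal(0.0, 1.0, dim)          # target mean
    w_b = rng.normal(0.0, 1.0, dim)           # shared labeling direction
    domains = []
    for k in range(n_src):
        t_k = 0.2 * (k + 1)                   # proximity control
        mu_k = mu_T + t_k * rng.normal(0.0, 1.0, dim)
        w_k = w_b + 0.05 * rng.normal(0.0, 1.0, dim)
        X = rng.normal(mu_k, 1.0, (n_per_domain, dim))
        noise = 0.1 * rng.normal(0.0, 1.0, n_per_domain)
        y = np.sign(X @ w_k + noise)
        domains.append((X, y))
    return mu_T, domains
```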
Spam Detection. We use the Task B challenge of the spam detection dataset from the ECML/PKDD 2006 Discovery Challenge (http://ecmlpkdd2006.org/challenge.html). It consists of data from the inboxes of 15 users, and each user forms a single domain with 200 spam (+1) examples and 200 non-spam (−1) examples. Each example is a high-dimensional vector of word-frequency features, and we reduce the dimension to 200 using the latent semantic analysis (LSA) method. Since some spam types are shared among users while others are not, these domains form a multi-source transfer learning problem when we try to utilize data from other users to build a personalized spam filter for a target user.
Sentiment Analysis. This dataset (http://www.cs.jhu.edu/~mdredze/datasets/sentiment) contains product reviews from 24 domains on Amazon. We consider each domain as a binary classification task by treating reviews with rating higher than 3 as positive (+1) and reviews with rating lower than 3 as negative (−1), while reviews with rating 3 were discarded as ambiguous. We pick the 10 domains that each contain more than 2000 samples, and for each round of experiments we randomly draw 1000 positive reviews and 1000 negative reviews per domain. As with the previous dataset, each review is a high-dimensional feature vector, and we reduce the dimension to 200 using the same method.
For each set of experiments, we have 600 examples (100 examples per domain) for Synthetic, 6000 emails (400 emails per user) for Spam, and 20000 reviews (2000 reviews per domain) for Sentiment. We set one domain as the target and the rest as sources. For each source domain, we randomly divide the data into a labeled set and an unlabeled set according to a labeled-data fraction. For the target domain, we randomly divide the data into two parts, one for testing and one serving as unlabeled training data (with the exception of Synthetic, for which we generate extra testing data). Note that we can set different labeled-data fractions for the source domains to model diverse reliabilities.
Competing Methods. To evaluate the performance of our approach, we compare the proposed PW-MSTL with five other methods: one single-source method, one aggregate method, and three ensemble methods. Specifically, Kernel Mean Matching (KMM) is a conventional single-source method based on distribution matching; we perform KMM for each single source and report the best performance (note that this is impractical in general, as it requires an oracle to know which classifier is best). Kernel Mean Matching-Aggregate (KMM-A) is the aggregate method, which performs KMM on the combined training data of all sources. For the ensemble approach, Adaptive SVM (A-SVM) and Domain Adaptation Machine (DAM) are two widely adopted multi-source methods. Finally, we also compare with our own baseline, which is similar to DAM but directly uses Eq. (5) as model weights. We also compared with Transfer Component Analysis (TCA), but the result is omitted due to its similar performance to KMM.
Setup. For all methods, we use SVM as the base model with a fixed regularization parameter. For the competing methods, we mainly follow the standard model-selection procedures described in their respective papers. For fair comparison, we use the same Gaussian kernel, with the bandwidth set to the median pairwise distance of the training data according to the median heuristic; using other kernels yields similar results. For the ensemble methods, we set the proximity weight as a function of the re-weighted MMD, with parameters tuned for each dataset to control the spread of the MMD, as described in Section 4.2. For both PW-MSTL and our baseline, we fix $\gamma$ across all experiments and use the 0-1 loss to compute the relation matrix $R$ (see Figure 1 for results with varying $\gamma$).
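The bandwidth rule mentioned above is the standard median heuristic and can be computed directly:

```python
import numpy as np

def median_bandwidth(X):
    """Median heuristic: the median of pairwise Euclidean distances
    among the training points, used as the Gaussian kernel bandwidth."""
    X = np.asarray(X, dtype=float)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)   # distinct unordered pairs only
    return float(np.median(d[iu]))
```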
Results and Discussion. Table 1 shows the classification accuracy of the different methods on the various datasets when sources contain different labeled fractions, randomly chosen from a fixed set, while Table 2 shows the results when sources have the same amount of labeled data. (Due to space limits, we show results of only three randomly chosen cases for Spam, since the overall results are similar.) In each column, the method that outperforms all others with statistical significance is highlighted and underlined (we omit standard deviations due to space). We can observe that our proposed PW-MSTL method outperforms the other methods in most cases. When source reliabilities are uneven, we obtain a significant improvement in test-set accuracy. Comparing results across the two tables, we also observe that the gap between our methods and the others diminishes as the source domains acquire more labeled data (i.e., become more reliable).
Note that we built two scenarios for the Synthetic dataset in Table 1. We control the proximity variable and the labeled fraction such that similar sources also contain more labeled data in case 1, while this is reversed in case 2. As a result, we observe that prior methods focusing on proximity obtain an accuracy boost under case 2 compared to case 1, while our methods are consistent across the two. We also note that A-SVM and DAM perform poorly on dvd and video even when sources are balanced. This is because the distribution divergence between these two domains is very small, causing the prior methods to fail to utilize the other, less related sources effectively. We plot an incremental performance comparison on dvd in Figure 1(a), showing that DAM only slowly catches up with the single-source method as the labeled fractions increase.
Finally, we study the sensitivity of $\gamma$. Figures 1(b) and 1(c) show the variation in accuracy for some cases from Tables 1 and 2 with $\gamma$ varying over [0, 1]. We observe a similar pattern in both cases: accuracy first increases and then decreases as $\gamma$ grows, with the drop in performance being smoother in Figure 1(c). This confirms the motivation for combining proximity and reliability, as well as the theoretical result established in this paper. Results on the other datasets are similar, but we omit them from the graph due to different scales.
Baselines. To the best of our knowledge, there is no existing study that can be directly applied to our setting. Therefore, to evaluate the performance of our proposed AMSAT algorithm, we compare to the following standard baselines:
(1) Random: selects the examples to query randomly from all source domains. This is equivalent to passive learning.
(2) Uncertainty: uncertainty sampling, a popular active learning strategy that picks the examples that are most uncertain, i.e., closest to the decision boundary.
(3) Representative: another widely used method, which chooses the examples that are most representative according to the distribution matching in Eq. (6).
(4) Proximity: selects the source examples that are most similar to the target domain. Note that this method usually queries examples in the very few most similar sources.
Setup. Unlike traditional active learning, where the domains for active selection are label-scarce, we initialize our source domains with varying amounts of labeled data, so that some sources are more richly labeled than others. For each set of experiments, we initialize each source domain with a labeled fraction randomly chosen from a fixed set; the same data partition is used for all comparing methods. After each query, we run a multi-source transfer learning algorithm and record the accuracy on the test target data. The data partition is repeated randomly 30 times and average results are reported. For all of our experiments, we set the query budget to 10% of the total number of source examples in the dataset. For fair comparison, the selected examples are evaluated using the same transfer learning method.
Results and Discussion. The performance curves, evaluated using PW-MSTL as the number of queries increases, are plotted in Figure 2. Due to space limitations, only a small fraction of the results are presented, and we show results only on Sentiment here, because the other datasets are less interesting due to diminishing gains. It should be noted, however, that our proposed AMSAT method outperforms the other methods with statistical significance in a two-tailed test on all datasets. Figure 2 shows results with initial source labeled fractions randomly chosen from the same set as the experiments in Table 1. We can observe that AMSAT consistently outperforms the baselines at almost all stages. The only exception is that Proximity sampling performs better at the early stage when there exists a source domain that is particularly close to the target domain, as in the case of dvd in Figure 2(b). Figures 3(a) and 3(b) show method performance under a different data initialization on the kitchen domain. While they show patterns similar to Figure 2(d) (AMSAT consistently outperforming the baselines), we observe that AMSAT has comparable ending points in the two plots due to diminishing gains. This suggests that performing active learning when sources have very uneven reliabilities is hard.
For the baselines, we observe that they are quite unstable when evaluated using PW-MSTL. For example, in some domains Uncertainty sampling performs better than Random sampling as in Figure 2(d) while it is not the case in other domains such as in Figure 2(a). When evaluated using other transfer learning methods such as DAM, often other baselines outperform Random sampling as expected. To show that combining PW-MSTL and AMSAT is superior, we compare several combinations of transfer learning and active learning methods. A typical example is shown in Figure 3(c). Notice that the curves with the same active learning method are evaluated using different transfer learning methods on the same queried data. We observe that while AMSAT still performs better than baselines when evaluated using DAM, the performance gap is larger when evaluated using PW-MSTL. The combination of PW-MSTL and AMSAT significantly outperforms the rest, showing a synergy between the proposed methods.
Finally, we evaluate the effectiveness of both picking the source, as in line 14 of Algorithm 2, and picking the example, according to Eq. (8). We add a baseline, AMSAT-US, which behaves exactly like Algorithm 2 except that it picks the example by uncertainty sampling in line 15; the added baseline thus uses the proposed source-picking strategy but not the example-picking strategy. Figure 3(d) compares the four methods. We observe that (i) AMSAT-US performs better than Uncertainty, indicating the effectiveness of the proposed source-picking strategy, and (ii) AMSAT outperforms AMSAT-US, showing the effectiveness of the example-picking strategy.
We study a new research problem of transfer learning with multiple sources exhibiting reliability divergences, and explore two related tasks. The contributions of this paper are: (1) we propose a novel peer-weighted multi-source transfer learning (PW-MSTL) method that makes robust predictions in the described scenario, (2) we study the problem of active learning on source domains and propose an adaptive multi-source active transfer (AMSAT) framework to improve source reliability and performance in the target domain, and (3) we demonstrate the efficacy of utilizing inter-source relationships in the multi-source transfer learning problem. Experiments on both synthetic and real world datasets demonstrated the effectiveness of our methods.
Pan, S.J., Tsang, I.W., Kwok, J.T., Yang, Q.: Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks 22(2), 199–210 (2011)
Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. Journal of Machine Learning Research 2(Nov), 45–66 (2001)