Towards more Reliable Transfer Learning

07/06/2018
by   Zirui Wang, et al.
Carnegie Mellon University

Multi-source transfer learning has been proven effective when within-target labeled data is scarce. Previous work focuses primarily on exploiting domain similarities and assumes that source domains are richly or at least comparably labeled. This strong assumption rarely holds in practice; this paper relaxes it and addresses two challenges posed by sources of diverse labeling volume and diverse reliability. The first challenge is combining domain similarity and source reliability; we address it with a new transfer learning method that exploits both source-target similarities and inter-source relationships. The second challenge is pool-based active learning where the oracle is available only in the source domains; we address it with an integrated active transfer learning framework that incorporates distribution matching and uncertainty sampling. Extensive experiments on a synthetic dataset and two real-world datasets clearly demonstrate the superiority of our proposed methods over several baselines, including state-of-the-art transfer learning methods.



1 Introduction

Traditional supervised machine learning methods share the common assumption that training data and test data are drawn from the same underlying distribution. Typically, they also require sufficient labeled training instances to construct accurate models. In practice, however, one can often obtain only limited labeled training instances. Inspired by human beings' ability to transfer previously learned knowledge to a related task, transfer learning [23] addresses the challenge of data scarcity in the target domain by utilizing labeled data from other related source domain(s). Plenty of research has been done on the single-source setting [25, 22, 35], and it has been shown that transfer learning has a wide range of applications in areas such as sentiment analysis [3, 21], computer vision [15], cross-lingual natural language processing [17], and urban computing [32].

In recent years, new studies have contributed to a more realistic transfer learning setting with multiple source domains, both for classification problems [33, 16, 7, 28] and regression problems [24, 31]. These approaches try to capture the diverse source-target similarities, either by measuring the distribution differences between each source and the target or by re-weighting source instances, so that only the source knowledge likely to be relevant to the target domain is transferred. However, these works assume that all sources are equally reliable; in other words, that all sources have labeled data of the same or comparable quantity and quality. In real-world applications, however, this assumption rarely holds. For instance, recent sources typically contain less labeled data than long-established ones, and narrower sources contain less data than broader ones. Moreover, some sources may contain more labeling noise than others. Therefore, it is common for source tasks to exhibit diverse reliabilities; yet, to the best of our knowledge, little work has been reported in the literature to compensate for source reliability divergences. Ideally, we strive to find sources that are both relevant and reliable, as a compromise in either aspect can hurt performance. When this is infeasible, one has to include some less reliable sources and carefully weigh the trade-off. There are two reasons why including a source that is not richly labeled is desirable. First, such a source may be very informative if it is closely related to the target domain. For example, in low-resource language processing, a language that is very similar to the target language is usually also relatively low-resource, but it would still be used as an important source due to its proximity. Second, while labeled data may be scarce, unlabeled data is often easier to obtain, and one can acquire labels for unlabeled data from domain experts via active learning [26], assuming a labeling budget is available.

Active learning addresses the problem of data scarcity differently than transfer learning: it queries an oracle for new labels on the most informative unlabeled data. A fruitful line of prior work has developed various active learning strategies [20, 14, 18]. Recently, the idea of combining transfer learning and active learning has attracted increasing attention [6, 30, 13]. Most existing work assumes that there is plenty of labeled data in the source domain(s) and that one can query labels for the target domain from an oracle. However, this assumption is not always true. Since transfer learning is mainly applied in target domains with restricted access to data, such as sensitive private domains or novel domains that lack the human knowledge needed to generate sufficient labeled data, there is often little or no readily available oracle in the target domain. For instance, in an influenza diagnosis task, transfer learning can be applied to assist in diagnosing a new flu strain (target domain) by exploiting historical data of related flu strains (source domains), but there may be little expertise regarding the new strain. It is, however, often possible to query labels for a selected set of unlabeled data in the source domains, as these may represent previous, better-studied flu strains. Task similarity, corresponding to flu-strain diagnosis or treatment in this instance, may be established via commonalities in symptoms exhibited by patients with the new or old flu strains, or via analysis of the various influenza genomes. To the best of our knowledge, active learning for multi-source transfer learning has not been deeply studied.

In this paper, we focus on a novel research problem: transfer learning with multiple sources exhibiting diverse reliabilities and different relations to the target task. We study two related tasks. The first is how to construct a robust model that effectively transfers knowledge from unevenly reliable sources, including those with little labeled data. The second is how to apply active learning on multiple sources to stimulate better transfer, especially when sources differ in both label quantity and quality. Note that these two tasks are related but can also arise independently. For instance, in the low-resource language problem mentioned above, the first task is applicable but not the second, because we may have an oracle for neither the sources nor the target, as both may be extremely low-resource. On the other hand, both tasks are relevant to the influenza example.

For the first task, inspired by [18], we propose a peer-weighted multi-source transfer learning method (PW-MSTL) that jointly measures source proximity and reliability. The algorithm utilizes inter-source relationships, which have received little prior study, in two ways: (i) it combines proximity coefficients based on distribution differences with reliability coefficients measured using peers (we define the peers of a source as the other sources, weighted by inter-source relationships) to learn each source's relevance to the target model, and (ii) when a source is not confident in its prediction for a test instance, it is allowed to query its peers. By doing so, the proposed method allows knowledge transfer among sources and effectively utilizes label-scarce sources. For the second task, we propose an active learning framework, called adaptive multi-source active transfer (AMSAT), that builds on the ideas of Kernel Mean Matching (KMM) [11, 8] and uncertainty sampling [29] to select unlabeled instances that are the most representative while avoiding information redundancy. Experimental results, shown later, demonstrate that the combination of PW-MSTL and AMSAT significantly outperforms other competing methods. The proposed methods are generic and can be generalized to other learning tasks.

2 Related Work

Transfer learning: Transfer learning aims at utilizing source domain knowledge to construct a model with low generalization error in the target domain. Transfer learning with multiple sources is challenging because of distribution discrepancies and complex inter-source dependencies. Most existing work focuses on capturing the diverse domain similarities. Ensemble approaches are widely used because of their stability and their ability to measure fine-grained similarities explicitly. In [33], adaptive SVM (A-SVM) is proposed to adapt several trained source SVMs and learn model importance from meta-level features based on distributions. In contrast, [7] uses Maximum Mean Discrepancy (MMD) [8] to set source weights, while adding a manifold regularization term based on a smoothness assumption on the target classifier. A more sophisticated two-stage weighting methodology based on distribution matching and conditional probability differences is presented in [28]. However, their method does not utilize the valuable unlabeled data. Moreover, all of these methods assume that all sources are equally reliable, and they share the limitation that inter-source relationships are ignored.

Multi-task learning: Unlike multi-source transfer learning, which focuses on finding a single hypothesis for the target task, multi-task learning [36] tries to improve performance on all tasks, finding one hypothesis per task by adaptively leveraging related tasks. Learning task relationships is crucial in multi-task learning, whether they are used as priors [5] or learned adaptively [19]. While this is similar to the inter-source relationships we utilize in this paper, multi-task learning involves neither measuring the proximity between a source and the target nor trading off proximity against reliability.

Active learning: At the other end of the spectrum, active learning has been proven effective both empirically and theoretically. A new definition of sample complexity is proposed in [1] to show that active learning is asymptotically strictly better than passive learning. In recent years, there has been increasing interest in combining transfer learning and active learning to jointly address the issue of insufficient labeled data. A popular representative of this family is the work by Chattopadhyay et al. [6], which proposes the JO-TAL method, a unified framework that jointly performs active learning and transfer learning by matching Hilbert space embeddings of distributions. A key benefit of these methods is that they address the cold-start problem of active learning by introducing a version-space prior learned from transfer learning [34]. However, unlike our approach, these methods all assume that source data are richly labeled, and they only perform active learning in the target domain.

3 Problem Formulation

For simplicity, we consider a binary classification problem for each domain, but the methods generalize to multi-class problems and are also applicable to regression tasks. We consider the following multi-source transfer learning setting, common in the real world when one studies a novel target domain with extremely limited resources. We denote by [N] the consecutive integers ranging from 1 to N. Suppose we are given $K$ auxiliary source domains, each containing both labeled and unlabeled data. Moreover, these sources typically have varying amounts of labeled data (and thus diverse reliabilities). Specifically, we denote by $D_i = L_i \cup U_i$ the dataset associated with the $i$-th source domain. Here, $L_i = \{(\mathbf{x}^i_j, y^i_j)\}_{j=1}^{m_i}$ is the labeled set of size $m_i$, where $\mathbf{x}^i_j$ is the $j$-th labeled instance of the $i$-th source and $y^i_j$ is its corresponding true label. Similarly, $U_i$ is the set of unlabeled data, of size $n_i$. For the target domain, we assume that no labeled data is available but that we have sufficient unlabeled data, denoted $U_T$, whose instances lie in the same feature space as the sources. Furthermore, for the purpose of active learning, we assume that a uniform-cost oracle is available only in the source domains, and that the conditional probabilities $P(y \mid \mathbf{x})$ are at least somewhat similar among the source and target domains.
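To make the notation concrete, below is a minimal sketch of this setting as Python data structures; the class and field names are ours, introduced purely for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SourceDomain:
    """One of the K auxiliary source domains: a (possibly small) labeled
    set plus an unlabeled pool; an oracle can be queried here."""
    X_labeled: np.ndarray    # shape (m_i, d)
    y_labeled: np.ndarray    # shape (m_i,), labels in {-1, +1}
    X_unlabeled: np.ndarray  # shape (n_i, d)

@dataclass
class TargetDomain:
    """The target domain: unlabeled data only, same feature space."""
    X_unlabeled: np.ndarray  # shape (n_T, d)
```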

4 Proposed Approach

4.1 Motivation

We first present the theoretical motivation for our methods. We analyze the expected target-domain risk in the multi-source setting, making use of the theory of domain adaptation developed in [2, 4].

Definition 1 For a hypothesis space $\mathcal{H}$ for instance space $\mathcal{X}$, the symmetric difference hypothesis space $\mathcal{H}\Delta\mathcal{H}$ is the set of hypotheses defined as

\[ \mathcal{H}\Delta\mathcal{H} = \left\{ h(\mathbf{x}) \oplus h'(\mathbf{x}) : h, h' \in \mathcal{H} \right\}, \tag{1} \]

where $\oplus$ is the XOR function. Then the $\mathcal{H}\Delta\mathcal{H}$-divergence between any two distributions $\mathcal{D}$ and $\mathcal{D}'$ is defined as

\[ d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}, \mathcal{D}') = 2 \sup_{A \in \mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}} \left| \Pr_{\mathcal{D}}(A) - \Pr_{\mathcal{D}'}(A) \right|, \tag{2} \]

where $\mathcal{A}_{\mathcal{H}\Delta\mathcal{H}}$ is the set of measurable subsets of $\mathcal{X}$ that are the support of some hypothesis in $\mathcal{H}\Delta\mathcal{H}$.

Now, if we assume that all domains have the same amount of unlabeled data (a reasonable assumption, since unlabeled data are usually cheap to obtain), i.e. $n_1 = \cdots = n_K = n_T$, then we can show the following risk bound on the target domain.

Theorem 1 Let $\mathcal{H}$ be a hypothesis space of VC-dimension $d$ and $\hat{d}_{\mathcal{H}\Delta\mathcal{H}}(S_i, T)$ be the empirical distributional distance between the $i$-th source and the target domain, induced by the symmetric difference hypothesis space. Then, for any $\delta \in (0, 1)$ and any $\boldsymbol{\alpha} \in \mathbb{R}^K$ where $\alpha_i \ge 0$ and $\sum_{i=1}^{K} \alpha_i = 1$, the following holds with probability at least $1 - \delta$,

(3)

where $\epsilon(h)$ is the expected risk of $h$ in the corresponding domain, $M = \sum_{i=1}^{K} m_i$ is the sum of the labeled sizes of all sources, $\beta_i = m_i / M$ is the ratio of labeled data in the $i$-th source, and $\lambda_{\boldsymbol{\alpha}}$ is the risk of the ideal multi-source hypothesis weighted by $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$.

The proof can be found in the supplementary material. By introducing a concentration factor $\gamma$, we replace the labeled size of each source with the total labeled size in the third line of Eq. (3), resulting in a tighter bound. Suppose that the hypothesis $h_i$ is learned using data from the $i$-th source; then the bound suggests using peers to evaluate the reliability of $h_i$, while the optimal weights should account for both the proximity and the reliability of each source. This inspired the following method.

4.2 Peer-weighted Multi-source Transfer Learning

1:  Input: $\{D_i\}_{i=1}^{K}$: source data; $U_T$: target data; $\gamma$: concentration factor; $\tau$: confidence tolerance; $n$: test data size;
2:  for $i \in [K]$ do
3:     Compute $\boldsymbol{\beta}^i$ by solving (6).
4:     Train a classifier $h_i$ on the weighted $L_i$.
5:  end for
6:  Compute $\mathbf{v}$ and $W$ as explained in Section 4.2.
7:  Compute $\boldsymbol{\alpha}$ as in (5).
8:  for $t \in [n]$ do
9:     Observe testing example $\mathbf{x}_t$.
10:    for $i \in [K]$ do
11:       if the confidence of $h_i$ on $\mathbf{x}_t$ is at least $\tau$ then
12:          Compute the prediction of $h_i$ on $\mathbf{x}_t$.
13:       else
14:          Compute the prediction of source $i$ by querying its peers, weighted by $W_i$.
15:       end if
16:    end for
17:    Predict by combining the source predictions with weights $\boldsymbol{\alpha}$.
18:  end for
Algorithm 1 PW-MSTL

In this paper, we propose formulating all source domains jointly, in a framework similar to multi-task learning. Analogous to the task relationships in multi-task learning, we learn the inter-source relationships for the multi-source transfer learning problem by training a source relationship matrix $W \in \mathbb{R}^{K \times K}$ as follows:

\[ W_{ij} = \frac{\exp\left(-\sigma\, \hat{\epsilon}_i(h_j)\right)}{\sum_{k \neq i} \exp\left(-\sigma\, \hat{\epsilon}_i(h_k)\right)} \;\; (j \neq i), \qquad W_{ii} = 0, \tag{4} \]

where $h_j$ is the classifier trained on the $j$-th source, $\hat{\epsilon}_i(h_j)$ is the empirical error of $h_j$ measured on the $i$-th source, and $\sigma$ is a parameter that controls the spread of the error. In other words, $W$ is a matrix such that all entries on the diagonal are 0 and the $i$-th row, namely $W_i$, is a distribution over all sources, i.e. $W_i \in \Delta^{K-1}$, where $\Delta^{K-1}$ denotes the probability simplex. Note that we do not require $W$ to be symmetric, due to the asymmetric nature of information transferability: while classifiers trained on a more reliable source can transfer well to a less reliable source, the reverse is not guaranteed to be true.
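As an illustration, here is a minimal sketch of computing $W$, assuming the exponential (softmax-style) weighting of peer errors in Eq. (4); the function name and the row-normalization details are ours.

```python
import numpy as np

def source_relationship_matrix(peer_errors: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Sketch of the source relationship matrix W of Eq. (4).

    peer_errors[i, j] is the empirical error of classifier h_j measured
    on the labeled data of source i. Each row of W is a distribution
    over the *other* sources; the diagonal is fixed to 0.
    """
    W = np.exp(-sigma * peer_errors)    # lower peer error -> higher weight
    np.fill_diagonal(W, 0.0)            # a source is not its own peer
    W /= W.sum(axis=1, keepdims=True)   # normalize each row onto the simplex
    return W
```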

A key benefit of the matrix $W$ is that it measures source reliabilities directly. The bound in Eq. (3) suggests that this direct measurement of reliability gives the algorithm extra confidence towards a lower generalization error. Intuitively, if a classifier $h_j$ trained on the $j$-th source has a low empirical error on the $i$-th source, and the distributional divergence (such as the $\mathcal{H}\Delta\mathcal{H}$-divergence) between the $i$-th source and the target is small, then $h_j$ should have a low error on the target domain as well. This is reflected in the second line of Eq. (3).

We therefore parametrize the source importance weights $\boldsymbol{\alpha}$, considering both source proximity and reliability, as follows:

\[ \boldsymbol{\alpha} \propto \left( \gamma I + (1 - \gamma)\, W \right) \mathbf{v}, \tag{5} \]

where $I$ is the identity matrix, $\mathbf{v}$ is a vector measuring pairwise source-target proximities, $\gamma \in [0, 1]$ is a scalar and $W$ is the source relationship matrix. The concentration factor $\gamma$ introduces an additional degree of freedom quantifying the trade-off between proximity and reliability: setting $\gamma = 1$ amounts to weighting sources based on proximity only. In the next section, we give the effective heuristic used to specify $\gamma$ in our experiments. The proximity vector $\mathbf{v}$ may be specified manually according to domain knowledge or estimated from data. In this paper, we adapt the Maximum Mean Discrepancy (MMD) statistic [11, 8] to estimate the distribution discrepancy. It measures the difference between the means of two distributions after mapping them into a Reproducing Kernel Hilbert Space (RKHS), avoiding complex density estimation. Specifically, for the $i$-th source, its re-weighted MMD can be written as:

\[ \min_{\boldsymbol{\beta}^i}\; \left\| \frac{1}{m_i + n_i} \sum_{j=1}^{m_i + n_i} \beta^i_j\, \phi(\mathbf{x}^i_j) \;-\; \frac{1}{n_T} \sum_{j=1}^{n_T} \phi(\mathbf{x}^T_j) \right\|^2_{\mathcal{H}}, \tag{6} \]

where $\boldsymbol{\beta}^i$ are the weights of the $i$-th source's aggregate (labeled and unlabeled) data and $\phi$ is a feature map onto the RKHS $\mathcal{H}$. By applying the kernel trick, the optimization in (6) can be solved efficiently as a quadratic programming problem using interior point methods.
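For reference, a minimal sketch of solving Eq. (6) as a quadratic program follows. The RBF kernel, the upper bound B on the weights, and the use of SciPy's SLSQP solver are our assumptions for illustration; the text above only specifies an interior-point QP.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, bandwidth):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kmm_weights(Xs, Xt, bandwidth, B=10.0):
    """Sketch of Eq. (6): weights beta that make the weighted source mean
    embedding match the target mean embedding in the RKHS."""
    ns, nt = len(Xs), len(Xt)
    K = rbf_kernel(Xs, Xs, bandwidth)                              # quadratic term
    kappa = (ns / nt) * rbf_kernel(Xs, Xt, bandwidth).sum(axis=1)  # linear term

    def objective(beta):
        return 0.5 * beta @ K @ beta - kappa @ beta

    cons = ({"type": "eq", "fun": lambda b: b.sum() - ns},)  # weights average to 1
    res = minimize(objective, x0=np.ones(ns),
                   jac=lambda b: K @ b - kappa,
                   bounds=[(0.0, B)] * ns, constraints=cons, method="SLSQP")
    return res.x
```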

For label-scarce sources, the ensemble approach often fails to utilize their information effectively. This unsatisfactory performance leads us to propose using confidence and source relationships to transfer knowledge. For many standard classification methods, such as SVMs and perceptrons, we can use the distance to the decision boundary to measure the confidence of a classifier on an example, as in previous work [18]. If the confidence is low, we allow the classifier to query its peers on this specific example, exploiting the source relationship matrix $W$. Algorithm 1 summarizes our proposed method.
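To make the peer-query step concrete, the following is a minimal sketch of the prediction loop of Algorithm 1 under our reading of it: SVM-style signed scores as confidence, a W-weighted average of peer scores when a source falls below the tolerance tau, and an alpha-weighted vote at the end.

```python
import numpy as np

def pw_mstl_predict(x, classifiers, alpha, W, tau=0.5):
    """Sketch of the prediction step of PW-MSTL (Algorithm 1, lines 8-18).

    classifiers: list of K decision functions f_i returning signed scores;
    alpha: combined source weights from Eq. (5);
    W: source relationship matrix from Eq. (4);
    tau: confidence tolerance below which a source defers to its peers.
    """
    K = len(classifiers)
    scores = np.array([f(x) for f in classifiers])  # raw signed scores
    out = np.empty(K)
    for i in range(K):
        if abs(scores[i]) >= tau:                   # confident: keep own score
            out[i] = scores[i]
        else:                                       # not confident: ask peers
            out[i] = W[i] @ scores
    return np.sign(alpha @ out)                     # weighted ensemble vote
```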

4.3 Adaptive Multi-source Active Learning

When an oracle is available in the source domains, we can acquire more labels for source data, especially for less reliable but target-relevant sources. This differs from traditional active learning, which tries to improve accuracy within the target domain. The TLAS algorithm proposed in [12] performs active learning on the source domain in single-source transfer learning by solving a biconvex optimization problem. However, their solution requires solving two quadratic programming problems alternately at each iteration and is therefore computationally expensive, especially for large-scale multi-source problems. In this paper, we propose an efficient two-stage active learning framework; the pseudo-code is given in Algorithm 2.

In the first stage, the algorithm selects the source domain to query in an adaptive manner (a code sketch of this stage follows Algorithm 2 below). The idea is that while we want to explore label-scarce sources for a high marginal gain in performance, we also want to exploit label-rich sources for reliable queries. Eq. (3) suggests that the learner should prefer a uniform source labeled ratio. Therefore, we draw a Bernoulli random variable $z_t$ with probability given by the Kullback-Leibler (KL) divergence between the current ratios of source labeled data and the uniform distribution. If $z_t = 1$, the algorithm explores sources that contain less labeled data. If $z_t = 0$, the algorithm exploits sources, as shown in line 12 of Algorithm 2. Notice that the combined weights proposed in Eq. (5) play a crucial role here by providing a measurement of sources with unequal reliabilities.

1:  Input: $\{D_i\}_{i=1}^{K}$: source data; $U_T$: target data; $\gamma$: concentration factor; $b$: budget;
2:  for $i \in [K]$ do
3:     Compute $\boldsymbol{\beta}^i$ by solving (6).
4:     Train a classifier $h_i$ on the weighted $L_i$.
5:  end for
6:  for $t \in [b]$ do
7:     Compute $p_t$, the KL divergence between the current source labeled ratios and the uniform distribution.
8:     Draw a Bernoulli random variable $z_t$ with probability $p_t$.
9:     if $z_t = 1$ then
10:       Set the source-sampling distribution $\mathbf{q}_t$ to favor label-scarce sources.
11:    else
12:       Compute $\boldsymbol{\alpha}$ as in (5) and set $\mathbf{q}_t = \boldsymbol{\alpha}$.
13:    end if
14:    Draw $i_t$ from $[K]$ with distribution $\mathbf{q}_t$.
15:    Select an instance from $U_{i_t}$ according to (8) and query its label.
16:    Update $L_{i_t}$ and $U_{i_t}$.
17:    Update the labeled ratios.
18:    Update the classifier $h_{i_t}$.
19:  end for
Algorithm 2 AMSAT
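Below is a minimal sketch of the first stage (lines 7-14 of Algorithm 2). Capping the KL term at 1 to obtain a valid Bernoulli probability and using inverse labeled ratios as the exploration distribution are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_source(labeled_sizes, alpha):
    """Sketch of adaptive source selection (stage 1 of AMSAT).

    labeled_sizes: current number of labeled examples per source (all > 0);
    alpha: combined proximity/reliability weights from Eq. (5).
    """
    K = len(labeled_sizes)
    ratios = labeled_sizes / labeled_sizes.sum()
    kl = float(np.sum(ratios * np.log(ratios * K)))  # KL(ratios || uniform)
    p = min(kl, 1.0)                                 # assumed cap to get a probability
    if rng.random() < p:                             # explore: favor label-scarce sources
        q = 1.0 / ratios
        q /= q.sum()                                 # assumed inverse-ratio weighting
    else:                                            # exploit: reliable, relevant sources
        q = alpha / alpha.sum()
    return rng.choice(K, p=q)
```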

In the second stage, the algorithm queries the most informative instance within the selected source domain. Nguyen and Smeulders [20] propose a density-weighted uncertainty sampling criterion:

(7)

which picks an instance that is close to the decision boundary and lies in a denser neighborhood. In our setting, we propose to combine the distribution-matching weights of Eq. (6) with uncertainty sampling to form the following selection criterion:

(8)

This criterion selects instances that are representative in both the source and target domains while also taking uncertainty into account. Notice that we solve Eq. (6) only once and store the value of $\boldsymbol{\beta}^i$. As shown in [27], such an approach is as efficient as the base informativeness measure, i.e. uncertainty sampling. Therefore, the proposed method achieves efficiency, representativeness, and minimal information overlap.
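And a minimal sketch of the second stage, under our reading of the criterion in Eq. (8) as the product of a stored KMM weight from Eq. (6) and an uncertainty score:

```python
import numpy as np

def pick_instance(decision_values, kmm_weights):
    """Sketch of stage 2 of AMSAT: pick the unlabeled instance combining
    representativeness (KMM weight) and uncertainty (margin proximity).

    decision_values: signed scores f(x) of the selected source's classifier
    on its unlabeled pool; kmm_weights: the stored beta values from Eq. (6).
    """
    uncertainty = 1.0 - np.abs(decision_values)  # highest near the boundary
    return int(np.argmax(kmm_weights * uncertainty))
```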

5 Empirical Evaluation

In the following experimental study, we pursue two goals: (i) to evaluate the performance of PW-MSTL with sources of diverse reliabilities, and (ii) to evaluate the effectiveness of AMSAT in constructing more reliable classifiers via active learning. Due to space limitations, not all results are presented, but the omitted results show similar patterns. Unless otherwise specified, all model parameters are chosen via 5-fold cross-validation and all results are averaged over 30 random repetitions of each experiment.

5.1 Datasets

Synthetic dataset. We generate synthetic data for 5 source domains and 1 target domain. The samples of the $k$-th source are drawn from a Gaussian distribution with mean $\boldsymbol{\mu} + \eta_k \boldsymbol{\nu}_k$, where $\boldsymbol{\mu}$ is the mean of the target domain, $\boldsymbol{\nu}_k$ is a random fluctuation vector and $\eta_k$ is a variable controlling the proximity between each source and the target (higher $\eta_k$ indicates lower proximity). We then consider a linear labeling function based on a fixed base vector $\mathbf{w}$, perturbed per domain by a random fluctuation vector and by a zero-mean Gaussian noise term. We set the fluctuations to small values, as we assume the labeling functions are similar across domains. Using different $\eta_k$ values, we generate 50 positive points and 50 negative points as training data for each source domain, and an additional 100 balanced test samples for the target domain.

Spam Detection (http://ecmlpkdd2006.org/challenge.html). We use task B of the spam detection dataset from the ECML/PKDD 2006 Discovery Challenge. It consists of data from the inboxes of 15 users, and each user forms a single domain with 200 spam (+1) and 200 non-spam (-1) examples. Each example consists of approximately 150,000 features representing word frequencies, and we reduce the dimension to 200 using latent semantic analysis (LSA) [10]. Since some spam types are shared among users while others are not, these domains form a multi-source transfer learning problem if we try to utilize data from other users to build a personalized spam filter for a target user.

Sentiment Analysis (http://www.cs.jhu.edu/~mdredze/datasets/sentiment). The dataset contains product reviews from 24 domains on Amazon. We cast each domain as a binary classification task by treating reviews with a rating above 3 as positive (+1) and reviews with a rating below 3 as negative (-1); reviews with a rating of 3 were discarded as ambiguous. We pick the 10 domains that each contain more than 2000 samples and, for each round of experiments, we randomly draw 1000 positive reviews and 1000 negative reviews from each domain. As with the previous dataset, each review contains approximately 350,000 features, and we reduce the dimension to 200 using the same method.

For each set of experiments, we have 600 examples (100 per domain) for synthetic, 6000 emails (400 per user) for spam, and 20000 reviews (2000 per domain) for sentiment. We set one domain as the target and the rest as sources. For each source domain, we randomly divide the data into a labeled set and an unlabeled set according to a labeled-data fraction. For the target domain, we randomly divide the data into two parts: one for testing and one serving as unlabeled training data (with the exception of synthetic, for which we generated extra testing data). Note that we can set different labeled-data fractions for the source domains to model diverse reliabilities.

5.2 Reliable Multi-source Transfer Learning

Competing Methods. To evaluate the performance of our approach, we compare the proposed PW-MSTL with five methods: one single-source method, one aggregate method, and three ensemble methods. Kernel Mean Matching (KMM) [11] is a conventional single-source method based on distribution matching; we perform KMM for each single source and report the best performance (note that this is impractical in general, as it requires an oracle to know which classifier is best). Kernel Mean Matching-Aggregate (KMM-A) is the aggregate method, which performs KMM on the combined training data of all sources. For the ensemble approach, Adaptive SVM (A-SVM) [33] and Domain Adaptation Machine (DAM) [7] are two widely adopted multi-source methods. Finally, we also compare with our own baseline (denoted Ours in Tables 1 and 2), which is similar to DAM but directly uses Eq. (5) as model weights. We also compared with Transfer Component Analysis (TCA) [22], but its results are omitted due to its performance being similar to KMM.

Setup. For all methods, we use SVM as the base model with the same fixed regularization parameter. For the competing methods, we mainly follow the standard model selection procedures described in their respective papers. For a fair comparison, we use the same Gaussian kernel, with the bandwidth set to the median pairwise distance of the training data according to the median heuristic [9]; using other kernels yields similar results. For the ensemble methods, we set the proximity weight of each source as an exponential function of its negative MMD to the target, with two parameters tuned per dataset to control the spread of the MMD, as described in Section 4.2. For both PW-MSTL and Ours, we use the same fixed $\gamma$ in all experiments and use the 0-1 loss to compute the relationship matrix $W$ (see Figure 1 for results with varying $\gamma$).
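For completeness, a minimal sketch of the median-heuristic bandwidth computation [9] used above:

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median heuristic [9]: Gaussian kernel bandwidth set to the median
    pairwise Euclidean distance of the training data."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = np.sqrt(d2[np.triu_indices_from(d2, k=1)])
    return float(np.median(d))
```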

Method    Synthetic      Spam                  Sentiment
          case1  case2   user7  user8  user3   electronics  toys  music  apparel  dvd   kitchen  video  sports  book  health
KMM       82.7   88.8    92.0   91.8   89.7    77.6         77.4  71.0   78.3     72.4  78.4     72.1   79.1    71.2  77.4
KMM-A     87.3   91.4    92.0   92.0   91.8    74.6         76.3  70.3   75.8     72.4  75.2     70.5   76.7    69.7  74.9
A-SVM     70.8   89.4    84.5   87.8   86.8    70.8         73.7  67.7   73.6     62.6  72.8     62.5   73.7    66.9  71.4
DAM       75.8   91.0    83.8   85.4   86.8    71.3         73.7  68.0   75.1     62.5  72.1     62.0   73.0    68.0  72.5
Ours      85.5   90.8    91.5   92.6   90.3    78.0         78.7  70.7   79.5     73.2  78.3     72.5   79.5    71.5  77.7
PW-MSTL   88.4   92.6    93.8   95.6   92.8    79.3         81.9  74.6   82.7     76.7  80.7     76.2   82.7    74.8  80.9
Table 1: Classification accuracy (%) on the target domain, given that source domains contain diverse {1%, 5%, 15%, 30%} labeled data.
Labeled  Method    Synthetic   Spam                  Sentiment
                               user7  user8  user3   electronics  toys  music  apparel  dvd   kitchen  video  sports  book  health
10%      KMM       87.0        89.1   91.2   90.3    75.0         74.6  68.3   75.6     70.2  75.9     69.9   75.6    68.9  74.3
         KMM-A     91.1        91.3   90.7   91.0    74.8         76.5  70.2   76.8     71.3  77.6     71.6   77.7    71.3  75.4
         A-SVM     89.4        88.4   91.9   89.2    77.1         78.1  69.9   78.2     68.9  79.1     69.2   78.1    70.5  77.1
         DAM       89.7        89.6   90.4   91.3    77.5         79.0  69.9   79.8     69.0  79.5     68.9   78.4    71.9  77.7
         Ours      90.2        89.7   92.4   92.1    77.7         78.7  69.7   78.9     73.5  79.8     72.3   78.8    70.4  77.9
         PW-MSTL   91.2        92.5   94.9   93.1    79.8         81.5  73.3   81.3     76.4  82.3     75.4   81.2    74.4  80.7
50%      KMM       95.6        92.6   94.0   91.8    81.6         81.7  75.0   82.2     76.9  83.0     77.5   82.8    75.3  81.2
         KMM-A     97.2        91.4   93.8   94.7    80.4         82.4  74.5   82.7     77.1  83.8     76.5   82.8    76.0  79.6
         A-SVM     96.4        91.5   95.2   93.4    81.7         83.4  74.7   84.3     76.0  85.4     75.3   83.3    76.0  82.1
         DAM       96.6        92.7   93.1   93.2    83.5         84.5  73.4   84.4     77.3  86.7     76.5   84.8    76.8  83.6
         Ours      96.6        92.9   95.2   93.5    83.6         84.7  74.4   85.0     80.4  85.9     79.4   85.7    77.0  84.1
         PW-MSTL   97.2        94.5   95.7   93.7    84.8         86.4  76.9   87.2     82.0  87.6     81.3   87.3    79.8  86.4
Table 2: Classification accuracy (%) on the target domain, given that source domains contain the same fraction (10% or 50%) of labeled data.
Figure 1: Empirical analysis: (a) incremental accuracy on dvd of KMM, DAM and PW-MSTL when sources have the same amount of labeled data; (b) sensitivity of γ when sources have different amounts of labeled data; (c) sensitivity of γ when all sources have 50% labeled data.

Results and Discussion. Table 1 shows the classification accuracy of the different methods on the various datasets when sources contain different labeled fractions, randomly chosen from the set {1%, 5%, 15%, 30%}; due to space limits, we show results for only three randomly chosen Spam cases, since the overall results are similar. Table 2 shows the results when sources have the same amount of labeled data. In each column, the method that outperforms all other methods with statistical significance is highlighted and underlined (standard deviations are omitted due to space). We can observe that our proposed PW-MSTL method outperforms the other methods in most cases. When source reliabilities are uneven, we obtain a significant improvement in test-set accuracy. Comparing results across the two tables, we can also observe that the gap between our methods and the others diminishes as the source domains acquire more labeled data (i.e. become more reliable).

Note that we built two scenarios for the Synthetic dataset in Table 1. We control the proximity variable $\eta_k$ and the labeled fraction such that similar sources also contain more labeled data in case 1, while this is reversed in case 2. As a result, we observe that prior methods focusing on the proximity measure obtain an accuracy boost under case 2 compared to case 1, while our methods are consistent across the two cases. We also note that A-SVM and DAM perform poorly on dvd and video even when sources are balanced. This is because the distribution divergence between these two domains is very small, causing prior methods to fail to utilize the other, less related sources effectively. We plot an incremental performance comparison on dvd in Figure 1(a), showing that DAM only slowly catches up with the single-source method as the labeled fractions increase.

Finally, we study the sensitivity of $\gamma$. Figures 1(b) and 1(c) show the variation in accuracy for some of the cases in Tables 1 and 2 as $\gamma$ varies over [0, 1]. We observe similar patterns in both settings: accuracy first increases and then decreases as $\gamma$ increases, with the drop in performance merely being smoother in Figure 1(c). This confirms the motivation for combining proximity and reliability, as well as the theoretical result established in this paper. Results on the other datasets are similar, but we omit them from the graph due to different scales.

5.3 Multi-source Active Learning

Baselines. To the best of our knowledge, no existing study can be directly applied to our setting. Therefore, to evaluate the performance of the proposed AMSAT algorithm, we compare it to the following standard baselines:
(1) Random: selects the examples to query randomly from all source domains; this is equivalent to passive learning.
(2) Uncertainty: a popular active learning strategy, uncertainty sampling, which picks the examples that are most uncertain, i.e. closest to the decision boundary.
(3) Representative: another widely used method, which chooses the examples that are most representative according to the distribution matching in Eq. (6).
(4) Proximity: selects the source examples that are most similar to the target domain; note that this method usually queries examples from only the few most similar sources.

Setup. Unlike traditional active learning, where the domains for active selection are label-scarce, we initialize our source domains with varying amounts of labeled data, such that some sources are more richly labeled than others. For each set of experiments, we initialize each source domain with a labeled fraction randomly chosen from a fixed set; the same data partition is used for all compared methods. After each query, we run a multi-source transfer learning algorithm and record the accuracy on the test target data. The data partition is repeated randomly 30 times and the average results are reported. For all of our experiments, we set the query budget to 10% of the total number of source examples in the dataset. For a fair comparison, the selected examples are evaluated using the same transfer learning method.

Results and Discussion. The performance curves, evaluated using PW-MSTL as queries accumulate, are plotted in Figure 2. Due to space limitations, only a small fraction of the results is presented, and we show results only on Sentiment, since the other datasets are less interesting due to diminishing gains. It should be noted, however, that our proposed AMSAT method outperforms the other methods with significance in a two-tailed test on all datasets. Figure 2 shows results with initial source labeled fractions randomly chosen from {1%, 5%, 15%, 30%}, as in the experiments of Table 1. We can observe that AMSAT consistently outperforms the baselines at almost all stages. The only exception is that Proximity sampling performs better in the early stage when there exists a source domain that is particularly close to the target domain, as in the case of dvd in Figure 2(b). Figures 3(a) and 3(b) show examples of method performance using a different data initialization on the kitchen domain. While they show patterns similar to Figure 2(d) (AMSAT consistently outperforming the other baselines), we observe that AMSAT has comparable ending points in the two plots due to diminishing gains. This suggests that performing active learning when sources have very uneven reliabilities is hard.

Figure 2: Performance comparison of active learning methods on Sentiment, with initial labeled fractions randomly selected from {1%, 5%, 15%, 30%}: (a) toys; (b) dvd; (c) sports; (d) kitchen.
Figure 3: (a), (b) Performance comparison of different labeled-data initializations for kitchen; (c) performance comparison of different combinations of transfer learning and active learning methods (electronics); (d) evaluation of the proposed source-picking and example-picking strategies, comparing AMSAT against Uncertainty and AMSAT-US (book).

Regarding the baselines, we observe that they are quite unstable when evaluated using PW-MSTL. For example, in some domains Uncertainty sampling performs better than Random sampling, as in Figure 2(d), while in other domains it does not, as in Figure 2(a). When evaluated using other transfer learning methods such as DAM, the other baselines often outperform Random sampling, as expected. To show that combining PW-MSTL and AMSAT is superior, we compare several combinations of transfer learning and active learning methods; a typical example is shown in Figure 3(c). Notice that curves sharing an active learning method are evaluated with different transfer learning methods on the same queried data. We observe that while AMSAT still performs better than the baselines when evaluated using DAM, the performance gap is larger when evaluated using PW-MSTL. The combination of PW-MSTL and AMSAT significantly outperforms the rest, showing a synergy between the proposed methods.

Finally, we evaluate the effectiveness of both picking the source, as in line 14 of Algorithm 2, and picking the example, according to Eq. (8). We add a baseline, AMSAT-US, which behaves exactly like Algorithm 2 except that it picks the example by uncertainty sampling in line 15; the added baseline thus utilizes the proposed source-picking strategy but not the example-picking strategy. Figure 3(d) showcases the comparison among the four methods. We observe that (i) AMSAT-US performs better than Uncertainty, indicating the effectiveness of the proposed source-picking strategy, and (ii) AMSAT outperforms AMSAT-US, showing the effectiveness of the example-picking strategy.

6 Conclusion

We study a new research problem, transfer learning with multiple sources exhibiting reliability divergences, and explore two related tasks. The contributions of this paper are: (1) we propose a novel peer-weighted multi-source transfer learning (PW-MSTL) method that makes robust predictions in the described scenario; (2) we study the problem of active learning on source domains and propose an adaptive multi-source active transfer (AMSAT) framework to improve source reliability and target-domain performance; and (3) we demonstrate the efficacy of utilizing inter-source relationships in the multi-source transfer learning problem. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of our methods.

References

  • [1] Balcan, M., Hanneke, S., Vaughan, J.W.: The true sample complexity of active learning. Machine learning, 80(2):111-139 (2010)
  • [2] Ben-David, S., Blitzer, J., Crammer, K., Pereira, F.: Analysis of representations for domain adaptation. In: NIPS. pp. 137-144 (2007)
  • [3] Blitzer, J., Dredze, M., Pereira, F., et al.: Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In: ACL. pp. 440-447 (2007)
  • [4] Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., Wortman, J.: Learning bounds for domain adaptation. In: NIPS. pp. 129-136 (2008)
  • [5] Cavallanti, G., Cesa-Bianchi, N., Gentile, C.: Linear algorithms for online multitask classification. Journal of Machine Learning Research, 11(Oct), 2901-2934 (2010)
  • [6] Chattopadhyay, R., Fan, W., Davidson, I., Panchanathan, S., and Ye, J.P.: Joint transfer and batch-mode active learning. In: ICML. pp. 253-261 (2013)
  • [7] Duan, L.X., Tsang, I.W., Xu, D., Chua, T.: Domain adaptation from multiple sources via auxiliary classifiers. In: ICML. pp. 289-296 (2009)
  • [8] Gretton, A., Borgwardt, K.M., Rasch, M., Schölkopf, B., Smola, A.J.: A kernel method for the two-sample problem. In: NIPS. pp. 513-520 (2007)
  • [9] Gretton, A., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., Fukumizu, K., Sriperumbudur, B.K.: Optimal kernel choice for large-scale two-sample tests. In: NIPS. pp. 1205-1213 (2012)
  • [10] Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217-288 (2011)
  • [11] Huang, J.Y., Gretton, A., Borgwardt, K.M., Schölkopf, B., Smola, A.J.: Correcting sample selection bias by unlabeled data. In: NIPS. pp. 601-608 (2007)
  • [12] Huang, S.J., Chen, S.: Transfer learning with active queries from source domain. In: IJCAI. pp. 1592-1598. (2016)
  • [13] Kale, D., Ghazvininejad, M., Ramakrishna, A., He, J.R., Liu, Y.: Hierarchical active transfer learning. In: SIAM International Conference on Data Mining. pp. 514–522. (2015)
  • [14] Konyushkova, K., Sznitman, R., Fua, P.: Learning active learning from data. In: NIPS. pp. 4228-4238. (2017)
  • [15] Long, M.S., Wang, J.M., Jordan, M.I.: Deep transfer learning with joint adaptation networks. In: ICML. pp. 2208-2217. (2017)
  • [16] Luo, P., Zhuang, F.Z., Xiong, H., Xiong, Y.H., He, Q.: Transfer learning from multiple source domains via consensus regularization. In ACM conference on Information and knowledge management. pp. 103-112. (2008)
  • [17] Moon, S., Carbonell, J.: Completely heterogeneous transfer learning with attention-what and what not to transfer. In: IJCAI. pp. 2508-2514. (2017)
  • [18] Murugesan, K., Carbonell, J.: Active learning from peers. In: NIPS. pp. 7011-7020. (2017)
  • [19] Murugesan, K., Liu, H.X., Carbonell, J., Yang, Y.M.: Adaptive smoothed online multi-task learning. In: NIPS. pp. 4296-4304. (2016)
  • [20] Nguyen, H.T., Smeulders, A.: Active learning using pre-clustering. In: ICML. pp. 79. (2004)
  • [21] Pan, S.J., Ni, X., Sun, J., Yang, Q., Chen, Z.: Cross-domain sentiment classification via spectral feature alignment. ACM Transactions on Knowledge Discovery from Data (TKDD) 8(3), 12 (2014)
  • [22] Pan, S.J., Tsang, I.W., Kwok, J.T., Yang, Q.: Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks 22(2), 199-210 (2011)

  • [23] Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10), 1345-1359 (2010)
  • [24] Pardoe, D., Stone, P.: Boosting for regression transfer. In: ICML. pp. 863-870. (2010)
  • [25] Raina, R., Battle, A., Lee, H., Packer, B., Ng, A.Y.: Self-taught learning: transfer learning from unlabeled data. In: ICML. pp. 759-766. (2007)
  • [26] Settles, B.: Active learning literature survey. Technical report, University of Wisconsin, Madison (2010)
  • [27] Settles, B., Craven, M.: An analysis of active learning strategies for sequence labeling tasks. In: EMNLP. pp. 1070-1079. (2008)
  • [28] Sun, Q., Chattopadhyay, R., Panchanathan, S., and Ye, J.P.: A two-stage weighting framework for multi-source domain adaptation. In: NIPS. pp. 505-513. (2011)
  • [29] Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. Journal of Machine Learning Research 2(Nov), 45-66 (2001)

  • [30] Wang, X.Z., Huang, T.K., Schneider, J.: Active transfer learning under model shift. In: ICML. pp. 1305-1313. (2014)
  • [31] Wei, P.F., Sagarna, R., Ke, Y.P., Ong, Y.S., Goh, C.K.: Source-target similarity modelings for multi-source transfer gaussian process regression. In: ICML. pp. 3722-3731. (2017)
  • [32] Wei, Y., Zheng, Y., Yang, Q.: Transfer knowledge between cities. In: KDD. pp. 1905-1914. (2016)
  • [33] Yang, J., Yan, R., Hauptmann, A.G.: Cross domain video concept detection using adaptive svms. In: ACM international conference on Multimedia. pp. 188-197. (2007)
  • [34] Yang, L., Hanneke, S., Carbonell, J.: A theory of transfer learning with applications to active learning. Machine learning 90(2), 161-189 (2013)
  • [35] Zhang, L., Zuo, W.M., Zhang, D.: Lsdt: Latent sparse domain transfer learning for visual adaptation. IEEE Transactions on Image Processing 25(3), 1177-1191 (2016)
  • [36] Zhang, Y., Yeung, D.Y.: A regularization approach to learning task relationships in multitask learning. In: WWW. pp. 751-760. (2010)