Scalable Greedy Algorithms for Transfer Learning

08/06/2014 ∙ by Ilja Kuzborskij, et al. ∙ Idiap Research Institute ∙ Sapienza University of Rome

In this paper we consider the binary transfer learning problem, focusing on how to select and combine sources from a large pool to yield good performance on a target task. Keeping the scenario realistic, we do not assume direct access to the source data, but rather employ the source hypotheses trained from them. We propose an efficient algorithm that selects relevant source hypotheses and feature dimensions simultaneously, building on the literature on the best subset selection problem. Our algorithm achieves state-of-the-art results on three computer vision datasets, substantially outperforming both transfer learning and popular feature selection baselines in a small-sample setting. We also present a randomized variant that achieves the same results with a computational cost independent of the number of source hypotheses and feature dimensions. Finally, we prove theoretically that, under reasonable assumptions on the source hypotheses, our algorithm can learn effectively from few examples.


1 Introduction

Over the last few years, the visual recognition research landscape has been heavily dominated by Convolutional Neural Networks, thanks to their ability to leverage effectively massive amounts of training data [1]. This trend dramatically confirms the widely accepted truth that any learning algorithm performs better when trained on a lot of data. This is even more true when facing noisy or “hard” problems such as large-scale recognition [2]. However, when tackling large-scale recognition problems, gathering substantial training data for all classes considered might be challenging, if not almost impossible. The occurrence of real-world objects follows a long-tail distribution, with few objects occurring very often, and many with few instances. Hence, for the vast majority of visual categories known to human beings, it is extremely challenging to collect training data in sufficient quantities. The “long tail” distribution problem was noted and studied by Salakhutdinov et al. [3], who proposed to address it by leveraging the prior knowledge available to the learner. Indeed, learning systems are often not trained from scratch: usually they can be built on previous knowledge acquired over time on related tasks [4]. The scenario of learning from few examples by transferring from what is already known to the learner is collectively known as Transfer Learning. The target domain usually indicates the task at hand and the source domain the prior knowledge of the learner.

Most of the transfer learning algorithms proposed in recent years focus on the object detection task (binary transfer learning), assuming access to the training data coming from both source and target domains [4]. While featuring good practical performance [5], they often demonstrate poor scalability w.r.t. the number of sources. An alternative direction, known as Hypothesis Transfer Learning (HTL) [6, 7], consists in transferring from the source hypotheses, that is, classifiers trained on the source domains. This framework is practically very attractive [8, 9, 10], as it treats source hypotheses as black boxes without any regard for their inner workings.

The goal of this paper is to develop an HTL algorithm able to deal effectively and efficiently with a large number of sources, where our working definition of large is on the order of a thousand. Note that this order of magnitude is also the current frontier in visual classification [2]. To this end, we cast Hypothesis Transfer Learning as a problem of efficient selection and combination of source hypotheses from a large pool. We pose it as a subset selection problem, building on results from the literature [11, 12]. We present a greedy algorithm, GreedyTL, which attains state-of-the-art performance even with a very limited amount of data from the target domain (we build upon preliminary results presented in [13]). Moreover, we also present a randomized approximate variant of GreedyTL, called GreedyTL-59, whose complexity is independent of the number of sources, with no loss in performance. Our key contribution is an $\ell_2$-regularized variant of the Forward Regression algorithm [14]. Since our algorithm can be viewed as a feature selection algorithm as well as a hypothesis transfer learning approach, we extensively evaluate it against popular feature selection and transfer learning baselines. We empirically demonstrate that GreedyTL dominates all the baselines in most small-sample transfer learning scenarios, thus proving the critical role of regularization in our formulation. Experiments over three datasets show the power of our approach: we obtain state-of-the-art results in tasks with up to 1000 classes, totalling over a million examples, with only a handful of training examples from the target domain. We back our experimental results by proving generalization bounds showing that, under reasonable assumptions on the source hypotheses, our algorithm is able to learn effectively with very limited data.

The rest of the paper is organised as follows: after a review of the relevant literature in the field (Section 2), we cast the transfer learning problem in the subset selection framework (Section 3). We then define our GreedyTL in Section 4, deriving its formulation and analysing its computational complexity and theoretical properties. Section 5 describes our experimental evaluation and discusses the related findings. We conclude with an overall discussion and possible future research avenues.

2 Related Work

The problem of how to exploit prior knowledge when attempting to solve a new task with limited, if any, annotated samples is vastly researched. Previous work spans from transfer learning [4] to domain adaptation [15, 16] and dataset bias [17]. Here we focus on the first. In the literature there are several transfer learning settings [16, 15, 5]. The oldest and most popular is the one assuming access to the data originating from both the source and the target domains [16, 5, 15, 18, 19, 20, 21]. There, one typically assumes that plenty of source data are available, but access to the target data is limited: for instance, we can have many unlabeled examples and only few labeled ones [22]. Here we focus on the Hypothesis Transfer Learning framework (HTL, [6, 7]). It requires access only to source hypotheses, that is, classifiers or regressors trained on the source domains. No assumptions are made on how these source hypotheses are trained, or about their inner workings: they are treated as “black boxes”, similar in spirit to classifier-generated visual descriptors such as Classemes [23] or Object Bank [24]. Several works proposed HTL for visual learning [8, 9, 25], some exploiting more explicitly the connection with classemes-like approaches [26, 27], demonstrating an intriguing potential. Although offering scalability, HTL-based approaches proposed so far have been tested on problems with fewer than a few hundred sources [9], already showing some difficulties in selecting informative sources.

Recently, the growing need to deal with large data collections [2, 28] has started to change the focus and challenges of research in transfer learning. Scalability with respect to the amount of data and the ability to identify and separate informative sources from those carrying noise for the task at hand have become critical issues. Some attempts have been made in this direction. For example, [29, 30] used taxonomies to leverage learning from few examples on the SUN09 dataset. In [29], the authors attacked the transfer learning problem on the SUN09 dataset by using additional data from another dataset. Zero-shot approaches were investigated by [31] on a subset of the Imagenet dataset. Large-scale visual detection has been explored by [30]. However, all these approaches assume access to all source training data. A slightly different approach to transfer learning that aims to circumvent this limitation is the reuse of a large convolutional neural network pre-trained on a large visual recognition dataset. The simplest approach is to use the outputs of intermediate layers of such a network, such as DeCAF [1] or Caffe [32]. A more sophisticated way of reuse is fine-tuning, a kind of warm start, that has been successfully exploited in visual detection [33] and domain adaptation [34, 35].

In many of these works, the use of richer sources of information has been supported by an increase in the information available in the target domain as well. From an intuitive point of view, this corresponds to having more data points than dimensions. Of course, this makes the learning and selection process easier, but in many applications it is not a reasonable assumption. Also, none of the proposed algorithms has a theoretical backing.

While not explicitly mentioned before, the problem outlined above can also be viewed as a learning scenario where the number of features is far larger than the number of training examples. Indeed, learning with classeme-like features [23, 24] when only few training examples are available can be seen as a Hypothesis Transfer Learning problem. Clearly, pure empirical risk minimization would fail due to severe overfitting. In machine learning and statistics this is known as a feature selection problem, and it is usually addressed by constraining or penalizing the solution with sparsity-inducing norms. One important sparsity constraint is the non-convex $\ell_0$ pseudo-norm constraint, which simply corresponds to choosing up to $k$ non-zero components of a vector. One usually resorts to subset selection methods and greedy algorithms for obtaining solutions under this constraint [11, 36, 12, 37]. However, in some problems introducing the $\ell_0$ constraint might be computationally difficult. A computationally easier alternative is a convex relaxation of $\ell_0$, the $\ell_1$ regularization. Empirical error minimization with an $\ell_1$ penalty under various loss functions (which for the square loss is known as the Lasso) has many favorable properties and is well studied theoretically [38]. Yet, the $\ell_1$ penalty is known to suffer from several limitations, one of which is poor empirical performance when there are many correlated features. Perhaps the most famous way to resolve this issue is the elastic net regularization, a weighted mixture of the $\ell_1$ and squared $\ell_2$ penalties [14]. Since our work partially falls into the category of feature selection, we have extensively evaluated the aforementioned baselines on our task. As will be shown below, none of them achieves competitive performance compared to our approach.
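To make the behavior described above concrete, the following is a minimal sketch (ours, not part of the original work) contrasting the Lasso and the elastic net on data with groups of highly correlated features; the data generation, the library choice (scikit-learn), and all parameter values are illustrative assumptions only.

```python
# Minimal sketch: Lasso vs. elastic net on a small-sample problem with correlated feature groups.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.RandomState(0)
m, groups, copies = 40, 5, 20                       # 40 examples, 5 latent signals, 20 noisy copies each
latent = rng.randn(m, groups)
X = np.repeat(latent, copies, axis=1) + 0.05 * rng.randn(m, groups * copies)
y = latent @ rng.randn(groups) + 0.1 * rng.randn(m)

lasso = Lasso(alpha=0.05).fit(X, y)                 # l1 penalty: tends to keep one feature per group
enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)  # l1 + squared l2: spreads weight across the group

print("non-zero Lasso coefficients:      ", int(np.sum(lasso.coef_ != 0)))
print("non-zero elastic-net coefficients:", int(np.sum(enet.coef_ != 0)))
```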

3 Transfer Learning through Subset Selection

Definitions. We denote by small and capital bold letters respectively column vectors and matrices, e.g. $\mathbf{w}$ and $\mathbf{X}$. The subvector of $\mathbf{w}$ with rows indexed by a set $S$ is $\mathbf{w}_S$, while the square submatrix of $\mathbf{X}$ with rows and columns indexed by $S$ is $\mathbf{X}_{S,S}$. The support of $\mathbf{w}$ is $\mathrm{supp}(\mathbf{w}) = \{i : w_i \neq 0\}$. Denoting by $\mathcal{X}$ and $\mathcal{Y}$ respectively the input and output space of the learning problem, the training set is $\{(\mathbf{x}_i, y_i)\}_{i=1}^m$, drawn i.i.d. from the probability distribution $P$ defined over $\mathcal{X} \times \mathcal{Y}$. We will focus on the binary classification problem, so $\mathcal{Y} = \{-1, 1\}$, and, without loss of generality, we assume that inputs are bounded in norm, $\|\mathbf{x}\|_2 \le 1$.

To measure the accuracy of a learning algorithm, we use a non-negative loss function $\ell(h(\mathbf{x}), y)$, which measures the cost incurred predicting $h(\mathbf{x})$ instead of $y$. In particular, we will focus on the square loss, $\ell(h(\mathbf{x}), y) = (h(\mathbf{x}) - y)^2$, for its appealing computational properties. The risk of a hypothesis $h$, with respect to the probability distribution $P$, is then defined as $R(h) = \mathbb{E}_{(\mathbf{x},y)\sim P}[\ell(h(\mathbf{x}), y)]$, while the empirical risk given a training set is $\hat{R}(h) = \frac{1}{m}\sum_{i=1}^m \ell(h(\mathbf{x}_i), y_i)$. Whenever the hypothesis is a linear predictor, that is $h(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle$, we will also write the risks as $R(\mathbf{w})$ and $\hat{R}(\mathbf{w})$.


Source Selection. Assume that we are given a finite source hypothesis set $\{h^{\mathrm{src}}_1, \ldots, h^{\mathrm{src}}_n\}$ and the training set $\{(\mathbf{x}_i, y_i)\}_{i=1}^m$. As in previous works [39, 9, 26], we consider the target hypothesis to be of the form

$$h^{\mathrm{trg}}(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle + \sum_{i=1}^{n} \beta_i \, h^{\mathrm{src}}_i(\mathbf{x}), \qquad (1)$$

where $\mathbf{w}$ and $\boldsymbol{\beta}$ are found by the learning procedure. The essential parameter here is $\boldsymbol{\beta}$, the one controlling the influence of each source hypothesis. Previous works in transfer learning have focused on finding $\boldsymbol{\beta}$ such that it minimizes the error on the training set, subject to some condition on $\boldsymbol{\beta}$. In particular, [9] proposed to minimize the leave-one-out error w.r.t. $\boldsymbol{\beta}$, subject to an $\ell_2$ constraint on $\boldsymbol{\beta}$, which is known to improve generalization for the right choice of the constraint [6]. A slightly different approach is to use $\ell_1$ regularization for this purpose [9], which induces solutions with most of the coefficients equal to $0$, thus assuming that the optimal $\boldsymbol{\beta}$ is sparse.

In this work we embrace a weaker assumption, namely, that there exist up to $k$ sources that collectively improve the generalization on the target domain. Thus, we pose the problem of Source Selection as a minimization of the regularized empirical risk on the target training set, while constraining the number of selected source hypotheses.

$k$-Source Selection.

Given the training set, we obtain the optimal target hypothesis by solving

$$\min_{\mathbf{w},\, \boldsymbol{\beta}} \ \frac{1}{m}\sum_{i=1}^{m}\Big(\langle \mathbf{w}, \mathbf{x}_i\rangle + \sum_{j=1}^{n}\beta_j h^{\mathrm{src}}_j(\mathbf{x}_i) - y_i\Big)^2 + \lambda\big(\|\mathbf{w}\|_2^2 + \|\boldsymbol{\beta}\|_2^2\big) \quad \text{s.t.}\ \ \|\mathbf{w}\|_0 + \|\boldsymbol{\beta}\|_0 \le k. \qquad (2)$$

Notably, problem (2) is a special case of the Subset Selection problem [11]: choose a subset of size $k$ from the observation variables which collectively give the best prediction of the variable of interest. However, the Subset Selection problem is NP-hard [11]. In practice we can resort to algorithms generating approximate solutions, for many of which we have approximation guarantees. Hence, due to the extensive practical and theoretical results, we will treat the $k$-Source Selection as a Subset Selection problem, building atop existing guarantees.

We note that our formulation (2) differs from the classical subset selection in that it is $\ell_2$-regularized. This technical modification makes an essential practical and theoretical difference and is the crucial part of our algorithm. First, $\ell_2$ regularization is known to improve the generalization ability of empirical risk minimization. Second, we show that regularization also improves the quality of the approximate solution in situations where the sources, or features, are correlated. At the same time, the experimental evaluation corroborates our theoretical findings: our formulation substantially outperforms standard subset selection, feature selection algorithms, and transfer learning baselines.

4 Greedy Algorithm for $k$-Source Selection

In this section we state the algorithm proposed in this work, GreedyTL (source code is available at http://idiap.ch/~ikuzbor/). In the following we will denote by $N$ the index set of all available source hypotheses and features, and by $S$ the index set of the selected ones.

GreedyTL.

Let $\{(\mathbf{x}_i, y_i)\}_{i=1}^m$ be the zero-mean unit-variance training set, $\{h^{\mathrm{src}}_j\}_{j=1}^n$ the source hypothesis set, and $\lambda$ and $k$ the regularization parameters. Form the augmented representation $\mathbf{z}_i = [\mathbf{x}_i^\top, h^{\mathrm{src}}_1(\mathbf{x}_i), \ldots, h^{\mathrm{src}}_n(\mathbf{x}_i)]^\top$, and denote $\mathbf{C} = \frac{1}{m}\sum_{i=1}^m \mathbf{z}_i \mathbf{z}_i^\top$ and $\mathbf{b} = \frac{1}{m}\sum_{i=1}^m y_i \mathbf{z}_i$. Then select the set $S$ as follows: (I) Initialize $S = \emptyset$. (II) Keep populating $S$ with the $i \in N \setminus S$ that maximizes $\mathbf{b}_{S\cup\{i\}}^\top (\mathbf{C}_{S\cup\{i\},S\cup\{i\}} + \lambda\mathbf{I})^{-1} \mathbf{b}_{S\cup\{i\}}$, as long as $|S| \le k$ and $N \setminus S$ is non-empty.

In this basic formulation, the algorithm requires inverting an $|S|$-by-$|S|$ matrix at each iteration of the greedy search. Clearly, this naive approach becomes prohibitive as the number of source hypotheses, feature dimensions, and desired subset size grow. However, we note that in transfer learning one typically assumes that the training set is much smaller than the number of sources and feature dimensions. For this reason we apply rank-one updates w.r.t. the dual solution of the regularized subset selection, so that the size of the inverted matrix does not change with the subset size, and the computational complexity improves accordingly. We present the pseudocode of such a variant of our algorithm, GreedyTL with Rank-One Updates, in Algorithm 1.

Input: training examples formed from the features and the source predictions; their labels; and the hyperparameters $k$ and $\lambda$.
Output: the target predictor.
Initialize the candidate set $N$ with all sources and features, and the selected set $S$ with the empty set. While $|S| \le k$ and candidates remain, compute the score of each remaining candidate through a rank-one update of the dual solution, add the best-scoring candidate to $S$, and remove it from the candidate pool. Return the target predictor built from the selected sources and features.
Algorithm 1 GreedyTL with Rank-One Updates
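For illustration, here is a minimal sketch of the greedy selection at the core of GreedyTL. It is our own simplification: it uses naive matrix inversions instead of the rank-one updates of Algorithm 1, and all variable names and the interface are assumptions rather than the released implementation.

```python
# Naive sketch of GreedyTL's greedy selection (no rank-one updates), assuming lam > 0.
import numpy as np

def greedy_tl(Z, y, k, lam):
    """Z: (m, d+n) standardized features and source predictions, y: labels in {-1, +1}."""
    m, p = Z.shape
    C = Z.T @ Z / m                      # empirical correlation matrix of features and sources
    b = Z.T @ y / m                      # correlations with the labels
    S, candidates = [], set(range(p))
    while len(S) < k and candidates:
        best_i, best_score = None, -np.inf
        for i in candidates:
            T = S + [i]
            CT = C[np.ix_(T, T)] + lam * np.eye(len(T))       # regularized submatrix C_TT + lam*I
            score = b[T] @ np.linalg.solve(CT, b[T])          # b_T^T (C_TT + lam*I)^{-1} b_T
            if score > best_score:
                best_i, best_score = i, score
        S.append(best_i)
        candidates.remove(best_i)
    w = np.zeros(p)                       # weights of the selected features/sources, zero elsewhere
    w[S] = np.linalg.solve(C[np.ix_(S, S)] + lam * np.eye(len(S)), b[S])
    return w, S
```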

Derivation of the Algorithm. We derive GreedyTL by extending the well-known Forward Regression (FR) algorithm [11], which gives an approximation to the subset selection problem, the problem of our interest. FR is known to find a good approximation as long as the features are uncorrelated [11]. In the following, we build upon FR by introducing Tikhonov ($\ell_2$) regularization into the formulation. The purpose of the regularization is twofold: first, it improves the generalization ability of empirical risk minimization, and second, it makes the algorithm more robust to feature correlations, thus helping to find a better approximate solution.

First, we briefly formalize the subset selection problem. In a subset selection problem one tries to achieve a good prediction accuracy on the predicted random variable $Y$, given a linear combination of a subset of the observation random variables $\{Z_i\}$. The least squares subset selection then reads as

$$\min_{\mathbf{w},\, |S| \le k} \ \mathbb{E}\Big[\Big(Y - \sum_{i \in S} w_i Z_i\Big)^2\Big].$$

Now denote the covariance matrix of the zero-mean unit-variance observation random variables by $\mathbf{C}$ (a correlation matrix), and the correlations between the observations and $Y$ by $\mathbf{b}$. Note that the zero-mean unit-variance assumption will be necessary to prove the theoretical guarantees of our algorithm. By virtue of the analytic solution to least squares and using the introduced notation, we can also state the equivalent Subset Selection problem: $\max_{|S|\le k} \ \mathbf{b}_S^\top \mathbf{C}_{S,S}^{-1} \mathbf{b}_S$. However, our goal is to obtain the solution to (2), that is, an $\ell_2$-regularized subset selection. Similarly to the unregularized subset selection, it is easy to see that (2) is equivalent to $\max_{|S|\le k} \ \mathbf{b}_S^\top (\mathbf{C}_{S,S} + \lambda\mathbf{I})^{-1} \mathbf{b}_S$. As said above, the Subset Selection problem is NP-hard; however, there are several ways to approximate it in practice [36]. We choose FR for this task for its simplicity, appealing computational properties and provably good approximation guarantees. Now, to apply FR to our problem, all we have to do is to provide it with the normalized matrix $\frac{1}{1+\lambda}(\mathbf{C} + \lambda\mathbf{I})$ instead of $\mathbf{C}$.
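For completeness, the following short derivation (ours, using standard least-squares identities and the empirical quantities $\mathbf{C} = \frac{1}{m}\mathbf{Z}^\top\mathbf{Z}$ and $\mathbf{b} = \frac{1}{m}\mathbf{Z}^\top\mathbf{y}$ introduced above) sketches why $\ell_2$-regularized empirical risk minimization restricted to a support $S$ reduces to the regularized subset selection score used by GreedyTL.

% Sketch: restricting the l2-regularized least-squares problem to a support S.
\begin{align*}
\hat{R}(\mathbf{w}_S) + \lambda\|\mathbf{w}_S\|_2^2
  &= \mathbf{w}_S^\top(\mathbf{C}_{S,S} + \lambda\mathbf{I})\,\mathbf{w}_S
     - 2\,\mathbf{b}_S^\top \mathbf{w}_S + \tfrac{1}{m}\|\mathbf{y}\|_2^2 ,\\
\mathbf{w}_S^{\star} &= (\mathbf{C}_{S,S} + \lambda\mathbf{I})^{-1}\mathbf{b}_S ,\\
\min_{\mathbf{w}_S}\ \hat{R}(\mathbf{w}_S) + \lambda\|\mathbf{w}_S\|_2^2
  &= \tfrac{1}{m}\|\mathbf{y}\|_2^2
     - \mathbf{b}_S^\top(\mathbf{C}_{S,S} + \lambda\mathbf{I})^{-1}\mathbf{b}_S .
\end{align*}

Hence, minimizing the regularized empirical risk over supports of size at most $k$ is the same as maximizing $\mathbf{b}_S^\top(\mathbf{C}_{S,S}+\lambda\mathbf{I})^{-1}\mathbf{b}_S$; moreover, rescaling $\mathbf{C}+\lambda\mathbf{I}$ by a constant does not change which candidate maximizes the score, which is why FR can be run on the normalized matrix.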



Approximated Randomized Greedy Algorithm. As mentioned above, the complexity of GreedyTL is linear in the number of features and the size of the source hypothesis set. In particular, the search over $N \setminus S$ for the index to add to $S$ is responsible for this dependency. Here we show how to approximate this search with a randomized strategy. We will use the following theorem.

Theorem 1 ([40], Theorem 6.33).

Denote by $A$ a set of cardinality $n$, and by $\tilde{A} \subset A$ a random subset of size $K$. Then the probability that $\max \tilde{A}$ is greater than or equal to $n'$ elements of $A$ is at least $1 - (n'/n)^{K}$.

The surprising consequence is that, in order to approximate the maximum over a set, we can use a random subset of a modest, fixed size. In particular, if we want, with confidence $1-\delta$, a value that is better than a fraction $\eta$ of all the elements, we use $K = \lceil \log\delta / \log\eta \rceil$ samples (note that the corresponding formula in [40] contains an error; the correct one is the one we report). Practically, if we desire values that are better than $95\%$ of all other estimates with $95\%$ probability, then $59$ samples are sufficient. This rule is commonly called the 59-trick and it has been widely used to speed up a wide range of algorithms with negligible loss of accuracy, e.g. [41, 42]. Indeed, as we will show in Section 5.4, we virtually do not lose any accuracy using this strategy.

With the 59-trick, the search over $N \setminus S$ becomes a search for the maximum over a random subset of size 59. The overall complexity is thus reduced accordingly and becomes independent of all the quantities that are expected to be large.
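As a quick numerical check (our illustration; the percentile, the confidence level, and the stand-in candidate scores below are arbitrary assumptions), the 59-trick sample size and its effect can be verified as follows.

```python
# Verify the 59-trick: 59 random samples suffice to beat 95% of candidates with 95% confidence.
import math
import random

eta, delta = 0.95, 0.05
K = math.ceil(math.log(delta) / math.log(eta))
print(K)                                                    # -> 59

scores = [random.gauss(0.0, 1.0) for _ in range(100000)]    # stand-in candidate scores
subset_max = max(random.sample(scores, K))                  # randomized approximate maximum
beaten = sum(s <= subset_max for s in scores) / len(scores)
print(f"subset max beats {beaten:.1%} of all candidates")
```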



Theoretical Guarantees. We now focus on the analysis of the generalization properties of GreedyTL for solving the $k$-Source Selection problem (2). Throughout this paragraph we will consider a truncated target predictor, clipped to the interval $[-1, 1]$. We will also use big-$\tilde{O}$ notation to indicate the suppression of logarithmic factors, that is, $\tilde{O}(g)$ is a short notation for $O(g)$ up to logarithmic terms. First, we state the bound on the risk of an approximate solution returned by GreedyTL (proofs of the theorems can be found in the appendix).

Theorem 2.

Let GreedyTL generate the solution, given the training set, the source hypotheses with bounded predictions, and the hyperparameters $\lambda$ and $k$. Then, with high probability,

where

This results in a generalization bound which tells us how close the performance of the algorithm on the test set will be to the one on the training set. The key quantity here is the source-quality term, which captures the quality of the sources selected by the algorithm. To understand its impact, assume that $k$ and $\lambda$ are kept fixed. The bound has two terms, a fast one of the order of $1/m$ and a slow one that scales as $1/\sqrt{m}$ times the source-quality term. When $m$ goes to infinity and the source-quality term is non-zero, the slow term will dominate the convergence rate, giving us a rate of the order of $1/\sqrt{m}$. If the source-quality term is zero, the slow term completely disappears, giving us a so-called fast rate of convergence of the order of $1/m$. On the other hand, for any finite source-quality term of the order of $1/m$, we still have a rate of the order of $1/m$. Hence, this quantity governs the finite-sample and asymptotic behavior of the algorithm, predicting a faster convergence in both regimes when it is small. In other words, when the source and target tasks are similar, transfer learning facilitates a faster convergence of the empirical risk to the risk. A similar behavior was already observed in [6, 7].

However, one might ask what happens when the selected sources provide bad predictions. Since the predictor is truncated, the empirical risk converges to the risk at the standard rate of order $1/\sqrt{m}$, the same one we would have without any transferring from the source classifiers.

We now present another result that upper bounds the difference between the risk of the solution found by the algorithm and the empirical risk of the optimal solution to the $k$-Source Selection problem.

Theorem 3.

In addition to the conditions of Theorem 2, let the pair $(\mathbf{w}^{\star}, \boldsymbol{\beta}^{\star})$ be the optimal solution to (2), and assume that the sample correlation matrix satisfies the conditions of Corollary 1 below. Then, with high probability,

where

To analyze the implications of Theorem 3, let us consider a few interesting cases. Similarly to before, the source-quality quantity captures how well the source hypotheses are aligned with the target task and governs the asymptotic and finite-sample regimes. In fact, assume that for any finite $m$ there is at least one source hypothesis with small empirical risk; then the additional approximation term becomes negligible, that is, we get a generalization bound as if we were able to solve the original NP-hard problem in (2). In other words, if there are useful source hypotheses, we expect our algorithm to perform similarly to the one that identifies the optimal subset. This might seem surprising, but it is important to note that we do not actually care about identifying the correct subset of source hypotheses. We only care about how well the returned solution is able to generalize. On the other hand, if not even one source hypothesis has low risk, selecting the best subset of sources becomes meaningless. In this scenario, we expect the selection of any subset to perform in the same way, and thus the approximation guarantee does not matter anymore.

We now state the approximation guarantees of GreedyTL used to prove Theorem 3. In the following Corollary we show how far the optimal solution to the regularized subset selection is from the approximate one found by GreedyTL.

Corollary 1.

Assume that the observations and the labels are normalized. Then, the FR algorithm generates an approximate solution to the regularized subset selection problem that satisfies

Apart from being instrumental in the proof of Theorem 3, this statement also points to the secondary role of the regularization parameter $\lambda$: unlike in FR, we can control the quality of the approximate solution even if the features are correlated.

5 Experiments

In this section we present experiments comparing GreedyTL to several transfer learning and feature selection algorithms. As done previously, we considered the object detection task and, for all datasets, we left out one class considering it as the target class, while the remaining classes were treated as sources [9]. We repeated this procedure for every class and for every dataset at hand, and averaged the performance scores. In the following, we refer to this procedure as leave-one-class-out. We performed the evaluation for every class, reporting averaged class-balanced recognition scores.

We used subsets of Caltech-256 [43], Imagenet [2], SUN09 [28], and SUN-397 [44]. The largest setting considered involves 1000 classes, totaling over a million examples, with only a handful to a few tens of training examples available from the target domain. Our experiments aimed at verifying three claims:

  1. $\ell_2$-regularization is important when using greedy feature selection as a transfer learning scheme.

  2. In a small-sample regime GreedyTL is more robust than alternative feature selection approaches, such as $\ell_1$-regularization.

  3. The approximated randomized greedy algorithm improves the computational complexity of GreedyTL with no significant loss in performance.

5.1 Datasets and Features

We used the whole Caltech-256, a public subset of Imagenet containing 1000 classes, all the classes of SUN09 with a sufficient number of examples, which amounts to 819 classes, and the whole SUN-397 dataset containing 397 place categories. For Caltech-256 and Imagenet, we used as features the publicly available SIFT-BOW descriptors, while for SUN09 we extracted PHOG descriptors. In addition, for Imagenet and SUN-397, we also ran experiments using convolutional features extracted from the DeCAF neural network [1].

We composed a negative class by merging held-out classes (surrogate negative class). We did so for each dataset, and we further split it into a source negative and a target negative class, used respectively for training the sources and the target. The source classifiers were trained for each class in the dataset, combining all the positive examples of that class and the source negatives. The training sets for the target task were composed of a few positive examples and a fixed number of negative ones. Following [9], the testing set contained a fixed number of positive and negative examples for Caltech-256, Imagenet, and SUN-397. For the skewed SUN09 dataset we took one positive and a fixed number of negative training examples, with the rest left for testing. We drew each target training and testing set randomly several times, averaging the results over the draws.

5.2 Baselines

We chose a linear SVM to train the source classifiers [45]. This allows us to compare fairly with relevant baselines (like the Lasso) and is in line with recent trends in large-scale visual recognition and transfer learning [1]. The models were selected by cross-validation over the regularization parameter. In addition to the trained source classifiers, for Caltech-256 we also evaluated transfer from Classemes [23] and Object Bank [24], which are very similar in spirit to source classifiers. For Imagenet, we evaluated transfer from the outputs of the final layers of the DeCAF convolutional neural network [1].

We divided the baselines into two groups: the linear transfer learning baselines that do not require access to the source data, and the feature selection baselines. We included the second group due to GreedyTL's resemblance to a feature selection algorithm. We focus on linear baselines, since we are essentially interested in feature selection in high-dimensional spaces from few examples; in that scope, most feature selection algorithms, such as the Lasso, are linear. In particular, amongst the TL baselines we chose: No transfer: a Regularized Least Squares (RLS) algorithm trained solely on the target data; Best source: the performance of the best source classifier selected by its score on the testing set, a pseudo-indicator of what an HTL can achieve; AverageKT: obtained by averaging the predictions of all the source classifiers; RLS src+feat: RLS trained on the concatenation of the feature descriptors and the source classifier predictions; MultiKT $\ell_2$: the HTL algorithm by [9] selecting $\boldsymbol{\beta}$ in (1) by minimizing the leave-one-out error subject to an $\ell_2$ constraint; MultiKT $\ell_1$: similar to the previous one, but applying an $\ell_1$ constraint; DAM: an HTL algorithm by [46] that can handle selection from multiple source hypotheses, which was shown to perform better than the well-known and similar ASVM [47] algorithm. For the feature selection baselines we selected well-established algorithms involving a sparsity assumption: L1-Logistic: logistic regression with an $\ell_1$ penalty [14]; Elastic-Net: logistic regression with a mixture of $\ell_1$ and squared $\ell_2$ penalties [14]; Forward-Reg: forward regression, a classical greedy feature selection algorithm. When comparing our algorithms to the baselines on large datasets, we also consider a Domain Adaptive Dictionary Learning baseline [48]. This baseline represents the family of dictionary learning methods for domain adaptation and transfer learning: it learns a dictionary on the source domain and adapts it to the target one. However, in our setup the only access to the source data is through the source hypotheses. Therefore, the only way to construct source features is by using the source hypotheses on the target data points.

5.3 Results

Figure 1 shows the leave-one-class-out performance. In addition, Figures 1(b), 1(c), and 1(f) show the performance when transferring from off-the-shelf Classemes and Object Bank feature descriptors, and from DeCAF neural network activations.

(a) Caltech-256
(b) Caltech-256 (Classemes)
(c) Caltech-256 (Object Bank)
(d) SUN09
(e) Imagenet (1000 classes)
(f) Imagenet (sources are DeCAF outputs, 1000 classes)
Figure 1: Performance on Caltech-256, subsets of Imagenet (1000 classes) and SUN09 (819 classes). Averaged class-balanced accuracies in the leave-one-class-out setting.

Whenever a baseline algorithm has hyperparameters to tune, we chose the ones that minimize the leave-one-out error on the training set. In particular, we selected the regularization parameter from a grid of values. MultiKT and DAM have an additional hyperparameter, which we tuned in the same way. Kernelized algorithms were supplied with a linear kernel. Model selection for GreedyTL involves two hyperparameters, $k$ and $\lambda$. Instead of fixing $k$, we let GreedyTL select features as long as the decrease in regularized error between two consecutive steps is larger than a threshold.

(a) Caltech-256
(b) Imagenet
Figure 2: Baselines and number of additional noise dimensions sampled from a standard distribution. Averaged class-balanced recognition accuracies in the leave-one-class-out setting.

In particular, we set this threshold to a small constant, as in preliminary experiments we did not observe any gain in performance past that point. The parameter $\lambda$ is kept fixed; even better performance could be obtained by tuning it.

We see that GreedyTL dominates the TL and feature selection baselines throughout the benchmark, rarely appearing merely on par, especially in the small-sample regime. In addition, on two datasets out of three, it manages to identify a source classifier subset that performs comparably to or better than the Best source, that is, the single best classifier selected by its performance on the testing set. The significantly stronger performance achieved by GreedyTL w.r.t. FR, on all databases and in all settings, confirms the importance of the regularization in our formulation.

Notably, GreedyTL outperforms RLS src+feat, which is equivalent to GreedyTL selecting all the sources and features. This observation points to the fact that GreedyTL successfully manages to discard irrelevant feature dimensions and sources. To investigate this important point further, we artificially add increasing numbers of dimensions of pure noise sampled from a standard distribution. Figure 2 compares the feature selection methods to GreedyTL in terms of robustness to noise. Clearly, in the small-sample setting, GreedyTL is tolerant to a large amount of noise, while the $\ell_1$- and elastic-net-regularized methods suffer a considerable loss in performance. We also draw attention to the failure of the $\ell_1$-based feature selection methods and of MultiKT with $\ell_1$ regularization to match the performance of GreedyTL.

5.4 Approximated GreedyTL

As discussed in Section 4, the computational complexity of GreedyTL is linear in the number of source hypotheses and feature dimensions. In this section we assess the empirical performance of the approximated GreedyTL, whose cost is independent of the number of source hypotheses, implemented through the approximated greedy algorithm described at the end of Section 4. In the following we refer to this version of the algorithm as GreedyTL-59. Instead of considering all the transfer learning and feature selection baselines, we restrict the performance comparison to the strongest competitors. To show the power of the highly scalable approximated GreedyTL, we focus on the largest datasets in terms of the number of source hypotheses and feature dimensions: Imagenet and SUN-397. In the case of Imagenet, we consider the standard SIFT-BOW features as in the previous section and also DeCAF-7 convolutional features extracted from the seventh layer of the DeCAF neural network [1]. For SUN-397, we use the convolutional features of the Caffe network trained on the Places dataset [49], which was shown to perform particularly well in scene recognition tasks. Figure 3 summarizes the results.

(a) Imagenet (SIFT-BOW, 1000 classes)
(b) Imagenet (DECAF-7 features, 1000 classes)
(c) SUN-397 (Caffe-7 features, 397 classes)
Figure 3: Comparison of the approximated GreedyTL (GreedyTL-59) to GreedyTL with exhaustive search and the most competitive baselines on the three largest datasets considered in our experiments.

Surprisingly, approximated GreedyTL performs on par with the version with exhaustive search over the candidates, maintaining dominant performance in the small-sample regime on the Imagenet dataset. Yet, training times are dramatically improved, as can be seen from Table 1. On the SUN-397 dataset, however, GreedyTL performs on par with the top competitors.

Table 1: Training time in seconds for transferring to a single target class, reported for GreedyTL and GreedyTL-59 on Imagenet (SIFT-BOW), Imagenet (DECAF7) and SUN-397 (Caffe-7), for varying numbers of positive and negative training examples. Results are averaged over the splits.

5.5 Selected Source Analysis

In this section we take a look at the source hypotheses selected by GreedyTL. In particular, we make a qualitative assessment with the goal of seeing whether semantically related sources and targets are correlated, visualizing the selected sources and the magnitude of their weights. We do so by grouping sources and targets semantically according to the WordNet [50] distance, and plotting them as matrices with columns corresponding to targets, rows to sources, and entries to the weights of the sources. Figure 4 shows such matrices for GreedyTL when evaluated on Imagenet with DECAF7 features and averaged over all splits, for a smaller and a larger number of positive examples respectively. First we note that for certain supercategories there are clearly distinctive patterns, indicating cross-transfer within the same supercategory.

(a)
(b)
Figure 4: Semantic transferability matrix for GreedyTL evaluated on Imagenet (DECAF7 features). Columns correspond to targets and rows to sources. Stronger color intensity means larger source weight. Panel (a) corresponds to learning from 2 positive and 10 negative examples, while panel (b) uses more positive examples.

We compare those matrices to the ones originating from the strongest baseline, RLS (src+feat), in Figure 5. We notice a clear difference: the semantic patterns of GreedyTL are more distinctive in the small-sample setting (2+10), while the ones of RLS (src+feat) appear hazier. We argue that this is a consequence of the greedy selection procedure implemented by GreedyTL, where sources are selected incrementally, so that many coefficients are exactly zero. Due to the formulation of RLS (src+feat), however, even if a source is less relevant, its coefficient will most likely not be exactly zero.

(a)
(b)
Figure 5: Semantic transferability matrix for RLS (src+feat) evaluated on Imagenet (DECAF7 features).

It is also instructive to compare the exact GreedyTL to the approximated one. Figure 7 pictures the semantic matrices for the approximated version. We note that the approximated version appears to be slightly more conservative in the small-sample case (2+10), but overall the semantic patterns seem to match, thus emphasizing the quality of the solution provided by the approximated version and empirically corroborating the theoretical motivation behind the randomized selection.

Figure 6: GreedyTL evaluated on Imagenet (DECAF7 features): a closer look at some strongly related sources and targets.
(a)
(b)
Figure 7: Semantic transferability matrix for the approximated GreedyTL evaluated on Imagenet (DECAF7 features).

Finally, we take a closer look at some patterns of Figure 4(a), that is, the case of learning from only two positive examples. This new analysis is shown in Figure 6. We notice that even at this smaller scale, there are emergent semantic patterns.

6 Conclusions

In this work we studied the transfer learning problem involving hundreds of sources. The kind of transfer learning scenario we consider assumes no direct access to the source data, but only to the source hypotheses induced by them. In particular, we focused on the efficient selection and combination of source hypotheses to improve performance on the target task. We proposed a greedy algorithm, GreedyTL, capable of selecting relevant sources and feature dimensions at the same time. We verified these claims by obtaining the best results among the competing feature selection and TL algorithms on the Imagenet, SUN09 and Caltech-256 datasets. At the same time, the comparison against the non-regularized version of the algorithm clearly shows the power of our intuition. We support our empirical findings by showing theoretically that, under reasonable assumptions on the sources, the algorithm can learn effectively from few target examples.

Acknowledgments

This work was partially supported by the ERC grant 367076 -RoboExNovo (B.C. and I. K.).

References

  • [1] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Proceedings of The 31st International Conference on Machine Learning, pages 647–655, 2014.
  • [2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on, pages 248–255. IEEE, 2009.
  • [3] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1481–1488. IEEE, 2011.
  • [4] S. J. Pan and Q. Yang. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345–1359, 2010.
  • [5] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2066–2073. IEEE, 2012.
  • [6] I. Kuzborskij and F. Orabona. Stability and hypothesis transfer learning. In Proceedings of the 30th International Conference on Machine Learning, pages 942–950, 2013.
  • [7] S. Ben-David and R. Urner. Domain adaptation as learning with auxiliary information. New Directions in Transfer and Multi-Task - Workshop @ NIPS, 2013.
  • [8] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2252–2259. IEEE, 2011.
  • [9] T. Tommasi, F. Orabona, and B. Caputo. Learning categories from few examples with multi model knowledge transfer. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(5):928–941, 2014.
  • [10] I. Kuzborskij, F. Orabona, and B. Caputo. From N to N+1: Multiclass Transfer Incremental Learning. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3358–3365. IEEE, 2013.
  • [11] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 45–54. ACM, 2008.
  • [12] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Advances in Neural Information Processing Systems, pages 1921–1928, 2009.
  • [13] I. Kuzborskij, F. Orabona, and B. Caputo. Transfer learning through greedy subset selection. In Image Analysis and Processing - ICIAP 2015 - 18th International Conference, Proceedings, Part I, pages 3–14, 2015.
  • [14] T. Hastie, R. Tibshirani, and J. Friedman. The Elements Of Statistical Learning. Springer, 2009.
  • [15] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In Proceedings of the European Conference on Computer Vision, pages 213–226. Springer, 2010.
  • [16] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J.W. Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.
  • [17] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 2011.
  • [18] L. Duan, D. Xu, and S.-F. Chang. Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1338–1345. IEEE, 2012.
  • [19] C.-W. Seah, I. W.-H. Tsang, and Y.-S. Ong. Healing sample selection bias by source classifier selection. In Data Mining (ICDM), 2011 IEEE 11th International Conference on, pages 577–586. IEEE, 2011.
  • [20] T. Tommasi and B. Caputo. Frustratingly easy nbnn domain adaptation. In Computer Vision (ICCV), IEEE International Conference on, 2013.
  • [21] I. Kuzborskij, F. M. Carlucci, and B. Caputo. When Naïve Bayes Nearest Neighbours Meet Convolutional Neural Networks. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on (in press), 2016.
  • [22] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual domain adaptation: A survey of recent advances. Signal Processing Magazine, IEEE, 32(3):53–69, 2015.
  • [23] A. Bergamo and L. Torresani. Classemes and other classifier-based features for efficient object categorization. Pattern Analysis and Machine Intelligence, IEEE Transactions on, PP(99):1–1, 2014.
  • [24] L.-J. Li, H. Su, L. Fei-Fei, and E. P. Xing. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In Advances in neural information processing systems, pages 1378–1386, 2010.
  • [25] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1717–1724. IEEE, 2014.
  • [26] L. Jie, T. Tommasi, and B. Caputo. Multiclass transfer learning from unconstrained priors. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1863–1870. IEEE, 2011.
  • [27] N. Patricia and B. Caputo. Learning to learn, from transfer learning to domain adaptation: A unifying perspective. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1442–1449. IEEE, 2014.
  • [28] M. J. Choi, J. J. Lim, A. Torralba, and A. S. Willsky. Exploiting hierarchical context on a large database of object categories. In Computer vision and pattern recognition (CVPR), 2010 IEEE conference on, pages 129–136. IEEE, 2010.
  • [29] J. J. Lim, A. Torralba, and R. Salakhutdinov. Transfer learning by borrowing examples for multiclass object detection. In Advances in Neural Information Processing Systems 24, pages 118–126, 2011.
  • [30] A. Vezhnevets and V. Ferrari. Associative embeddings for large-scale knowledge transfer with self-assessment. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1987–1994. IEEE, 2014.
  • [31] M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1641–1648. IEEE, 2011.
  • [32] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM International Conference on Multimedia, pages 675–678. ACM, 2014.
  • [33] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Region-based convolutional networks for accurate object detection and segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, PP(99):1–1, 2015.
  • [34] Y. Ganin and V. S. Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, pages 1180–1189, 2015.
  • [35] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, pages 97–105, 2015.
  • [36] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1057–1064. ACM, 2011.
  • [37] T. Zhang. On the consistency of feature selection using greedy least squares regression. In Journal of Machine Learning Research, pages 555–568, 2009.
  • [38] P. Bühlmann and S. Van De Geer. Statistics for high-dimensional data: methods, theory and applications. Springer Science & Business Media, 2011.
  • [39] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain Adaptation with Multiple Sources. In Advances in neural information processing systems, volume 21, pages 1041–1048, 2009.
  • [40] A. Smola and B. Schölkopf. Learning with Kernels. MIT press, Cambridge, MA, USA, 2002.
  • [41] C. Domingo and O. Watanabe. MadaBoost: A modification of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000), June 28 - July 1, 2000, Palo Alto, California, pages 180–189, 2000.
  • [42] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In International Conference on Machine Learning, ICML ’00, pages 911–918, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
  • [43] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical report, Caltech, 2007.
  • [44] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In Computer vision and pattern recognition (CVPR), 2010 IEEE conference on, pages 3485–3492. IEEE, 2010.
  • [45] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
  • [46] L. Duan, I. W. Tsang, D. Xu, and T.-S. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 289–296. ACM, 2009.
  • [47] J. Yang, R. Yan, and A. G. Hauptmann. Cross-domain video concept detection using adaptive svms. In Proceedings of the 15th international conference on Multimedia, pages 188–197. ACM, 2007.
  • [48] Q. Qiu, V. M. Patel, P. Turaga, and R. Chellappa. Domain adaptive dictionary learning. In Proceedings of the European Conference on Computer Vision, pages 631–645. Springer, 2012.
  • [49] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, pages 487–495, 2014.
  • [50] G. A. Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41, 1995.
  • [51] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems, pages 2199–2207, 2010.
  • [52] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of machine learning. The MIT Press, 2012.

Appendix A Proofs

In this section we present the proofs of the theorems. For brevity, we will consider a truncated target predictor, clipped to the interval $[-1, 1]$. Consequently, the empirical risk of the truncated predictor cannot be greater than that of the untruncated one, since all the labels belong to $\{-1, 1\}$.

To prove Theorem 2 we need the following supplementary lemmas.

Lemma 1.

Let GreedyTL generate the solution, given the training set, the source hypotheses, and the hyperparameters $\lambda$ and $k$. Then we have that,

and also,

Proof.

Define . For any such that we have,

(3)

We have the last inequality due to Jensen’s inequality. The fact that (3) holds for any such choice proves the first statement.

We have the second statement from,

The last statement comes from,

(4)

Lemma 2.

Let the pair be the optimal solution to (3), given the training set, the source hypotheses, and the hyperparameters $\lambda$ and $k$. Then, the following holds,

Proof.

Define . For any such that we have,

(5)

We have the last inequality due to Jensen’s inequality. The fact that (5) holds for any proves the statement.

Proof of Theorem 2.

To prove the statement we will use the optimistic-rate Rademacher complexity bounds of [51]. In particular, we will have to do two things: upper-bound the worst-case Rademacher complexity of the hypothesis class of GreedyTL, and upper-bound the empirical risk of the members of that hypothesis class. Before proceeding, we spend a moment to define the loss class of GreedyTL, ensuring that it is consistent with the definition of [51],

(6)

Here, the first set is the class of truncated hypotheses, the second is the hypothesis class of GreedyTL, and the constant is the aforementioned bound on the empirical risk. We define the hypothesis class as,

In this definition we have used the fact shown in Lemma 1, that is, the constraint on the norm of the solution, which translates into a constraint on the hypothesis class. Now we are ready to analyze its complexity.

Recall that the worst-case Rademacher complexity is defined as

$$\mathcal{R}_m(\mathcal{H}) = \sup_{\mathbf{x}_1, \ldots, \mathbf{x}_m} \mathbb{E}_{\boldsymbol{\sigma}} \left[ \sup_{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^m \sigma_i h(\mathbf{x}_i) \right],$$

where each $\sigma_i$ is a Rademacher random variable, that is, $\mathbb{P}(\sigma_i = 1) = \mathbb{P}(\sigma_i = -1) = \tfrac{1}{2}$.

Let us focus on the analysis of the empirical Rademacher complexity, that is, the part inside the outer supremum. The truncation is $1$-Lipschitz, therefore by Talagrand’s contraction lemma [52] the complexity of the truncated class is at most that of the untruncated one. Hence, we now proceed with an upper bound on the latter. Then we have that,

(7)
(8)
(9)
(10)

To obtain (8) we have applied the Cauchy-Schwarz inequality to the inner product, then upper-bounded the norms with the constraints given by the definition of the class. To get (9) we have applied Jensen’s inequality w.r.t. the Rademacher variables. Next, we have bounded the norms of the features and sources, which are bounded by assumption. Finally, taking the supremum of (10) w.r.t. the data, we obtain,

Next, we upper bound the empirical risk of the members of the hypothesis class by Lemma 1. By plugging the bound on the Rademacher complexity and the bound on the empirical risk of (6) into Theorem 1 of [51], we obtain the statement. ∎

Next we prove the approximation guarantee for the Regularized Subset Selection (RSS), Corollary 1, which is needed for the proof of Theorem 3. First we note that the solution returned by FR enjoys the following guarantee in solving the Subset Selection problem.

Theorem 4 ([11]).

Assume that the observations and the label variable are normalized, and consider a subset of size $k$. Then, the FR algorithm generates an approximate solution to the Subset Selection problem such that,

This theorem is instrumental in stating our corollary.

Proof of Corollary 1.

In addition to the sample covariance matrix, define also the correlations with the label variable. Now, suppose that $S$ is the solution found by the forward regression algorithm, given its input. The empirical risk that the algorithm attains follows from the analytic solution to empirical risk minimization for a given support. In fact, we can upper-bound it right away using Theorem 4. But recall that our goal is to upper-bound the regularized empirical risk of the approximation to the RSS. This quantity is obtained via the unnormalized covariance matrix, therefore we cannot analyze it directly through Theorem 4. For this reason we rewrite it in terms of the normalized matrix. Under the assumptions of Theorem 4, let the optimal subset of size $k$ be given. Now we plug these quantities into Theorem 4 and proceed with algebraic transformations,

(11)
(12)

The last step is to relate the normalized solution to the unnormalized one, which amounts to a rescaling. Therefore we can set the scaling accordingly and obtain the statement. ∎