Multi-task and Lifelong Learning of Kernels

02/21/2016 · Anastasia Pentina et al. · Institute of Science and Technology Austria; University of Waterloo

We consider a problem of learning kernels for use in SVM classification in the multi-task and lifelong scenarios and provide generalization bounds on the error of a large margin classifier. Our results show that, under mild conditions on the family of kernels used for learning, solving several related tasks simultaneously is beneficial over single task learning. In particular, as the number of observed tasks grows, assuming that in the considered family of kernels there exists one that yields low approximation error on all tasks, the overhead associated with learning such a kernel vanishes and the complexity converges to that of learning when this good kernel is given to the learner.



1 Introduction

State-of-the-art machine learning algorithms are able to solve many problems sufficiently well. However, both theoretical and experimental studies have shown that in order to achieve solutions of reasonable quality they need access to extensive amounts of training data. In contrast, humans are known to be able to learn concepts from just a few examples. A possible explanation may lie in the fact that humans are able to reuse the knowledge they have gained from previously learned tasks for solving a new one, while traditional machine learning algorithms solve tasks in isolation. This observation motivates an alternative, transfer learning approach. It is based on the idea of transferring information between related learning tasks in order to improve performance.

There are various formal frameworks for transfer learning, modeling different learning scenarios. In this work we focus on two of them: the multi-task and the lifelong settings. In the multi-task scenario, the learner faces a fixed set of learning tasks simultaneously and its goal is to perform well on all of them. In the lifelong learning setting, the learner encounters a stream of tasks and its goal is to perform well on new, yet unobserved tasks.

For any transfer learning scenario to make sense (that is, to benefit from the multiplicity of tasks), there must be some kind of relatedness between the tasks. A common way to model such task relationships is through the assumption that there exists some data representation under which learning each of the tasks is relatively easy. The corresponding transfer learning methods aim at learning such a representation.

In this work we focus on the case of large-margin learning of kernels. We consider sets of tasks and families of kernels and analyze the sample complexity of finding a kernel in a kernel family that allows low expected error on average over the set of tasks (in the multi-task scenario), or in expectation with respect to some unknown task-generating probability distribution (in the lifelong scenario). We provide generalization bounds for empirical risk minimization learners for both settings. Under the assumption that the considered kernel family has finite pseudodimension, we show that by learning several tasks simultaneously the learner is guaranteed to have low estimation error with fewer training samples per task (compared to solving them independently). In particular, if there exists a kernel with low approximation error for all tasks, then, as the number of observed tasks grows, the problem of learning any specific task with respect to a family of kernels converges to learning when the learner knows a good kernel in advance: the multiplicity of tasks relieves the overhead associated with learning a kernel. Our assumption of finite pseudodimension of the kernel family is satisfied in many practical cases, such as families of Gaussian kernels with a learned covariance matrix, and linear and convex combinations of a finite set of kernels (see [4]). We also show that this is the case for families of all sparse combinations of kernels from a large "dictionary" of kernels.
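As a concrete illustration of one such family, the following minimal sketch (not from the paper) builds convex combinations of a finite set of base kernels; the Gaussian bandwidths and mixture weights are illustrative choices, and learning a kernel in this family amounts to choosing the weight vector.

```python
import math

# Illustrative base kernels: Gaussian (RBF) kernels on the real line
# with two hypothetical bandwidths.
def make_rbf(bandwidth):
    return lambda x, z: math.exp(-((x - z) ** 2) / (2 * bandwidth ** 2))

base_kernels = [make_rbf(0.5), make_rbf(2.0)]

def convex_combination(kernels, weights):
    """K(x, z) = sum_i w_i K_i(x, z) with w_i >= 0 and sum_i w_i = 1;
    a convex combination of kernels is again a kernel."""
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return lambda x, z: sum(w * k(x, z) for w, k in zip(weights, kernels))

# Picking the weights is what "learning a kernel" means for this family.
K = convex_combination(base_kernels, [0.3, 0.7])
```

Per [4], such combinations of finitely many kernels form a family of finite pseudodimension, which is exactly the setting the bounds below apply to.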

1.1 Related previous work

Multi-task and Lifelong Learning. A method for learning a common feature representation for linear predictors in the multi-task scenario was proposed in [9]. A similar idea was also used by [10] and extended to the lifelong scenario by [11]. A natural extension of the representation learning approach was proposed for kernel methods in [12, 13], where the authors described a method for learning a kernel that is shared between tasks as a combination of some base kernels using a maximum entropy discrimination approach. A similar approach, with additional constraints on the sparsity of kernel combinations, was used by [17]. These ideas were later generalized to the case where related tasks may use slightly different kernel combinations [14, 18], and successfully used in practical applications [15, 16].

Despite the intuitive attractiveness of automatically learning a suitable feature representation, compared to learning with a fixed, perhaps high-dimensional or simply irrelevant set of features, relatively little is known about its theoretical justification. A seminal systematic theoretical study of the multi-task/lifelong learning settings was done by Baxter in [6]. There the author provided sample complexity bounds for both scenarios under the assumption that the tasks share a common optimal hypothesis class. The possible advantages of these approaches according to Baxter's results depend on the behavior of complexity terms, which, however, due to the generality of the formulation, often cannot be inferred easily in a particular setting. Therefore, studying more specific scenarios using more intuitive complexity measures may lead to a better understanding of the possible benefits of the multi-task/lifelong settings, even if, in some sense, they can be viewed as particular cases of Baxter's result. Along that line, Maurer in [19] proved that learning a common low-dimensional representation in the case of lifelong learning of linear least-squares regression tasks is beneficial.

Multiple Kernel Learning. The problem of multiple kernel learning in the single-task scenario has been theoretically analyzed using different techniques. Using covering numbers, Srebro et al. in [4] have shown generalization bounds with an additive dependence on the pseudodimension of the kernel family. Another bound, with a multiplicative dependence on the pseudodimension, was presented in [3], where the authors used the Rademacher chaos complexity measure. Both results are expressed in terms of the pseudodimension of the kernel family and the sample size. By carefully analyzing the growth rate of the Rademacher complexity in the case of linear combinations of finitely many kernels with constraints on the weights, Cortes et al. in [2] have improved the above results. In particular, in the case of ℓ1 constraints, the bound from [4] scales polynomially with the total number k of kernels, while the bound from [2] scales only logarithmically with k. The fast-rate analysis of linear combinations of kernels using local Rademacher complexities was performed by Kloft et al. in [1].

In this work we utilize techniques from [4]. This allows us to formulate results that hold for any kernel family with finite pseudodimension, not only for the case of linear combinations, though at the price of a potentially suboptimal dependence on the number of kernels in the latter case. Moreover, the additive dependence on the pseudodimension is especially appealing for the analysis of the multi-task and lifelong scenarios, as it allows obtaining bounds in which that additional complexity term vanishes as the number of tasks grows; these bounds therefore clearly show the possible advantages of transfer learning.

We start by describing the formal setup and preliminaries in Sections 2.1 and 2.2 and provide a list of known kernel families with finite pseudodimension, including our new result for sparse linear combinations, in Section 2.3. In Section 3 we prove the generalization bound for the multi-task case and extend it to the lifelong setting in Section 4. We conclude with a discussion in Section 5.

2 Preliminaries

2.1 Formal Setup

Throughout the paper we denote the input space by X and the output space by Y = {−1, +1}. We assume that the learner (both in the multi-task and the lifelong learning scenarios) has access to n tasks represented by the corresponding training sets S_1, …, S_n, where each S_i consists of m i.i.d. samples from some unknown task-specific data distribution D_i over X × Y. In addition we assume that the learner is given a family 𝒦 of kernel functions¹ defined on X × X and uses the corresponding set of linear predictors for learning. Formally, for every kernel K ∈ 𝒦 with an associated feature map φ_K we define H_K to be the set:

  H_K = { x ↦ ⟨w, φ_K(x)⟩ : ‖w‖ ≤ 1 }   (1)

and H to be the union of them: H = ∪_{K ∈ 𝒦} H_K.

¹A function K: X × X → ℝ is called a kernel if there exist a Hilbert space ℋ and a mapping φ: X → ℋ such that K(x, x′) = ⟨φ(x), φ(x′)⟩ for all x, x′ ∈ X.
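In representer form, an element of H_K can be sketched as follows; the kernel, anchor points, and coefficients below are illustrative (not from the paper), and the final rescaling enforces the unit-norm constraint that defines H_K.

```python
import math

# A minimal sketch of a predictor in H_K: by the representer theorem,
# h(x) = sum_j a_j K(x_j, x), with RKHS norm ||h||^2 = a^T G a,
# where G is the Gram matrix of the anchor points.
def rbf(x, z, bw=1.0):
    return math.exp(-((x - z) ** 2) / (2 * bw ** 2))

anchors = [-1.0, 0.0, 1.0]   # illustrative data points
alpha = [0.5, -0.2, 0.4]     # illustrative coefficients

gram = [[rbf(a, b) for b in anchors] for a in anchors]
norm_sq = sum(alpha[i] * gram[i][j] * alpha[j]
              for i in range(3) for j in range(3))
# Rescale so that ||h|| = 1, as required by the definition of H_K.
scale = 1.0 / math.sqrt(norm_sq)
alpha = [scale * a for a in alpha]

def h(x):
    return sum(a * rbf(z, x) for a, z in zip(alpha, anchors))
```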

In the multi-task scenario the data distributions D_1, …, D_n are assumed to be fixed and the goal of the learner is to identify a kernel that performs well on all of them. Therefore we would like to bound the difference between the expected error rate over the tasks:

  er(K) = (1/n) Σ_{i=1}^n inf_{h ∈ H_K} Pr_{(x,y)∼D_i} ( y h(x) ≤ 0 )   (2)

and the corresponding empirical margin error rate:

  êr_γ(K) = (1/n) Σ_{i=1}^n inf_{h ∈ H_K} (1/m) Σ_{j=1}^m 𝟙{ y_{ij} h(x_{ij}) < γ }   (3)

Alternatively the learner may be interested in identifying a particular predictor for every task. If we define 𝓗 = ∪_{K ∈ 𝒦} (H_K)ⁿ and 𝐡 = (h_1, …, h_n), then it means finding some 𝐡 ∈ 𝓗 with low generalization error:

  er(𝐡) = (1/n) Σ_{i=1}^n Pr_{(x,y)∼D_i} ( y h_i(x) ≤ 0 )   (4)

based on its empirical margin performance:

  êr_γ(𝐡) = (1/(nm)) Σ_{i=1}^n Σ_{j=1}^m 𝟙{ y_{ij} h_i(x_{ij}) < γ }   (5)

However, due to the following inequality, it is enough to bound the probability of a large estimation error in the second case, and a bound for the first one follows immediately:

  sup_{K ∈ 𝒦} ( er(K) − êr_γ(K) ) ≤ sup_{𝐡 ∈ 𝓗} ( er(𝐡) − êr_γ(𝐡) ).

For the lifelong learning scenario we adopt the notion of task environment proposed in [6] and assume that there exists a set of possible data distributions (i.e. tasks) and that the observed tasks are sampled from it i.i.d. according to some unknown distribution Q. The goal of the learner is to find a kernel that works well on future, yet unobserved tasks from the environment. Therefore we would like to bound the probability of large deviations between the expected error rate on new tasks, given by:

  er_Q(K) = E_{D ∼ Q} inf_{h ∈ H_K} Pr_{(x,y)∼D} ( y h(x) ≤ 0 )   (6)

and the corresponding empirical margin error rate êr_γ.

In order to obtain the generalization bounds in both cases we employ the technique of covering numbers.

2.2 Covering numbers and Pseudodimensions

In this subsection we describe the types of covering numbers we will need and establish their connections to pseudodimensions of kernel families.

Definition 1

A subset A′ ⊆ A is called an ε-cover of A with respect to a distance measure d if for every a ∈ A there exists a′ ∈ A′ such that d(a, a′) ≤ ε. The covering number N(ε, A, d) is the size of the smallest ε-cover of A.
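For a finite set, a small ε-cover can be built greedily; this sketch (with an illustrative point set, distance, and ε, none of them from the paper) returns a subset whose ε-balls cover all points.

```python
# Greedy construction of an eps-cover (Definition 1) for a finite set:
# scan the points and keep each one that is farther than eps from
# every cover element collected so far.
def greedy_cover(points, eps, dist):
    cover = []
    for p in points:
        if all(dist(p, c) > eps for c in cover):
            cover.append(p)
    return cover

# Illustrative point set on the real line with d(a, b) = |a - b|.
points = [0.0, 0.1, 0.2, 1.0, 1.05, 2.0]
cover = greedy_cover(points, eps=0.25, dist=lambda a, b: abs(a - b))
```

By construction every point is within ε of some cover element, so the size of the returned set upper-bounds the covering number N(ε, A, d).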

To derive bounds for the multi-task setting we will use covers of the predictor class with respect to a metric associated with a sample:

(7)

The corresponding uniform covering number is given by considering all possible samples :

(8)

In contrast, for the lifelong learning scenario we will need covers of the kernel family with respect to a probability distribution. For any probability distribution over , we denote its projection on by and define the following distance between the kernels:

(9)

Similarly, for any set of distributions we define:

(10)

We denote the minimal size of the corresponding ε-cover of a set of kernels, and the corresponding uniform covering number, analogously to the sample-based case.

In order to make the guarantees given by the generalization bounds that we provide more intuitively appealing, we state them using a natural measure of complexity of kernel families, namely, the pseudodimension [4]:

Definition 2

The class 𝒦 pseudo-shatters the set of pairs of points (x_1, x′_1), …, (x_d, x′_d) if there exist thresholds t_1, …, t_d ∈ ℝ such that for any signs b_1, …, b_d ∈ {−1, +1} there exists K ∈ 𝒦 with sign(K(x_i, x′_i) − t_i) = b_i for all i. The pseudodimension d_𝒦 is the largest d such that there exists a set of d pairs pseudo-shattered by 𝒦.

To do so, we develop upper bounds on the covering numbers we use in terms of the pseudodimension of the kernel family 𝒦. First, we prove the result that will be used in the multi-task setting:

Lemma 1

For any set of kernels 𝒦 bounded by B (i.e. K(x, x) ≤ B for all K ∈ 𝒦 and all x ∈ X) with pseudodimension d_𝒦 the following inequality holds:

In order to prove this result, we first introduce some additional notation. For a sample S we define the following distance between two functions:

(11)

Then the corresponding uniform covering number is:

(12)

We also define a distance between kernels with respect to a sample, with the corresponding uniform covering number:

In contrast, in [4] the distance between two kernels is defined based on a single sample of size :

(13)

and the corresponding covering number is denoted accordingly. Note that this definition is closely related to ours, and therefore, by Lemma 3 in [4]:

(14)

for any kernel family bounded by B with pseudodimension d_𝒦. Now we can prove Lemma 1:

Proof (of Lemma 1)

Fix . Define and . Let be an -net of with respect to . For every and every let be an -net of with respect to . Now fix some . Then there exists a kernel such that . Therefore there exists a kernel such that for every . By Lemma 1 in [4]

for some unit norm vector

for every . Therefore for we obtain that:

In addition, for every there exists such that . Finally, if we define , we obtain:

The above shows that we have constructed an ε-net of H with respect to the sample-based distance. Now the statement follows from (14) and the fact that for any H_K with a B-bounded kernel K ([4, 8]):

(15)

Analogously we develop an upper bound on the covering number , which we will use for the lifelong learning scenario:

Lemma 2

There exists a constant C such that for any kernel family 𝒦 bounded by B with pseudodimension d_𝒦:

(16)

The proof of this result is based on the following lemma that connects sample-based and distribution-based covers of kernel families (for the proof see Appendix 0.A):

Lemma 3

For any probability distribution over X × Y and any B-bounded set of kernels with pseudodimension d_𝒦 there exists a sample of size for some constant , such that for every if , then (where is the same as , but all expectations over are substituted by empirical averages over ).

Proof (of Lemma 2)

Fix some set of probability distributions . For every denote a sample described by Lemma 3 by . Let be an -cover of with respect to , where and . Then the following chain of inequalities holds:

Consequently, by Lemma 3 in [4]:

(17)

It remains to show that is an ε-cover of with respect to . By definition, for every there exists such that . Therefore for every :

Consequently, by Lemma 3, for all . ∎

2.3 Pseudodimensions of various families of kernels

In [4] the authors have shown upper bounds on the pseudodimension of several families of kernels:

  • convex or linear combinations of k fixed kernels have pseudodimension at most k

  • Gaussian families with a learned covariance matrix have pseudodimension at most quadratic in the input dimension

  • Gaussian families with a learned low-rank covariance have pseudodimension bounded in terms of the maximum rank k of the covariance matrix
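The Gaussian family with a learned covariance can be sketched as follows; the positive semi-definite matrix A below is an illustrative stand-in for a learned (inverse) covariance, not the output of any learning procedure.

```python
import math

# Sketch of a Gaussian kernel family with a learned (inverse) covariance:
# K_A(x, z) = exp(-(x - z)^T A (x - z)) for a positive semi-definite A.
# Choosing A selects one kernel from the family.
def gaussian_kernel(A):
    def k(x, z):
        d = [xi - zi for xi, zi in zip(x, z)]
        quad = sum(d[i] * A[i][j] * d[j]
                   for i in range(len(d)) for j in range(len(d)))
        return math.exp(-quad)
    return k

A = [[1.0, 0.2], [0.2, 0.5]]  # illustrative PSD matrix
K = gaussian_kernel(A)
```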

Here we extend their analysis to the case of sparse combinations of kernels.

Lemma 4

Let K_1, …, K_N be kernels and let 𝒦_k be the family of all linear combinations of at most k of them. Then:

  d_{𝒦_k} = O(k log N)   (18)
Proof

For every kernel K define a function f_K:

  f_K(x, x′) = K(x, x′)   (19)

and denote the set of such functions for all K ∈ 𝒦_k by F.

For every index set I of size at most k define F_I to be the set of all linear combinations of the corresponding functions f_{K_i}, i ∈ I. Then F is the union of the F_I over all such index sets. Moreover, there are at most (N choose k) possible sets of indices I, so F can be seen as a union of at most (N choose k) classes of bounded VC-dimension. Since the VC-dimension of a union of classes of VC-dimension at most d grows only logarithmically with the number of classes, the statement of the lemma follows. ∎
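The counting step above can be checked numerically; the values of N and k below are illustrative.

```python
import math

# Numeric check of the counting step in the proof of Lemma 4:
# a k-sparse combination chooses its support among C(N, k) <= N^k
# index sets, so the log of the number of sub-families is at most
# k * log(N), which is what drives the O(k log N) bound.
N, k = 1000, 5
num_index_sets = math.comb(N, k)
assert num_index_sets <= N ** k
assert math.log(num_index_sets) <= k * math.log(N)
```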

3 Multi-task Kernel Learning

We start by formulating the result in terms of the covering number:

Theorem 3.1

For any , if , we have that:

(20)
Proof

We utilize the standard three-step procedure (see Theorem 10.1 in [8]). If we denote:

then according to the symmetrization argument . Therefore, instead of bounding the probability of , we can bound the probability of .

Next, we define to be a set of permutations on the set such that for every and . Then .

Now we proceed with the last step: reduction to a finite class. Fix and the corresponding . Let be a -cover of with respect to and fix . By definition there exists such that , where . We can rewrite it as:

If we denote by the function in the cover corresponding to , then the following inequalities hold:

By combining them with the previous inequality we obtain that:

Now, if we define the following indicator: , then:

where the σ_j are independent random variables distributed uniformly over {−1, +1}. Then the corresponding summands are independent random variables that take values between −1 and 1 and have zero mean. Therefore, by Hoeffding's inequality:

By noting that , we conclude the proof of Theorem 3.1. ∎

By using the same technique as for proving Theorem 3.1, we can obtain a lower bound on the difference between the empirical error rate and the expected error rate with double margin:

(21)
Theorem 3.2

For any , if , the following holds:

(22)

Now, by combining Theorems 3.1 and 3.2 and Lemma 1, we can state the final result for the multi-task scenario in terms of pseudodimensions:

Theorem 3.3

For any probability distributions over , any kernel family , bounded by with pseudodimension , and any fixed , for any , if , then, for a sample generated by :

(23)

where

(24)

Discussion: The most significant implications of this result are for the case where there exists some kernel that has low approximation error for each of the tasks (this is what makes the tasks "related" and, therefore, the multi-task approach advantageous). In such a case, the kernel that minimizes the average error over the set of tasks is a useful kernel for each of these tasks.

  1. Maybe the first point to note about the above generalization result is that as the number of tasks n grows, while the number of examples per task m remains constant, the error bound behaves like the bound needed to learn with respect to a single kernel. That is, if a learner wishes to learn some specific task, and all the learner knows is that in the big family of kernels there exists some useful kernel for that task that is also good on average over the other tasks, then the training samples from the other tasks allow the learner to learn as if it had access to a specific good kernel in advance.

  2. Another worthwhile consequence of the above theorem is that it shows the usefulness of an empirical risk minimization approach. Namely,

    Corollary 1

    Let be a minimizer, over , of the empirical -margin loss, . Then for any (and in particular for a minimizer over of the true -loss ):

    Proof

    The result is implied by the following chain of inequalities:

    where the first and last inequalities follow from the above theorem and the middle one follows from the definition of an empirical risk minimizer. ∎
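The ERM rule behind Corollary 1 can be sketched on toy data: among a finite dictionary of candidate kernels, pick the one minimizing the average empirical margin error across tasks. The tasks, bandwidth dictionary, margin value, and the simple kernel-mean predictor below are all illustrative stand-ins for the paper's large-margin learners.

```python
import math

def rbf(x, z, bw):
    return math.exp(-((x - z) ** 2) / (2 * bw ** 2))

# Two toy binary tasks: lists of (x, y) with y in {-1, +1}.
tasks = [
    [(-2.0, -1), (-1.5, -1), (1.5, 1), (2.0, 1)],
    [(-1.0, -1), (-0.8, -1), (0.8, 1), (1.2, 1)],
]

def margin_error(task, bw, gamma=0.3):
    # Illustrative predictor: kernel mean h(x) = (1/m) sum_j y_j K(x_j, x);
    # a point counts as an error if its margin y*h(x) falls below gamma.
    m = len(task)
    errs = 0
    for x, y in task:
        h = sum(yj * rbf(xj, x, bw) for xj, yj in task) / m
        errs += 1 if y * h < gamma else 0
    return errs / m

# Multi-task ERM over a finite kernel dictionary: minimize the
# average empirical margin error across all tasks.
bandwidths = [0.1, 1.0, 10.0]
avg_err = {bw: sum(margin_error(t, bw) for t in tasks) / len(tasks)
           for bw in bandwidths}
best_bw = min(avg_err, key=avg_err.get)
```

On this toy data the intermediate bandwidth wins: the very narrow kernel yields tiny margins and the very wide one washes the two classes together, so averaging the margin error across tasks selects a kernel that is useful for both.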

4 Lifelong Kernel Learning

In this section we generalize the results of the previous section to the case of lifelong learning in two steps. First, note that by using the same arguments as for proving Theorem 3.1 we can obtain a bound on the difference between and:

(25)

Therefore the only thing that is left is a bound on the difference between and .

We will use the following notation:

and proceed in a way analogous to the proof of Theorem 3.1. First, if we define:

then according to the symmetrization argument .

Now, if we define to be a set of permutations on a set , such that for all , we obtain that , if . So, the only thing that is left is reduction to a finite class.

Fix and denote by a set of kernels, such that for every there exists a such that:

(26)

Then, if is such that , then the corresponding satisfies . Therefore: