## I Introduction

Single-task learning treats tasks separately, ignoring the intrinsic relatedness between them. Multi-task learning avoids this drawback by jointly modeling the interdependence between different tasks: the performance of all tasks is expected to improve with the additional information provided by the relationships between tasks. Considering these merits, multi-task learning has been applied to various research areas, for example, web image search [1], video tracking [2], disease prediction [3], and relative attributes learning [4].

MTL assumes that tasks have some intrinsic relatedness. Consequently, proper measurement of task relatedness benefits the learning of all tasks and improves their performance, whereas improper relatedness measurement introduces noise and degrades performance. Recently, researchers have given substantial attention to measuring task relatedness. Existing algorithms mainly use two approaches: sharing common models/parameters [5, 6, 7, 8, 9, 10] and sharing common feature representations [11, 12, 13, 14, 15, 16, 17]. MTL that shares common models/parameters (multi-task model learning) assumes that the models of different tasks have something in common in their parameters. MTL that shares common feature representations (multi-task feature learning) assumes that related tasks share a subset of features.

Both multi-task model learning and multi-task feature learning suffer from their own defects: each considers only one aspect of task relatedness. For example, multi-task model learning captures relatedness directly in the original feature space. However, considering the noise and complexity of features in real-world datasets, task relatedness measured on the original features may not be obvious, and the performance of multi-task model learning may degrade as a result. Multi-task feature learning avoids this drawback by learning a common subset of feature representations, but it ignores the relatedness between model parameters. In this paper, we develop a new multi-task model and feature joint learning method that can successfully explore task relatedness. Our model learns a common feature space shared by the different tasks in which the relatedness between tasks is maximized, so that the common models can be better learned jointly.

The objective function is formulated as a non-convex problem, and an alternating algorithm is proposed to optimize it. Additionally, we present a sound theoretical analysis to prove the superior ability of our joint model and feature learning method to measure task relatedness. Various experimental results are reported to demonstrate the effectiveness of the proposed method, especially on tasks with shared features or shared models.

The remainder of this paper is organised as follows. In Section II, we briefly review previous multi-task learning works. In Section III, we give a detailed derivation and optimization algorithm of our proposed method. Section IV derives a theoretical error bound to demonstrate the merits of our proposed algorithm. Experimental results are reported in Section V, with conclusions and future work given in Section VI.

## II Related work

In recent years, researchers have paid much attention to multi-task learning. Compared to single-task learning, its effectiveness has been demonstrated through theoretical analysis in many works [18, 19, 20, 21, 22]. For example, Baxter proposed a novel inductive bias learning method [18] and derived explicit bounds demonstrating that learning multiple related tasks within an environment can achieve substantially better generalization than learning a single task. Ben-David and Schuller proposed a useful notion of task relatedness [19] to derive tighter generalization error bounds. Maurer et al. applied dictionary learning and sparse coding to multi-task learning and introduced a generalization bound by measuring the hypothesis complexity [20]. Ando and Zhang assumed that all tasks share a common structure and showed that the shared parameters can be reliably estimated when the number of tasks is large [21].

With the more extensive application of multi-task learning, some single-task learning algorithms have been extended to the multi-task setting. For example, some works extended Bayesian methods to multi-task learning under the assumption that the models of the tasks are indeed related [23, 24]. Hierarchical Bayesian models can be learned by sharing parameters as high-level hyperparameters. Task relatedness can also be captured by deep neural networks, for example by sharing nodes or layers of the network [25]. As one of the most popular single-task learning methods, SVM has been studied in many multi-task learning works [5, 26, 27, 12, 28, 29, 30]. Jebara proposed a multi-task learning method using maximum entropy discrimination based on the large-margin SVM [12]. Zhu et al. proposed an infinite latent SVM for multi-task learning [26], which combines the large-margin idea with a nonparametric Bayesian model to discover latent features.

The most difficult aspect of multi-task learning is measuring the relatedness between tasks while preserving the individual characteristics of each task. Multi-task model learning and multi-task feature learning are the two main categories of multi-task learning methods. For multi-task model learning, Xue et al. formulated two different forms of the MTL problem using a Dirichlet-process-based statistical model and developed efficient algorithms to solve them [6]. Evgeniou and Pontil introduced a multi-task learning model that minimizes a regularized objective similar to support vector machines [5]; this work assumed that all tasks share a mean hyperplane, each with its own particular offset. Rai and Daume proposed a nonparametric Bayesian model [8] to capture task relatedness under the assumption that the parameters share a latent subspace, whose dimensionality is automatically inferred by the model. For multi-task feature learning, Argyriou et al. developed a convex MTL method for learning features shared between tasks [11]; the learned common features are regularized by an L21-norm to control the dimensionality of the latent feature space. Jebara proposed a general multi-task learning framework using large-margin classifiers, discussing three scenarios: multi-task feature learning, multi-task kernel combination, and graphical multi-task models [12]. To improve the efficiency of multi-task learning on high-dimensional problems, a method was proposed that jointly learns low-dimensional features of the tasks [13].

Recently, the defects of the relatedness assumptions in traditional multi-task learning methods have been widely discussed. The assumption that all tasks are related through shared parameters or shared features is often unsuitable for real-world multi-task learning problems. Considering these defects, a number of works [31, 32, 33, 34] have been proposed to improve the performance of multi-task learning. For example, Kang et al. learned a shared feature representation across tasks while simultaneously clustering the tasks into different groups [31]. Chen et al. proposed a robust multi-task learning method that learns multiple tasks jointly while simultaneously identifying outlier tasks [33]. Gong et al. proposed another robust multi-task feature learning method [32], similar to [33], which decomposes the weight matrix into two components and imposes a group Lasso penalty on both: the penalty on the rows of the first component captures the features shared between relevant tasks, while the same penalty on the columns of the second component identifies outlier tasks. Another work [34] proposed a dirty model for multi-task learning using an idea similar to [32, 33]: it applies both block-sparse regularization and element-wise sparsity regularization to capture the true features used by each task, where the block-sparse regularization learns the features shared across tasks and the element-wise regularization allows some features to be used by some tasks but not all. These works fall into two categories: task clustering and outlier-task detection. However, they still consider only one aspect of task relatedness: either shared features or shared parameters. In this paper, we consider shared features and shared parameters simultaneously to overcome these problems. Task relatedness can be better modeled in our multi-task learning framework, especially when both feature relatedness and model relatedness exist between tasks.

## III Multi-task model and feature joint learning

In this section, we introduce our proposed multi-task learning method in detail. We first present the objective optimization problem and then convert the non-convex problem into a convex formulation. An efficient optimization algorithm is given at the end of the section.

The idea of our proposed method is illustrated in Figure 1. There are three related tasks in the original feature space. However, the interdependence between them is not as strong as assumed in multi-task learning, due to the noise and complexity of the feature representation. This may lead to poor performance of multi-task learning in the original feature space. In our work, we transform the original feature space into a new one, in which the different tasks are tightly related and can share a common hyperplane . The specific characteristic of each task is represented by an offset .

### III-A Non-convex objective

Suppose we have different tasks, and each task is associated with a dataset which can be formulated as follows:

where is the number of data samples in task , and and are the corresponding feature representation and output of sample in task . In this work, we consider learning different linear functions to predict the output given the input feature representation in each task:

(1)

where . Single-task learning methods treat these linear functions as separate tasks and utilize only the data from each individual task. Consequently, they ignore the interdependence between tasks, which may provide valuable additional information about the distribution of the training data. Considering the drawbacks of single-task learning, multi-task learning was proposed to uncover the relatedness between tasks and improve the performance of all tasks. The improvement is expected to be especially obvious with a small amount of training data, since the relatedness between tasks can then provide more additional information.
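As an illustration of the single-task baseline described above, the following sketch fits one ridge-regression model per task on synthetic data, ignoring any relatedness between the tasks. The data sizes, dimensions, and regularization value are hypothetical and only serve to make the example runnable.

```python
import numpy as np

def fit_single_task_ridge(Xs, ys, lam=1.0):
    """Fit one ridge-regression model per task, ignoring task relatedness.

    Xs, ys: lists of per-task data matrices (n_t x d) and targets (n_t,).
    Returns a (d x T) weight matrix, one column per task.
    """
    d = Xs[0].shape[1]
    W = np.zeros((d, len(Xs)))
    for t, (X, y) in enumerate(zip(Xs, ys)):
        # closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y
        W[:, t] = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return W

# toy example: three tasks whose true weights share a common component
rng = np.random.default_rng(0)
w_shared = rng.normal(size=5)
Xs, ys = [], []
for _ in range(3):
    X = rng.normal(size=(30, 5))
    w_t = w_shared + 0.1 * rng.normal(size=5)   # small task-specific offset
    Xs.append(X)
    ys.append(X @ w_t + 0.01 * rng.normal(size=30))

W = fit_single_task_ridge(Xs, ys, lam=0.1)
```

Each column of `W` is learned in isolation; the shared component of the true weights is never exploited, which is exactly the drawback multi-task learning addresses.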

In this work, we first learn a feature mapping matrix to get better relatedness between tasks in the new feature space:

(2)

Note that . Here, is the shared central hyperplane in the new feature space, and represents the offset of task , which maintains its own characteristics. The learned feature mapping matrix is supposed to guarantee the assumption that all tasks share a central hyperplane, each with its own offset, in the new feature space. With the above formulation, our objective function for multi-task learning can be formulated as:

(3)

where and represents a vector of all ones. Noting that , we can reformulate problem (3) with as:

(4)

The third regularization term in problem (4) is the square of the L2-norm of the vector , which measures the smoothness and complexity of the central hyperplane. The second regularization term is the square of the L21-norm of the matrix , which can be explicitly expressed as , where denotes the -th row of the matrix . The L21-norm guarantees that all tasks share a subset of common features and that the shared features are sparse. The first term is the loss function, which measures the error between the ground truth and the predicted results.
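The row-structured penalty above can be made concrete. The following minimal sketch computes the L21-norm (the sum of the Euclidean norms of the rows) of a hypothetical row-sparse weight matrix; penalizing this quantity drives entire rows to zero, which is what forces all tasks (columns) to use the same small subset of features (rows).

```python
import numpy as np

def l21_norm(A):
    """L2,1-norm: the sum of the Euclidean norms of the rows of A.

    Penalizing this drives entire rows of A to zero, so every task
    (column) ends up using the same small subset of features (rows).
    """
    return np.sum(np.linalg.norm(A, axis=1))

# a row-sparse matrix: only features 0 and 2 are used, by all three tasks
A = np.array([[3.0, 4.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 5.0],
              [0.0, 0.0, 0.0]])
print(l21_norm(A))   # row norms are 5, 0, 5, 0 -> 10.0
```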

There are three main differences between our proposed formulation and the one proposed in [11]. First, the learning ability of the feature mapping matrix is limited by its orthogonality; this limitation is ignored by the method in [11]. It is more reasonable to share a subset of common features around than around a fixed point at the origin, and the proposed method avoids the limitation of the orthogonal matrix by selecting features around the more robust point . Second, the proposed method considers task relatedness in both the features and the model parameters, whereas the method in [11] only uncovers the shared common features across tasks and thus loses the information between related models: the tasks are treated independently when learning their model parameters in the learned feature space. Third, it is more challenging to solve an optimization problem that learns both common features and common model parameters.

The proposed objective function is non-convex. To briefly show this, we give a counterexample: assuming that all the variables are scalars, it is easy to show that the objective is non-convex. Since it is usually difficult to obtain an optimal solution of a non-convex objective, we instead convert it into an equivalent convex problem, and an alternating algorithm is proposed to solve it in the following sections.

### III-B Conversion to an equivalent convex optimization problem

To simplify the optimization, the non-convex optimization problem (4) is converted to an equivalent convex problem in this section.

###### Theorem 1.

The non-convex problem (4) can be equivalently converted to a convex optimization problem as follows:

s.t.

Suppose is an optimal solution of the convex problem (1); then the corresponding optimal solution of the non-convex problem (4) can be formulated as , where the columns of form an orthonormal basis of eigenvectors of . Conversely, suppose (, , ) forms an optimal solution of the non-convex problem (4); then the corresponding optimal solution of the convex problem (1) can be formulated as , and .

Note that and indicates that is a positive semidefinite symmetric matrix. represents a set of vectors for some . denotes the pseudoinverse of the matrix . is a diagonal matrix whose diagonal elements are formed by the vector .

To show the convexity of problem (1), an additional function is introduced as which can be explicitly formulated as:

(6)

With this additional function, problem (1) is equivalent to minimizing the sum of the additional functions plus the loss term and the term in problem (1), subject to the trace constraints. Its correctness is guaranteed by the equivalence between the constraints and the constraint . The loss term in problem (1) is the sum of the loss functions , each of which is convex as the composition of a convex function with a linear map. Additionally, the term and the trace constraint are also convex. To show the convexity of problem (1), it therefore suffices to show that is convex; the details can be found in [11].

### III-C An optimization algorithm

In this section, an alternating optimization algorithm is proposed to optimize problem (1) with respect to the parameters and . The final optimal solution of problem (4) can then be obtained according to Theorem 1.

We first optimize problem (1) with respect to the parameters by fixing the matrix . In [11], the corresponding problem can be separated into independent problems, one per task, for a fixed . Compared with the optimization in [11], optimizing our objective is more challenging because of the shared parameter : the problem cannot be viewed as independent per-task optimization problems. Our objective can be formulated as:

s.t.

The loss function used in our work is the least-squares loss, the same as that used in previous works. To solve problem (III-C), we introduce some additional variables. Note that denotes the data matrix of task , and the corresponding output of task is represented as . denotes the total number of data points from all tasks:

Let and . denotes a block diagonal matrix and its diagonal entries are data from the tasks. denotes the outputs of all data belonging to the different tasks. Note , and .

Problem (III-C) can be reformulated as

(8)

Note that with and denotes an identity matrix. We can reformulate problem (8) as an L2-norm regularized regression problem with some additional variables. Note that , and let . Then, and . and . We have

(9)

Note that and . Consequently, the above problem is reformulated as the following standard L2-norm regularized problem:

(10)

The solution can be explicitly expressed as the following:

(11)

Additionally, we need to optimize problem (1) with respect to matrix by fixing parameters . The objective can be simply formulated as the following:
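Since the symbols of the updates are not reproduced here, the following is only a schematic sketch of the alternating scheme, under the assumption that it follows the usual pattern of convex multi-task feature learning [11]: with the shared matrix fixed, each task reduces to a generalized ridge solve, and with the weights fixed, the shared matrix has a closed-form update. All variable names and the toy data are hypothetical.

```python
import numpy as np

def sqrtm_psd(M):
    # symmetric PSD matrix square root via eigendecomposition
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def alternating_mtl(Xs, ys, gamma=1.0, n_iter=25, eps=1e-8):
    """Alternate between per-task generalized ridge solves (D fixed)
    and a closed-form update of the shared matrix D (W fixed)."""
    d = Xs[0].shape[1]
    D = np.eye(d) / d                          # feasible start: trace(D) = 1
    for _ in range(n_iter):
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        # Step 1: with D fixed, each task is a generalized ridge problem
        W = np.column_stack([
            np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)
            for X, y in zip(Xs, ys)])
        # Step 2: with W fixed, D = (W W^T)^{1/2} / trace, as in [11]
        C = sqrtm_psd(W @ W.T)
        D = C / np.trace(C)
    return W, D

# hypothetical toy tasks sharing a common weight direction
rng = np.random.default_rng(1)
w0 = rng.normal(size=6)
Xs, ys = [], []
for _ in range(4):
    X = rng.normal(size=(40, 6))
    ys.append(X @ (w0 + 0.05 * rng.normal(size=6)) + 0.01 * rng.normal(size=40))
    Xs.append(X)

W, D = alternating_mtl(Xs, ys, gamma=0.1)
```

The trace-one normalization keeps the shared matrix feasible at every iteration, and the small ridge term `eps` guards the matrix inverse when the shared matrix becomes nearly low rank.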

## IV Theoretical Analysis

To better understand the merits of our method, a generalization bound of the non-convex problem (4) is analysed in this section. We first reformulate the problem by converting the two soft constraints and into hard ones as follows:

s.t.

The equivalence between problem (IV) and problem (4) is demonstrated in [35], and and are of order . Denoting , the above problem can be formulated as follows:

s.t.

Consequently, we analyse the problem with hard constraints instead. We derive a generalization bound of the proposed problem in a way similar to [36], setting :

s.t.

To simplify the analysis of the upper bound of the generalization error, the loss function is assumed to satisfy the following Lipschitz-like condition, which has also been used in [37].

###### Definition 1.

A loss function is -admissible with respect to the hypothesis class if there exists a , where denotes the set of non-negative real numbers, such that for any two hypotheses and example , the following inequality holds:

We can have:

###### Theorem 2.

Suppose is the upper bound of the loss function , such that , and the loss function is -admissible with respect to the linear function class. For any optimal solution of problem (4), by replacing the hard constraints and with the soft constraints and , and for any , we have the following results with probability at least :

where is the empirical covariance of the observations of the -th task. Letting and , with probability at least , we have

###### Remark 1.

The first term in Theorem 2 is the generalization bound related to the learning of the matrix , and the second term corresponds to . This result shows that the learning of the shared hyperplane is of order , so it can be learned better as the number of tasks grows. Consequently, our proposed multi-task joint learning method can perform better than single-task learning methods. Additionally, is encouraged to be larger by the constraint and the use of the feature mapping matrix . Thus, the generalization bound of our proposed method has a faster convergence rate than that of the method proposed in [11], which demonstrates the efficiency of our method.

The proof of Theorem 2 is given in Appendix A.

## V Experiments

In this section, we present various experimental results and analyses to demonstrate the effectiveness of our proposed multi-task learning method. The comparison with several state-of-the-art multi-task learning algorithms further supports the merits of our multi-task model and feature joint learning method (MTMF). We compare MTMF with two single-task learning methods, L2-norm regularized regression (L2-R) and L1-norm regularized regression (L1-R), as well as five state-of-the-art multi-task learning algorithms: trace norm regularized multi-task learning (TraceMT), low-rank regularized multi-task learning with sparse structure (LowRankMT) [38], convex multi-task feature learning (CMTL) [11], multi-task learning with a dirty model (MTDirty) [34], and group-sparse and low-rank regularized robust multi-task learning (SLMT) [33]. These five algorithms are representative multi-task learning methods whose performance has been shown to be promising on various datasets, so the comparison with them can sufficiently demonstrate the effectiveness of our proposed MTMF. The datasets used in our experiments are the School dataset (http://ttic.uchicago.edu/ argyriou/code/), the SARCOS dataset (http://www.gaussianprocess.org/gpml/data/), the Isolet dataset (https://archive.ics.uci.edu/ml/datasets/ISOLET), and the MNIST dataset (http://yann.lecun.com/exdb/mnist/).

Table I: nMSE and aMSE on the School dataset at different training ratios for L2-R, L1-R, TraceMT, LowRankMT, CMTL, SLMT, MTDirty, and MTMF.

Table II: nMSE and aMSE on the SARCOS dataset at different training sizes for L2-R, L1-R, TraceMT, LowRankMT, CMTL, SLMT, MTDirty, and MTMF.

### V-A School dataset

This dataset was collected by the Inner London Education Authority to evaluate the effectiveness of schools. It consists of 139 related tasks: predicting the examination scores of students from 139 secondary schools. The information about each student is encoded into a binary feature vector of 27 dimensions, and there are 15362 samples in total. Single-task learning methods, such as L1-R and L2-R, learn these 139 tasks independently using their own data, whereas all the multi-task learning methods aim to improve the performance on the 139 tasks by uncovering the relatedness between them. The experimental settings follow previous works to allow a fair comparison.

Different ratios (10%, 20%, 30%) of the samples are randomly selected for training, and the rest of the samples are split into validation and test sets. Considering that the randomness of the selection may cause large variations in the results, we repeat each selection 10 times. All parameters are selected via the validation set. For all methods, performance is evaluated by the average mean squared error (aMSE) and the normalized mean squared error (nMSE), which have been used in [32, 33]. The aMSE is calculated by dividing the mean squared error by the variance of the target vector, and the nMSE is calculated by dividing the mean squared error by the squared norm of the target vector.
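Following the definitions stated above (aMSE as MSE over the variance of the target vector, nMSE as MSE over its squared norm), the metrics can be computed as in this small sketch; the sample vectors are hypothetical.

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def amse(y_true, y_pred):
    # aMSE: mean squared error divided by the variance of the target vector
    return mse(y_true, y_pred) / np.var(y_true)

def nmse(y_true, y_pred):
    # nMSE: mean squared error divided by the squared norm of the target vector
    return mse(y_true, y_pred) / np.sum(y_true ** 2)

# hypothetical targets and predictions
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 2.5, 2.5, 3.5])
```

Both metrics rescale the raw MSE by a property of the target vector, so results are comparable across tasks with very different target magnitudes.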

Table I gives the performance of all methods on the School dataset. From the table, we can conclude that all multi-task learning methods uncover the relationships between tasks well and improve the performance compared to the single-task learning methods. Another observation is that our proposed method performs best under all training ratios. The improvement is especially obvious with a small amount of training samples, which indicates the success of our method in learning a new feature space and its strong ability to discover the latent relatedness between tasks.

To analyse the properties of the learned weight matrices and , we visualize them in Fig. 2 and Fig. 3. The results are obtained using of the training samples. Zero values are shown as black pixels in the figures. Most of the pixels in Figure 3 are black, which reveals the sparsity of the learned matrix . A small subset of the features is shared across tasks, corresponding to the 15 nonzero rows of matrix . From Figure 2, we observe that is also a sparse matrix. However, the features not used in matrix appear in matrix , which means that our proposed MTMF can better utilize the information in the features. If we only use matrix A, all the tasks are forced to share some of the features without utilizing the others, making the tasks appear more closely related than they really are. helps all the tasks utilize more information from the features that are not shared through the matrix . This is one of the reasons that our MTMF outperforms CMTL.

### V-B SARCOS dataset

This dataset is used to learn the inverse dynamics of a SARCOS anthropomorphic robot arm. The goal is to predict seven joint torques from the provided 48933 samples, each described by a feature vector of 21 dimensions. In this experiment, we thus have seven tasks, one for each joint torque. Different numbers of samples (50, 100, 150) are randomly selected as training data, and 5000 samples are selected for the validation set and the test set, respectively. The best parameters are selected on the validation set for all methods. Considering the randomness of the selection, we repeat all experiments 15 times and report the average performance.

The experimental results of the different methods are compared in Table II. Conclusions similar to those for the School dataset can be drawn: our proposed method consistently outperforms all other algorithms, and all multi-task learning methods perform better than the two single-task learning methods. This further demonstrates the merits of multi-task learning and the effectiveness of our method compared to other multi-task learning methods.

### V-C Isolet dataset

In this section, we conduct experiments on the Isolet dataset from the UCI repository. It consists of 7797 pronunciation samples of the 26 English letters from 150 speakers. The speakers are split into five groups, corresponding to five different tasks. The goal of each task is to predict the labels (1-26) of the letters according to the pronunciation. In the experiment, the labels of the English letters are treated as regression values, following the same setup as in [39]. Different ratios (15%, 20%, 25%) of the samples are randomly selected as training data, and the rest is split into validation and test sets. All experiments are repeated 10 times, and the best parameters are selected on the validation set. We first reduce the dimensionality of the data to 100 using PCA.

The performance is reported in Table III. Note that the two single-task learning methods, L2-R and L1-R, are not tested on the Isolet dataset because of their poor performance on the School and SARCOS datasets. Our proposed multi-task learning method clearly outperforms the other baselines on this dataset, which demonstrates the robustness of our method across various applications.

Table III: nMSE and aMSE on the Isolet dataset at different training ratios for TraceMT, LowRankMT, CMTL, SLMT, MTDirty, and MTMF.

### V-D MNIST dataset

We further study the effectiveness of our approach on a handwritten digit recognition dataset: MNIST. This dataset is composed of 60000 training examples and 10000 test examples. There are ten different handwritten digits, corresponding to ten different binary classification tasks. Multi-way classification is treated as a multi-task learning problem, where each task is the classification of one digit against all the other digits [31, 40]. We randomly select 500, 1000, and 1500 examples (50, 100, and 150 examples per digit) from the 60000 training samples as training sets and 1000 samples from the test examples to form the test sets. The dimensionality of the images is reduced to 64 using PCA. All experiments are repeated 20 times, and the mean average precision (mAP) is reported.
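The construction described above can be sketched as follows, with random data standing in for the actual MNIST images and a plain SVD-based projection standing in for the PCA step; the sizes match the setup, but the data and helper names are hypothetical.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                 # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # coordinates in the top-k basis

# synthetic stand-in for MNIST: 600 "images" of 784 pixels, labels 0-9
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 784))
labels = rng.integers(0, 10, size=600)

# reduce to 64 dimensions, as in the experimental setup
X64 = pca_reduce(X, 64)

# one binary one-vs-rest task per digit, all sharing the same inputs
tasks = [(X64, np.where(labels == digit, 1.0, -1.0)) for digit in range(10)]
```

All ten tasks share the same reduced inputs and differ only in their binary targets, which is what makes the one-vs-rest decomposition a natural multi-task problem.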

Table IV: mAP on the MNIST dataset for TraceMT, LowRankMT, CMTL, SLMT, MTDirty, and MTMF with training sizes of 50, 100, and 150 examples per digit.

The results on this dataset are shown in Table IV. We compare our MTMF method with five other multi-task regression learning methods. The results show that our proposed method outperforms the other five multi-task learning methods on the MNIST dataset.

### V-E Analysis on p-values

To further demonstrate that the proposed method is statistically significantly better than the next best method, we present the p-values between the two in Table V. The table includes six groups of experiments on the School dataset (nMSE: 10%, 20%, 30%; aMSE: 10%, 20%, 30%), the SARCOS dataset (nMSE: 50, 100, 150; aMSE: 50, 100, 150), and the Isolet dataset (nMSE: 15%, 20%, 25%; aMSE: 15%, 20%, 25%), and three groups of experiments on the MNIST dataset (AP: 50, 100, 150). We index the experiments for each dataset from 1 to 6. From Table V, our proposed method significantly outperforms the next best methods on the School, Isolet, and MNIST datasets, as the p-values are substantially smaller than . On the SARCOS dataset, our method does not perform significantly better than the next best method; however, it still performs better than all the other methods.
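A paired test over repeated runs, as reported in Table V, can be sketched as follows; the per-run error values are hypothetical, and SciPy's paired t-test is used as a stand-in for the exact test employed in the paper.

```python
import numpy as np
from scipy import stats

# hypothetical nMSE values from 10 repeated runs of two methods
# on the same random splits (paired observations)
rng = np.random.default_rng(0)
next_best = 0.80 + 0.02 * rng.normal(size=10)
mtmf      = 0.72 + 0.02 * rng.normal(size=10)   # consistently lower error

# paired t-test over the repeated runs
t_stat, p_value = stats.ttest_rel(mtmf, next_best)
significant = p_value < 0.05
```

Pairing by run matters: both methods see the same splits, so the test compares per-split differences rather than two independent samples.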

The main reason that our proposed method outperforms other multi-task learning methods is that it considers shared features and shared parameters simultaneously. Therefore, it performs better when the data exhibit both feature relatedness and model relatedness. Additionally, we can balance the importance of feature relatedness and model relatedness through the tradeoff parameters and , so our model can degenerate to sharing only feature representations or only model parameters. Consequently, the proposed model is more robust across various data.

Table V: p-values between the proposed method and the next best method on the School, SARCOS, Isolet, and MNIST datasets.

### V-F Sensitivity analysis on MTMF

In this section, we conduct experiments to analyze the sensitivity of our proposed MTMF method. We will mainly discuss how the regularization parameters and and the training size affect the performance of our MTMF method. All the experiments are conducted on the School dataset.

Analysis of the training ratio: In these experiments, we randomly select , and of the data as training sets and use the remaining data as test sets to study how the training size affects the performance of MTMF. The experiments are repeated 10 times, and the regularization parameters are selected through validation. The results are shown in Figure 4. The proposed method consistently outperforms the other methods as the training ratio increases. We also find that the performance of the multi-task learning algorithms improves more quickly when the amount of training samples is small and only slightly once the training ratio reaches a high level. This is consistent with the learning behavior of multi-task learning: the relatedness between tasks provides more information to each task especially when the amount of training data is small, resulting in a rapid increase in performance, whereas the contribution of information from other tasks decreases when a task itself has sufficient training samples, leading to a smaller improvement.

Analysis of the regularization parameters: We conduct experiments on the School dataset to analyze the sensitivity of the two regularization parameters. We randomly select of the data as the training set and use the remaining data as the test set. For the sensitivity analysis of the parameter , we fix and vary the value of as . For the parameter , we fix and vary the value of as . The results are shown in Figure 5 and Figure 6. In Figure 5, we can see that the best performance of MTMF is obtained by setting when is fixed. From Figure 6, we see that the best performance of MTMF is obtained by setting to a small value. Additionally, the performance of MTMF changes only slightly when the value of is in the range of . In general, MTMF performs well when the ratio reaches a relatively high value (approximately 1000). This means that only a few features will be shared across tasks and that the central hyperplane will play an important role.
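The validation-based parameter selection described above can be sketched with a plain ridge model standing in for MTMF; the grid, data, and split sizes are hypothetical.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# hypothetical data with a known linear signal
rng = np.random.default_rng(2)
w_true = rng.normal(size=8)
X = rng.normal(size=(120, 8))
y = X @ w_true + 0.1 * rng.normal(size=120)

X_tr, y_tr = X[:80], y[:80]          # training split
X_val, y_val = X[80:], y[80:]        # validation split

# sweep the regularization strength on a log grid, as in the sensitivity study
grid = [10.0 ** p for p in range(-3, 4)]
scores = {lam: np.mean((X_val @ ridge_fit(X_tr, y_tr, lam) - y_val) ** 2)
          for lam in grid}
best_lam = min(scores, key=scores.get)
```

A log-spaced grid is the usual choice here because the sensible range of a regularization weight typically spans several orders of magnitude.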

## VI Conclusions and future work

In this paper, we summarize the defects of traditional multi-task learning methods and propose a novel multi-task learning framework that jointly learns a shared latent feature representation and shared parameters. The proposed method is introduced in detail, and a new algorithm is proposed for optimizing the non-convex problem. Additionally, we theoretically demonstrate the merits of the proposed method compared to single-task learning and its strong ability to measure the relatedness between tasks. We conduct various experiments on four multi-task learning datasets, and the results demonstrate the effectiveness of the proposed method.

In the future, we will consider extending the multi-task model and feature joint learning method into a more general framework. In this paper, the learned feature mapping matrix is an orthogonal matrix; it may be more effective to replace it with a general matrix. Additionally, we assume that all tasks share a common parameter, which is not suitable for some real-world cases. Considering this, we will attempt to learn the relatedness between tasks automatically rather than making assumptions about it.

## Appendix A Proof of Theorem 2

Before providing the proof of Theorem 2, we introduce some tools that will be used. We first recall the concentration inequality [41], better known as Hoeffding's inequality.

###### Theorem 3.

Let be independent random variables with range for . Let . Then, for any , the following inequalities hold:

We then introduce the Rademacher complexity [42], which is suitable for deriving dimensionality-independent generalization bounds.

###### Definition 2.

Let be an independently distributed sample, and let be a function class on . Let be independent Rademacher variables, uniformly distributed in . The empirical Rademacher complexity is defined as

The Rademacher complexity is defined as

According to the symmetric distribution property of random variables, the following theorem [37] holds:

###### Theorem 4.

Let

Then,

Combining Theorem 4 and Hoeffding’s inequality, we have the following:

###### Theorem 5 ([37]).

Let be an -valued function class on , and . For any , with a probability of at least , we have

or

The following property of Rademacher complexity [42] will help to construct the upper bound.

###### Lemma 1.

If is Lipschitz with constant and satisfies , then

###### Lemma 2.

Let

where are Rademacher variables indexed by and . We have

where is the empirical covariance for the observations of the -th task.

*Proof.* We have
