Learning Multiple Visual Tasks while Discovering their Structure

04/13/2015 · by Carlo Ciliberto, et al. · MIT

Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g. object detection, classification, tracking of multiple agents, or denoising, to name a few. The key idea is that exploiting task relatedness (structure) can lead to improved performance. In this paper, we propose and study a novel sparse, non-parametric approach exploiting the theory of Reproducing Kernel Hilbert Spaces for vector-valued functions. We develop a suitable regularization framework which can be formulated as a convex optimization problem, and is provably solvable using an alternating minimization approach. Empirical tests show that the proposed method compares favorably to state-of-the-art techniques and further allows the recovery of interpretable structures, a problem of interest in its own right.






1 Introduction

Several problems in computer vision and image processing, such as object detection/classification, image denoising, inpainting etc., require solving multiple learning tasks at the same time. In such settings a natural question is to ask whether it could be beneficial to solve all the tasks jointly, rather than separately. This idea is at the basis of the field of multi-task learning, where the joint solution of different problems has the potential to exploit tasks relatedness (structure) to improve learning. Indeed, when knowledge about task relatedness is available, it can be profitably incorporated in multi-task learning approaches for example by designing suitable embedding/coding schemes, kernels or regularizers, see [20, 10, 1, 11, 19].

The more interesting case, when knowledge about the tasks structure is not known a priori, has been the subject of recent studies. Largely influenced by the success of sparsity based methods, a common approach has been that of considering linear models for each task coupled with suitable parameterization/penalization enforcing task relatedness, for example encouraging the selection of features simultaneously important for all tasks [2] or for specific subgroups of related tasks [13, 14, 29, 15, 12, 16]. Other linear methods adopt hierarchical priors or greedy approaches to recover the taxonomy of tasks [22, 24]. A different line of research has been devoted to the development of non-linear/non-parametric approaches using kernel methods – either from a Gaussian process [1, 29] or a regularization perspective [1, 8].

This paper follows this last line of research, tackling in particular two issues only partially addressed in previous works. The first is the development of a regularization framework to learn and exploit the tasks structure, which is important not only for prediction, but also for interpretation. Towards this end, we propose and study a family of matrix-valued reproducing kernels, parametrized so as to enforce sparse relations among tasks. A novel algorithm, dubbed Sparse Kernel MTL, is then derived from a Tikhonov regularization approach. The second contribution is to provide a sound computational framework to solve the corresponding minimization problem. While we follow a fairly standard alternating minimization approach, unlike most previous work we can exploit results in convex optimization to prove the convergence of the considered procedure. The latter has an interesting interpretation where supervised and unsupervised learning steps are alternated: first, given a structure, multiple tasks are learned jointly, then the structure is updated. We support the proposed method with an experimental analysis on both synthetic and real data, including classification and detection datasets. The obtained results show that Sparse Kernel MTL can achieve state-of-the-art performance while unveiling the structure describing tasks relatedness.

The paper is organized as follows: in Sec. 2 we provide some background and notation in order to motivate and introduce the Sparse Kernel MTL model. In Sec. 3 we discuss an alternating minimization algorithm to provably solve the learning problem proposed. Finally, we discuss empirical evaluation in Sec. 4.

Notation. With $\mathcal{S}_{++}^T$, $\mathcal{S}_{+}^T$ and $\mathcal{S}^T$ we denote respectively the space of positive definite, positive semidefinite (PSD) and symmetric real-valued $T \times T$ matrices. $\mathcal{O}_T$ denotes the space of orthonormal $T \times T$ matrices. For any matrix $M$, $M^\top$ denotes the transpose of $M$. For any PSD matrix $A$, $A^\dagger$ denotes the pseudoinverse of $A$. We denote by $I_T$ the identity matrix. We use the abbreviation l.s.c. to denote lower semi-continuous functions (i.e. functions with closed sub-level sets) [6].

2 Model

We formulate the problem of solving multiple learning tasks as that of learning a vector-valued function whose output components correspond to individual predictors. We consider the framework originally introduced in [20] where the well-known concept of Reproducing Kernel Hilbert Space is extended to spaces of vector-valued functions. In this setting the set of tasks relations has a natural characterization in terms of a positive semidefinite matrix. By imposing a sparse prior on this object we are able to formulate our model, Sparse Kernel MTL, as a kernel learning problem designed to recover the most relevant relations among the tasks.

In the following we review basic definitions and results from the theory of Reproducing Kernel Hilbert Spaces that will allow us, in Sec. 2.2, to motivate and introduce our learning framework. In Sec. 2.2.2 we briefly draw connections between our method and previously proposed multi-task learning approaches.

2.1 Reproducing Kernel Hilbert Spaces for Vector-Valued Functions

We consider the problem of learning a function $f : \mathcal{X} \to \mathbb{R}^T$ from a set of empirical observations $(x_i, y_i)_{i=1}^n$ with $x_i \in \mathcal{X}$ and $y_i \in \mathbb{R}^T$. This setting includes learning problems such as vector-valued regression, multi-label/detection for $T$ tasks (with outputs in $\{-1, 1\}^T$) or also $T$-class classification (where we adopt the standard one-vs-all approach, mapping the $t$-th class label to the $t$-th element of the canonical basis in $\mathbb{R}^T$). Following the work of Micchelli and Pontil [20], we adopt a Tikhonov regularization approach in the setting of Reproducing Kernel Hilbert Spaces for vector-valued functions (RKHSvv). RKHSvv are the generalization of the well-known RKHS to the vector-valued setting and maintain most of the properties of their scalar counterpart. In particular, similarly to standard RKHS, RKHSvv are uniquely characterized by an operator-valued kernel:

Definition 2.1.

Let $\mathcal{X}$ be a set and $\mathcal{H}$ be a Hilbert space of functions from $\mathcal{X}$ to $\mathbb{R}^T$. A symmetric, positive definite, matrix-valued function $\Gamma : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{T \times T}$ is called a reproducing kernel for $\mathcal{H}$ if for all $x \in \mathcal{X}$, $c \in \mathbb{R}^T$ and $f \in \mathcal{H}$ we have that $\Gamma(x, \cdot)c \in \mathcal{H}$ and the following reproducing property holds: $\langle f(x), c \rangle = \langle f, \Gamma(x, \cdot)c \rangle_{\mathcal{H}}$.

Analogously to the scalar setting, a Representer theorem holds, stating that the solution to the regularized learning problem

$$\underset{f \in \mathcal{H}}{\text{minimize}} \ \sum_{i=1}^n \ell(y_i, f(x_i)) + \lambda\, \|f\|_{\mathcal{H}}^2 \qquad (1)$$

is of the form $f(\cdot) = \sum_{i=1}^n \Gamma(x_i, \cdot)\, c_i$ with $c_i \in \mathbb{R}^T$, $\Gamma$ the matrix-valued kernel associated to the RKHSvv $\mathcal{H}$ and $\ell$ a loss function (e.g. least squares, hinge, logistic, etc.) which we assume to be convex. We point out that the setting above can also account for the case where not all task outputs associated to a given input are available in training. Such a situation would arise for instance in multi-detection problems in which supervision (e.g. presence/absence of an object class in the image) is provided only for a few tasks at a time.

2.1.1 Separable Kernels

Depending on the choice of operator-valued kernel $\Gamma$, different structures can be enforced among the tasks; this effect can be observed by restricting ourselves to the family of separable kernels. Separable kernels are matrix-valued functions of the form $\Gamma(x, x') = k(x, x')\, A$, where $k$ is a scalar reproducing kernel and $A \in \mathcal{S}_+^T$ a positive semidefinite (PSD) matrix. Intuitively, the scalar kernel $k$ characterizes the individual task functions, while the matrix $A$ describes how they are related. Indeed, from the Representer theorem we have that solutions of problem (1) are of the form $f(\cdot) = \sum_{i=1}^n k(x_i, \cdot)\, A c_i$, with the $t$-th task being $f^t = \sum_{i=1}^n (A c_i)_t\, k(x_i, \cdot)$, a scalar function in the RKHS $\mathcal{H}_k$ associated to the kernel $k$. As shown in [10], in this case the squared norm associated to the separable kernel in the RKHSvv $\mathcal{H}$ can be written as

$$\|f\|_{\mathcal{H}}^2 = \sum_{t,s=1}^T A^\dagger_{ts}\, \langle f^t, f^s \rangle_{\mathcal{H}_k} \qquad (2)$$

with $A^\dagger_{ts}$ the $(t,s)$-th entry of $A$'s pseudo-inverse.

Eq. (2) shows how $A$ can model the structural relations among tasks by directly coupling predictors: for instance, by setting $A^\dagger = I_T + \gamma\,(I_T - \frac{1}{T}\mathbf{1}\mathbf{1}^\top)$, with $\mathbf{1} \in \mathbb{R}^T$ the vector of all $1$s, we have that the parameter $\gamma$ controls the variance $\sum_{t=1}^T \|f^t - \bar{f}\|_{\mathcal{H}_k}^2$ of the tasks with respect to their mean $\bar{f} = \frac{1}{T}\sum_{t=1}^T f^t$. If we have access to some notion of similarity among tasks in the form of a graph with adjacency matrix $W$, we can consider the regularizer $\sum_{t,s=1}^T W_{ts}\, \|f^t - f^s\|_{\mathcal{H}_k}^2$, which corresponds to setting $A^\dagger = L$, with $L$ the graph Laplacian induced by $W$. We refer the reader to [10] for more examples of possible choices for $A$ when the tasks structure is known.
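The choices above are easy to instantiate numerically. A minimal sketch (function names and the Gaussian kernel are ours) building the Gram matrix of a separable kernel and the structure matrix induced by a task-similarity graph:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Scalar Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def separable_gram(K, A):
    """Gram matrix of the separable kernel Gamma(x, x') = k(x, x') * A.

    K is the n x n scalar Gram matrix and A the T x T PSD structure matrix;
    the result is the nT x nT block matrix whose (i, j) block is K[i, j] * A."""
    return np.kron(K, A)

def laplacian_structure(W):
    """Structure matrix from a task-similarity graph: the pseudo-inverse of
    the graph Laplacian L = D - W, so that the RKHSvv norm penalizes
    sum_{t,s} W_ts ||f^t - f^s||^2."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.pinv(L)
```

Note the vectorization convention: `np.kron(K, A)` matches coefficient vectors stacked task-block by task-block, one block per training point.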

2.2 Sparse Kernel Multi Task Learning

When a-priori knowledge of the problem structure is not available, it is desirable to learn the tasks relations directly from the data. In light of the observations of Sec. 2.1.1, a viable approach is to parametrize the RKHSvv in problem (1) with the associated separable kernel $\Gamma = kA$ and to optimize jointly with respect to both $f$ and $A$. In the following we show how this problem corresponds to identifying a set of latent tasks and combining them in order to form the individual predictors. By enforcing a sparsity prior on the set of such possible combinations, we then propose the Sparse Kernel MTL model, which is designed to recover only the most relevant tasks relations. In Sec. 2.2.2 we discuss, from a modeling perspective, how our framework is related to the current multi-task learning literature.

2.2.1 Recovering the Most Relevant Relations

From the Representer theorem introduced in Sec. 2.1 we know that a candidate solution to problem (1) can be parametrized in terms of the maps $k(x_i, \cdot)$, by a structure matrix $A$ and a set of coefficient vectors $c_i \in \mathbb{R}^T$ such that $f(\cdot) = \sum_{i=1}^n k(x_i, \cdot)\, A c_i$. If we now consider the $t$-th component of $f$ (i.e. the predictor of the $t$-th task), we have that

$$f^t = \sum_{s=1}^T A_{ts}\, g_s \qquad (3)$$

where we set $g_s = \sum_{i=1}^n c_i^s\, k(x_i, \cdot)$ for $s = 1, \dots, T$, with $c_i^s$ the $s$-th component of $c_i$. Eq. (3) provides further understanding on how $A$ can enforce/describe the tasks relations: the $g_s$ can be interpreted as elements in a dictionary and each predictor $f^t$ factorizes as their linear combination. Therefore, any two predictors $f^t$ and $f^{t'}$ are implicitly coupled by the subset of common $g_s$.

We consider the setting where the tasks structure is unknown and we aim to recover it from the available data in the form of a structure matrix $A \in \mathcal{S}_+^T$. Following a denoising/feature selection argument, our approach consists in imposing a sparsity penalty on the set of possible tasks structures, requiring each predictor $f^t$ to be described by a small subset of the $g_s$. Indeed, by requiring most of $A$'s entries to be equal to zero, we implicitly enforce the system to recover only the most relevant tasks relations. The benefits of this approach are two-fold: on the one hand, it is less sensitive to spurious, statistically non-significant task correlations that could for instance arise when few training examples are available. On the other hand, it provides us with interpretable tasks structures, which is a problem of interest in its own right and relevant, for example, in cognitive science [17].

Following the de-facto standard choice of $\ell_1$-norm regularization to impose sparsity in convex settings, the Sparse Kernel MTL problem can be formulated as

$$\underset{A \in \mathcal{S}_+^T,\ f \in \mathcal{H}_A}{\text{minimize}} \ \sum_{i=1}^n \ell(y_i, f(x_i)) + \lambda\, \|f\|_{\mathcal{H}_A}^2 + \mu\, \|A\|_1 + \tau\, \mathrm{tr}(A) + \delta\, \mathrm{tr}(A^\dagger) \qquad (4)$$

where $\mathcal{H}_A$ is the RKHSvv associated to the separable kernel $\Gamma = kA$, $\ell$ is a loss function and $\lambda$, $\mu$, $\tau$ and $\delta$ are regularization parameters. Here $\mu$ regulates the amount of desired entry-wise sparsity of $A$ with respect to the low-rank prior $\mathrm{tr}(A)$ (indeed, notice that for $\mu = 0$ we recover the low-rank inducing framework of [2, 28]). This prior was empirically observed (see [2, 28]) to indeed encourage information transfer across tasks; the sparsity term can therefore be interpreted as enforcing such transfer to occur only between tasks that are strongly correlated. Finally, the term $\delta\, \mathrm{tr}(A^\dagger)$ ensures the existence of a unique solution (making the problem strictly convex), and can be interpreted as a preconditioning of the problem (see Sec. 3.2).

Notice that the term $\|f\|_{\mathcal{H}_A}^2$ depends on both $f$ and $A$ (see Eq. (2)), thus making problem (4) non-separable in the two variables. However, it can be shown that the objective functional is jointly convex in $f$ and $A$ (we refer the reader to the Appendix for a proof of convexity, which extends results in [2] to our setting). This will allow us, in Sec. 3, to derive an optimization strategy that is guaranteed to converge to a global solution.

2.2.2 Previous Work on Learning the Relations among Tasks

Several methods designed to recover the tasks relations from the data can be formulated, using our notation, as joint learning problems in $f$ and $A$. Depending on the expected/desired tasks structure, a set of constraints can be imposed on $A$ when solving a joint problem as in (4):

  • Multi-task Relation Learning [28]. In [28], a trace constraint on $A$ (a convex relaxation of a low-rank constraint) is imposed, enforcing the tasks to span a low-dimensional subspace. This method can be shown to be approximately equivalent to [2].

  • Output Kernel Learning [8]. Rather than imposing a hard constraint, the authors of [8] penalize the structure matrix $A$ with the squared Frobenius norm $\|A\|_F^2$.

  • Cluster Multi-task Learning [13]. Assuming the tasks to be organized into distinct clusters, in [13] a learning scheme to recover such structure is proposed, which consists of imposing a suitable set of spectral constraints on $A$. We refer the reader to the supplementary material for further details.

  • Learning Graph Relations [3]. Following the interpretation in [10], reviewed in Sec. 2.1.1, of imposing similarity relations among tasks in the form of a graph, in [3] the authors propose a setting where a (relaxed) graph Laplacian constraint is imposed on $A$.

3 Optimization

Due to the clear block variable structure of Eq. (4) with respect to $f$ and $A$, we propose an alternating minimization approach (see Alg. 1) to iteratively solve the Sparse Kernel MTL problem by keeping one variable fixed at a time. This choice is motivated by the fact that, for a fixed $A$, problem (4) reduces to the standard multi-task learning problem (1), for which several well-established optimization strategies have already been considered [1, 20, 10, 21]. The alternating minimization procedure can be interpreted as iterating between steps of supervised learning (finding the predictor $f$ that best fits the input-output training observations) and unsupervised learning (finding the matrix $A$ best describing the tasks structure, which does not involve the output data).

3.1 Solving w.r.t. $f$ (Supervised Step)

Let $A \in \mathcal{S}_+^T$ be a fixed structure matrix. From the Representer theorem (see Sec. 2.1) we know that the solution of problem (1) is of the form $f(\cdot) = \sum_{i=1}^n k(x_i, \cdot)\, A c_i$ with $c_i \in \mathbb{R}^T$. Depending on the specific loss $\ell$, different methods can be employed to find such coefficients. In particular, for the least-squares loss a closed-form solution can be derived by taking the concatenated coefficient vector $\bar{c} = (c_1, \dots, c_n)$ to be [1]:

$$\bar{c} = (\mathbf{K} \otimes A + \lambda I_{nT})^{-1}\, \bar{y} \qquad (5)$$

where $\mathbf{K}$ is the empirical kernel matrix associated to the scalar kernel, $\bar{y}$ is the vector concatenating the training outputs and $\otimes$ denotes the Kronecker product. A faster and more compact solution was proposed in [21] by adopting Sylvester's method.
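The closed-form least-squares step can be sketched as follows (the exact scaling of the regularization parameter and the row-major vectorization order are assumptions on our part):

```python
import numpy as np

def mtl_rls_fit(K, Y, A, lam=0.1):
    """Closed-form coefficients for vector-valued least squares with a
    separable kernel Gamma = k * A.

    K : (n, n) scalar kernel matrix, Y : (n, T) outputs, A : (T, T) PSD.
    Returns C : (n, T) with rows c_i, so that f(x) = sum_i k(x_i, x) A c_i.
    Solves (K kron A + lam * I) vec(C) = vec(Y), i.e. K C A + lam C = Y."""
    n, T = Y.shape
    lhs = np.kron(K, A) + lam * np.eye(n * T)
    c = np.linalg.solve(lhs, Y.reshape(-1))  # row-major vec: index (i, t)
    return c.reshape(n, T)

def mtl_rls_predict(K_test, C, A):
    """Predict on new points; K_test is the (m, n) cross-kernel matrix."""
    return K_test @ C @ A
```

For large $n$, the Kronecker system should of course be replaced by the Sylvester-equation solver of [21]; the direct solve above is only meant to make the formula concrete.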

Input: empirical kernel matrix $\mathbf{K}$, training outputs $\bar{y}$, tolerance $\epsilon$, loss $\ell$, hyperparameters $\lambda, \mu, \tau, \delta$, objective functional $F$ of problem (4).
Initialize: $A \leftarrow I_T$ and $\bar{c} \leftarrow 0$
Repeat until the decrease of $F$ falls below $\epsilon$: (supervised step) update $\bar{c}$ with $A$ fixed (Sec. 3.1); (unsupervised step) update $A$ with $\bar{c}$ fixed (Sec. 3.2).
Algorithm 1 Alternating Minimization

3.2 Solving w.r.t. the Tasks Structure (Unsupervised Step)

Let $f$ be known in terms of its coefficients $c_i \in \mathbb{R}^T$. Our goal is to find the structure matrix $A$ that minimizes problem (4). Notice that each task can be written as $f^t = \sum_{s=1}^T A_{ts}\, g_s$ with $g_s = \sum_{i=1}^n c_i^s\, k(x_i, \cdot)$. Therefore, from Eq. (2) we have

$$\langle f^t, f^s \rangle_{\mathcal{H}_k} = \sum_{q,r=1}^T A_{tq} A_{sr}\, \langle g_q, g_r \rangle_{\mathcal{H}_k} = \sum_{q,r=1}^T A_{tq} A_{sr} \sum_{i,j=1}^n c_i^q\, c_j^r\, k(x_i, x_j) \qquad (6)$$

where we have used the reproducing property of $k$ for the last equality. Eq. (6) allows us to write the norm induced by the separable kernel in the more compact matrix notation $\|f\|_{\mathcal{H}_A}^2 = \mathrm{tr}(A^\dagger M)$, where $M$ is the matrix with $(t,s)$-th element $M_{ts} = \langle f^t, f^s \rangle_{\mathcal{H}_k}$.

Under this new notation, problem (4) with $f$ fixed becomes

$$\underset{A \in \mathcal{S}_+^T}{\text{minimize}} \ \ \mathrm{tr}\big(A^\dagger (\lambda M + \delta I_T)\big) + \mu\, \|A\|_1 + \tau\, \mathrm{tr}(A) \qquad (7)$$

from which we can clearly see the effect of $\delta$ as a preconditioning term for the tasks covariance matrix $M$.
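The quantities entering the unsupervised step can be assembled directly from the kernel matrix and the coefficients. A sketch in Python, under the assumption (our reading of the description above) that the subproblem in the structure matrix combines a preconditioned data term with the sparsity and trace penalties:

```python
import numpy as np

def tasks_covariance(K, C, A):
    """M_ts = <f^t, f^s> for f^t = sum_s A_ts g_s, g_s = sum_i C[i, s] k(x_i, .).
    By the reproducing property of k this reduces to M = A (C^T K C) A."""
    G = C.T @ K @ C          # Gram matrix of the latent tasks g_s
    return A @ G @ A

def structure_objective(A, M, lam=1.0, delta=1e-3, mu=0.1, tau=0.1):
    """Objective of the structure subproblem (a sketch; the exact combination
    of penalties is an assumption based on the description in the text):
    tr(A^+ (lam*M + delta*I)) + mu*||A||_1 + tau*tr(A)."""
    T = A.shape[0]
    A_pinv = np.linalg.pinv(A)
    return (np.trace(A_pinv @ (lam * M + delta * np.eye(T)))
            + mu * np.abs(A).sum() + tau * np.trace(A))
```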

By employing recent results from the non-smooth convex optimization literature, in the following we will describe an algorithm to optimize the Sparse Kernel MTL problem.

3.2.1 Primal-dual Splitting Algorithm

First-order proximal splitting algorithms have been successfully applied to solve convex composite optimization problems that can be written as the sum of a smooth component with nonsmooth ones [4]. They proceed by splitting, i.e. by activating each term appearing in the sum individually. The iteration usually consists of a gradient descent-like step determined by the smooth component, and various proximal steps induced by the nonsmooth terms [4]. In the following we describe one such method, derived in [26, 7], to solve the Sparse Kernel MTL problem in Eq. (7). The proposed method is primal-dual, in the sense that it also provides an additional dual sequence solving the associated dual optimization problem. We rely on the sum structure of the objective function, which can be written as $F(A) = f(A) + g(A) + h(B(A))$, with $f(A) = \tau\, \mathrm{tr}(A)$, $g(A) = \mu\, \|A\|_1$ and $h(X) = \mathrm{tr}(X^\dagger) + \iota_{\mathcal{S}_+^T}(X)$, where $\iota_{\mathcal{S}_+^T}$ is the indicator function of $\mathcal{S}_+^T$ ($0$ on the set, $+\infty$ outside) and enforces the hard constraint $A \in \mathcal{S}_+^T$. $B$ is a linear operator defined as $B(A) = R^{-1} A\, R^{-1}$, where we have set $R = (\lambda M + \delta I_T)^{1/2}$. We recall here that a square root of a PSD matrix $Q$ is a PSD matrix $R$ such that $R^2 = Q$. Note that $f$ is smooth with Lipschitz continuous gradient, $B$ is a linear operator and both $g$ and $h$ are functions for which the proximal operator can be computed in closed form. We recall that the proximity operator at a point $Z$ of a proper, convex and l.s.c. function $\varphi$ is defined as

$$\mathrm{prox}_\varphi(Z) = \underset{X}{\mathrm{argmin}}\ \varphi(X) + \frac{1}{2}\, \|X - Z\|_F^2. \qquad (8)$$

It is well known that for any $\mu > 0$, the proximal map of the norm $\mu\,\|\cdot\|_1$ is the so-called soft-thresholding operator $(S_\mu(Z))_{ts} = \mathrm{sign}(Z_{ts})\, \max(|Z_{ts}| - \mu, 0)$, which can be computed in closed form. The following result provides an explicit closed-form solution also for the proximal map of $h$.
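For completeness, the entry-wise soft-thresholding operator is a one-liner:

```python
import numpy as np

def soft_threshold(Z, mu):
    """Proximal map of mu * ||.||_1: entry-wise shrinkage toward zero."""
    return np.sign(Z) * np.maximum(np.abs(Z) - mu, 0.0)
```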

Proposition 3.1.

Let $Z \in \mathcal{S}^T$ with eigendecomposition $Z = U \Lambda U^\top$, with $U \in \mathcal{O}_T$ orthonormal and $\Lambda$ diagonal. Then, for any $\sigma > 0$,

$$\mathrm{prox}_{\sigma h}(Z) = \underset{X \in \mathcal{S}_+^T}{\mathrm{argmin}}\ \sigma\, \mathrm{tr}(X^\dagger) + \frac{1}{2}\, \|X - Z\|_F^2 \qquad (9)$$

can be computed in closed form as $U \Sigma U^\top$, with $\Sigma$ the diagonal matrix whose entry $\Sigma_{tt}$ is the only positive root of the polynomial $p(s) = s^3 - \Lambda_{tt}\, s^2 - \sigma$, with $t = 1, \dots, T$.


Note that $h$ is convex and l.s.c. Therefore the proximity operator is well-defined and the functional in (9) has a unique minimizer. Its gradient (on the interior of $\mathcal{S}_+^T$) is $X - Z - \sigma X^{-2}$; therefore, the first-order condition for a matrix $X$ to be a minimizer is

$$X - Z - \sigma X^{-2} = 0. \qquad (10)$$

We show that it is possible to find a diagonal $\Sigma$ such that $X = U \Sigma U^\top$ solves Eq. (10). Indeed, for $X = U \Sigma U^\top$ with the same set of eigenvectors as $Z$, Eq. (10) becomes $\Sigma - \Lambda - \sigma \Sigma^{-2} = 0$, which is equivalent to the set of scalar equations $\Sigma_{tt}^3 - \Lambda_{tt}\, \Sigma_{tt}^2 - \sigma = 0$ for $t = 1, \dots, T$. Descartes' rule of signs [23] assures that for any $\Lambda_{tt}$ each of these polynomials has exactly one positive root, which can clearly be computed in closed form. ∎
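Following the eigenvalue argument in the proof, the proximal step reduces to $T$ independent scalar cubics. A hedged sketch, under the assumption (our reading of the text) that the nonsmooth term handled by this prox is $\sigma\,\mathrm{tr}(X^{-1})$ restricted to the PSD cone:

```python
import numpy as np

def prox_trace_inverse(Z, sigma):
    """Prox of sigma * tr(X^{-1}) over the PSD cone (a sketch of the
    eigenvalue argument above; the exact penalty is an assumption).

    Diagonalize Z = U diag(lmbda) U^T and solve, for each eigenvalue,
    the cubic s^3 - lmbda_t * s^2 - sigma = 0, which by Descartes' rule
    of signs has exactly one positive root."""
    lmbda, U = np.linalg.eigh(Z)
    s = np.empty_like(lmbda)
    for t, lt in enumerate(lmbda):
        roots = np.roots([1.0, -lt, 0.0, -sigma])
        # keep the unique real positive root
        pos = [r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0]
        s[t] = pos[0]
    return U @ np.diag(s) @ U.T
```

The returned matrix is positive definite by construction; the fixed-point condition $X - Z - \sigma X^{-2} = 0$ can be checked numerically on the output.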

Input: matrices $M$ and $R = (\lambda M + \delta I_T)^{1/2}$, tolerance $\epsilon$, hyperparameters $\mu, \tau$.
Initialize: $A_0 \leftarrow I_T$, dual variable $V_0 \leftarrow 0$, step sizes chosen according to the squared maximum eigenvalue of the linear operator $B$.
Repeat: perform the primal-dual iteration of [26, 7], combining a gradient step on the smooth term with the proximal steps given by soft-thresholding and by Prop. 3.1,
until $\|A_{k+1} - A_k\| \leq \epsilon$ and $\|V_{k+1} - V_k\| \leq \epsilon$.
Algorithm 2 Sparse Kernel MTL

We have the following result as an immediate consequence.

Theorem 3.2 (Convergence of Sparse Kernel MTL, [26, 7]).

Let $k$ be a scalar kernel over a space $\mathcal{X}$, $\{x_1, \dots, x_n\} \subset \mathcal{X}$ a set of points and $f$ a function characterized by a set of coefficients $c_1, \dots, c_n \in \mathbb{R}^T$, so that $f(\cdot) = \sum_{i=1}^n k(x_i, \cdot)\, A c_i$. Set $\mathbf{K}$ to be the empirical kernel matrix associated to $k$ and the points $x_i$, and $C \in \mathbb{R}^{n \times T}$ the matrix whose $i$-th row corresponds to the (transposed) coefficient vector $c_i$.

Then, any sequence of matrices $(A_j)_{j \in \mathbb{N}}$ produced by Algorithm 2 converges to a global minimizer of the Sparse Kernel MTL problem (4) (or, equivalently, of (7)) for fixed $f$. Furthermore, the associated dual sequence $(V_j)_{j \in \mathbb{N}}$ converges to a solution of the dual problem of (7).

3.3 Convergence of Alternating Minimization

We additionally exploit the sum structure and the regularity properties of the objective functional in (4) to prove convergence of the alternating minimization scheme to a global minimum. We rely on the results in [25]. In particular, the following result is a direct application of Theorem 4.1 in that paper.

Theorem 3.3.

Under the same assumptions as in Theorem 3.2, the sequence $(f_j, A_j)_{j \in \mathbb{N}}$ generated by Algorithm 1 is a minimizing sequence for Problem (4) and converges to its unique solution.


Let $F$ denote the objective function in (4). First note that the level sets of $F$ are compact due to the presence of the regularization terms, and that $F$ is continuous on each level set. Moreover, since $F$ is regular at each point in the interior of its domain and is convex, [25, Theorem 4.1(c)] implies that each cluster point of the sequence $(f_j, A_j)_{j \in \mathbb{N}}$ is the unique minimizer of $F$. Then, the sequence itself is convergent and is minimizing by continuity. ∎

3.3.1 A Note on Computational Complexity & Times

Regarding the computational cost and number of iterations required for the convergence of the whole Alg. 1, to our knowledge the only available results on rates for alternating minimization are in [5]. Unfortunately, these results hold only for smooth settings. Notice, however, that each iteration of Alg. 2 costs on the order of $O(T^3)$ (the eigendecomposition of $A$ being the most expensive operation) and its convergence rate is $O(1/j)$, with $j$ equal to the number of iterations. Hence, Alg. 2 is not affected by the number $n$ of training samples. On the contrary, the supervised step in Alg. 1 (e.g. RLS or SVM) typically requires the inversion of the kernel matrix (or some approximation of its inverse), whose complexity heavily depends on $n$ (order of $O(n^3)$ for inversion). Furthermore, the product $C^\top \mathbf{K} C$ costs $O(n^2 T)$ which, since typically $T \leq n$, is more expensive than an iteration of Alg. 2. Thus, with respect to $n$, SKMTL scales exactly as methods such as [2, 8, 28].

4 Empirical Analysis

We report the empirical evaluation of SKMTL on both artificially generated and real datasets, assessing the capability of the proposed Sparse Kernel MTL method to recover the most relevant relations among tasks and to exploit such knowledge to improve prediction performance.

Figure 1: Generalization performance (nMSE and standard deviation) of different multi-task methods with respect to the sparsity of the task structure matrix.

4.1 Synthetic Data

We considered an artificial setting that allows us to control the tasks structure and, in particular, the actual sparsity of the tasks-relation matrix. We generated synthetic datasets of input-output pairs according to linear models of the form $y = A^\top W^\top x + \epsilon$, where $W$ is a matrix with orthonormal columns, $A$ is the task structure matrix and $\epsilon$ is zero-mean Gaussian noise. The inputs $x$ were sampled according to a Gaussian distribution with zero mean and identity covariance matrix; the input space dimension was kept fixed across experiments.

In order to quantitatively control the sparsity level of the tasks-relation matrix, we randomly generated $A$ so that the ratio between its support (i.e. the number of non-zero entries) and the total number of entries would vary from highly sparse to dense (no sparsity). Gaussian noise with zero mean and variance proportional to the mean value of the non-zero entries in $A$ was then added to corrupt the structure matrix entries (hence, the model was never "really" sparse). This was done to reproduce a more realistic scenario.
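The generative process just described can be sketched as follows (the exact model form, constants and the name `make_sparse_mtl_data` are our assumptions):

```python
import numpy as np

def make_sparse_mtl_data(n, d, T, sparsity, noise=0.1, seed=0):
    """Synthetic multi-task regression data in the spirit of Sec. 4.1:
    y = A^T W^T x + eps, with W a d x T matrix with orthonormal columns
    and A a symmetric T x T structure matrix whose fraction of non-zero
    entries is roughly `sparsity`."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, T)))  # orthonormal columns (d >= T)
    A = rng.standard_normal((T, T))
    A = (A + A.T) / 2                                  # symmetric structure
    mask = rng.random((T, T)) < sparsity
    mask = np.triu(mask) | np.triu(mask).T             # keep the mask symmetric
    np.fill_diagonal(mask, True)                       # each task relates to itself
    A = A * mask
    X = rng.standard_normal((n, d))                    # zero mean, identity covariance
    Y = X @ W @ A + noise * rng.standard_normal((n, T))
    return X, Y, A
```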

We generated multiple models and corresponding datasets for different sparsity ratios and numbers of tasks $T$. For each dataset we generated disjoint sets of samples for training and for testing. We performed multi-task regression using the following methods: single-task learning (STL) as a baseline, Multi-task Relation Learning [28] (MTRL), Output Kernel Learning [8] (OKL), our Sparse Kernel MTL (SKMTL) and a fixed task-structure multi-task regression algorithm solving problem (1) using the ground-truth (GT) matrix $A$ (after noise corruption) for regularization. We chose the least-squares loss and performed model selection with five-fold cross-validation.

Figure 2: Structure matrix $A$: true (left) and recovered by Sparse Kernel MTL (right). We report the absolute value of the entries of the two matrices. The range of values goes from 0 (blue) to 1 (red).

In Figure 1 we report the normalized mean squared error (nMSE) of the tested methods with respect to decreasing sparsity ratios. It can be noticed that knowledge of the true structure (GT) is particularly beneficial when the tasks share few relations. This advantage tends to decrease as the tasks structure becomes less sparse. Interestingly, both the MTRL and OKL methods fail to provide any advantage with respect to the STL baseline, since we did not design $A$ to be low-rank (or to have a fast eigenvalue decay). On the contrary, the SKMTL method provides a remarkable improvement over the STL baseline.

We point out that the large error bars in the plot are due to the high variability of the nMSE with respect to the different (random) linear models and numbers of tasks $T$. The actual improvement of SKMTL over the other methods is however significant.

The results above suggest that, as desired, our SKMTL method is actually recovering the most relevant relations among tasks. In support of this statement, we report in Figure 2 an example of the true (uncorrupted) and recovered structure matrix for one of the sparse settings. As can be noticed, while the actual values of the entries of the two matrices are not exactly the same, their supports almost coincide, showing that SKMTL was able to recover the correct tasks structure.

4.2 15-Scenes

We tested SKMTL in a multi-class classification scenario for visual scene categorization on the 15-scenes dataset (http://www-cvr.ai.uiuc.edu/ponce_grp/data/). The dataset contains images depicting natural or urban scenes, organized in 15 distinct groups, and the goal is to assign each image to the correct scene category. It is natural to expect that categories will share similar visual features. Our aim was to investigate whether these relations would be recovered by the SKMTL method and prove beneficial to the actual classification process.

We represented the images in the dataset with LLC coding [27], trained multi-class classifiers on an increasing number of examples per class, and tested them on a held-out set of samples per class. We repeated these classification experiments several times to account for statistical variability.

In Table 1 we report the classification accuracy of the multi-class learning methods tested: STL (baseline), Multi-task Feature Learning (MTFL) [2], MTRL, OKL and our SKMTL. For all methods we used a linear kernel and the least-squares loss as a plug-in classifier. Model selection was performed by five-fold cross-validation.

Table 1: Classification accuracy (%) per number of training samples per class on the 15-scenes dataset. Four multi-task methods (MTFL [2], MTRL [28], OKL [8], SKMTL) and the single-task baseline (STL) are compared.

As can be noticed, SKMTL consistently outperforms all other methods. A possible explanation for this behavior, similarly to the synthetic scenario, is that the algorithm is actually recovering the most relevant relations among tasks and using this information to improve prediction. In support of this interpretation, in Figure 3 we report the relations recovered by SKMTL in graph form. An edge between two scene categories was drawn whenever the corresponding entry of the recovered structure matrix was different from zero. Noticeably, SKMTL seems to identify a clear group separation between natural and urban scenes. Furthermore, even within these two main clusters not all tasks are connected: for instance, office scenes are not related to scenes depicting the exterior of buildings, and mountain scenes are not connected to images featuring mostly flat scenes such as highways or coastal regions.
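A graph such as the one in Figure 3 can be extracted mechanically from the recovered structure matrix; a minimal sketch (the task labels and the tolerance are illustrative):

```python
import numpy as np

def tasks_graph(A, labels, tol=1e-6):
    """Edge list of the tasks-structure graph: an edge joins tasks t and s
    whenever |A[t, s]| is above a small tolerance."""
    T = A.shape[0]
    return [(labels[t], labels[s])
            for t in range(T) for s in range(t + 1, T)
            if abs(A[t, s]) > tol]
```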

Figure 3: Tasks structure graph recovered by the Sparse Kernel MTL (SKMTL) proposed in this work on the 15-scenes dataset.

4.3 Animals with Attributes

Animals with Attributes (AwA, http://attributes.kyb.tuebingen.mpg.de/) is a dataset designed to benchmark detection algorithms in computer vision. The dataset comprises 50 different animal classes, each annotated with 85 binary labels denoting the presence/absence of different attributes. These attributes can be of a different nature, such as color (white, black, etc.), texture (stripes, dots), type of limbs (hands, flippers, etc.), diet and so on. The standard challenge is to perform attribute detection by training the system on a predefined set of 40 animal classes and testing on the remaining 10. In the following we first discuss the performance of multi-task approaches in this setting, and then investigate how the benefits of multi-task approaches can sometimes be dulled by the so-called "negative transfer", and how our Sparse Kernel MTL method seems to be less sensitive to such an issue. For the experiments described in the following we used the DECAF features [9] recently made available on the Animals with Attributes website.

4.3.1 Attribute Detection

We considered the multi-task problem of attribute detection, which consists in 85 binary classification tasks. For each attribute, we randomly sampled 50, 100 and 150 examples per class for training, together with separate sets for validation and test. Results were averaged over multiple trials. In Table 2 we report the average precision (area under the precision/recall curve) of the multi-task classifiers tested. As can be noticed, for all multi-task approaches the effect of sharing information across classifiers has a remarkable impact when few training examples are available (the 50- and 100-sample columns in Table 2). As expected, such benefit decreases as the role of regularization becomes less crucial (150 samples).

Table 2: Attribute detection results (average precision, %, per number of training samples per class: 50, 100, 150) on the Animals with Attributes dataset.

4.3.2 Attribute Prediction - Color Vs Limb Shape

Multi-task learning approaches are grounded on the assumption that tasks are strongly related to one another and that such structure can be exploited to improve overall prediction. When this assumption does not hold, or holds only partially (e.g. only some tasks share a common structure), such methods can even prove disadvantageous ("negative transfer" [22]).

The AwA dataset offers the possibility to observe this effect, since attributes are organized into multiple semantic groups [18, 14]. We focused on a smaller setting by selecting only two groups of tasks, namely color and limb shape, and tested the effect of training multi-task methods jointly or independently across the two groups. For all the experiments we randomly sampled, for each class, disjoint sets of examples for training, validation and test, averaging the system performance over multiple trials. Table 3 reports the average precision separately for the color and limb shape groups.

Interestingly, methods relying on the assumption that all tasks share a common structure, such as MTFL, MTRL or OKL, experience a slight drop in performance when trained on all attribute detection tasks together (right columns) rather than separately (left columns). On the contrary, SKMTL remains stable, since it correctly separates the two groups.

Table 3: Attribute detection (area under PR curve, %) on two subsets of AwA (color and limb shape attributes). The comparison between methods trained independently or jointly on the two sets shows the effects of negative transfer.

5 Conclusions

We proposed a learning framework designed to solve multiple related tasks while simultaneously recovering their structure. We considered the setting of Reproducing Kernel Hilbert Spaces for vector-valued functions [20] and formulated Sparse Kernel MTL as an output kernel learning problem where both a multi-task predictor and a matrix encoding the tasks relations are inferred from empirical data. We imposed a sparsity penalty on the set of possible relations among tasks in order to recover only those that are most relevant to the learning problem.

Adopting an alternating minimization strategy, we were able to devise an optimization algorithm that provably converges to the global solution of the proposed learning problem. Empirical evaluation on both synthetic and real datasets confirmed the validity of the proposed model, which successfully recovered interpretable structures while outperforming previous methods.

Future research directions will focus mainly on modeling aspects: it will be interesting to investigate the possibility of combining our framework, which identifies sparse relations among the tasks, with recent multi-task linear models that take a different perspective and enforce task relations in the form of structured sparsity penalties on the feature space [14, 29].


  • [1] M. Álvarez, N. Lawrence, and L. Rosasco. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. see also http://arxiv.org/abs/1106.6251.
  • [2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73, 2008.
  • [3] A. Argyriou et al. Learning the graph of relations among multiple tasks.
  • [4] H. H. Bauschke and P. L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, New York, 2011. With a foreword by Hédy Attouch.
  • [5] A. Beck and L. Tetruashvili. On the convergence of block coordinate descent type methods. Technion, Israel Institute of Technology, Haifa, Israel, Tech. Rep, 2011.
  • [6] S. P. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
  • [7] L. Condat. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl., 158(2):460–479, 2013.
  • [8] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent. International Conference on Machine Learning, 2011.
  • [9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013.
  • [10] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005.
  • [11] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories. European Conference on Computer Vision, 2010.
  • [12] S. J. Hwang, F. Sha, and K. Grauman. Sharing features between objects and their attributes. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1761–1768. IEEE, 2011.
  • [13] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: a convex formulation. Advances in Neural Information Processing Systems, 2008.
  • [14] D. Jayaraman, F. Sha, and K. Grauman. Decorrelating semantic visual attributes by resisting the urge to share. In CVPR, 2014.
  • [15] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multi-task feature learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 521–528, 2011.
  • [16] A. Kumar and H. Daume III. Learning task grouping and overlap in multi-task learning. arXiv preprint arXiv:1206.6417, 2012.
  • [17] B. M. Lake and J. B. Tenenbaum. Discovering structure by learning sparse graphs. Proceedings of the 32nd Cognitive Science Conference, 2010.
  • [18] C. Lampert. Semantic attributes for object categorization (slides). http://ist.ac.at/ chl/talks/lampert-vrml2011b.pdf, 2011.
  • [19] A. Lozano and V. Sindhwani. Block variable selection in multivariate regression and high-dimensional causal inference. Advances in Neural Information Processing Systems, 2011.
  • [20] C. A. Micchelli and M. Pontil. Kernels for multi-task learning. Advances in Neural Information Processing Systems, 2004.
  • [21] H. Q. Minh and V. Sindhwani. Vector-valued manifold regularization. International Conference on Machine Learning, 2011.
  • [22] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1481–1488. IEEE, 2011.
  • [23] D. J. Struik. A source book in mathematics 1200–1800. Princeton University Press, pages 89–93, 1986.
  • [24] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–762. IEEE, 2004.
  • [25] P. Tseng. Convergence of block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109:475–494, 2001.
  • [26] B. C. Vũ. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math., 38(3):667–681, 2013.
  • [27] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
  • [28] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), pages 733–742, Corvallis, Oregon, 2010. AUAI Press.
  • [29] W. Zhong and J. Kwok. Convex multitask learning with flexible task clusters. In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), ICML ’12, pages 49–56, New York, NY, USA, July 2012. Omnipress.

6 Appendix

6.1 On the (joint) convexity of Sparse Kernel MTL

As stated in the paper, it can be shown that the Sparse Kernel MTL problem introduced in Eq. (4) is jointly convex in the two optimization variables, namely the multi-task predictor and the structure matrix A. The proof of this fact in full generality requires the introduction of functional analysis tools that are beyond the scope of this work. However, according to equation (6) we have observed that it is possible to restrict the SKMTL problem to functions of the form f(·) = Σ_{i=1}^n k(x_i, ·) c_i with coefficients c_i ∈ R^T. The following result proves the joint convexity of Eq. (4) in this setting. It is an extension of similar results in [2, 28] and we give it here for completeness.

Proposition 6.1.

Let ℓ be a convex loss function. Then the functional in problem (4) – restricted to functions of the form f(·) = Σ_{i=1}^n k(x_i, ·) c_i with c_i ∈ R^T – is jointly convex in the coefficient matrix C = (c_1, …, c_n)^T and the structure matrix A.


Notice that the only term requiring some care is the component of the functional mixing C and A together, namely the norm ||f||_A^2 (where the dependency on A is implicit in the norm). Indeed, since the loss ℓ is chosen to be convex, the empirical risk term is clearly convex in C and does not depend on A, while all the remaining terms penalize only the structure matrix A and are clearly convex with respect to it.

According to Eq. (6), the norm ||f||_A^2 can be rewritten as tr(A^{-1} C^T K C), with K the empirical kernel matrix, K_{ij} = k(x_i, x_j), and C the n × T matrix whose rows correspond to the coefficients c_i. Let us now set c = vec(C), the vectorization of the matrix C obtained by concatenating its columns. Then, by standard properties of the Kronecker product, we have that

    Q(c, A) := tr(A^{-1} C^T K C) = c^T (A^{-1} ⊗ K) c = c^T (A ⊗ K^{-1})^{-1} c.

In order to show that the function Q is jointly convex in c and A we will show that its epigraph is a convex set. To see this notice that

    epi Q = { (c, A, t) : t ≥ Q(c, A) } = { (c, A, t) : [ A ⊗ K^{-1}, c ; c^T, t ] ⪰ 0 },

where the second equality is directly derived from a Schur complement argument. Consider now any couple of points (c_1, A_1, t_1), (c_2, A_2, t_2) ∈ epi Q and any λ ∈ [0, 1]. Since the block matrix above is linear in (c, A, t) and the cone of positive semidefinite matrices is convex, the convex combination

    λ (c_1, A_1, t_1) + (1 − λ) (c_2, A_2, t_2)

still belongs to epi Q, which implies that

    Q(λ c_1 + (1 − λ) c_2, λ A_1 + (1 − λ) A_2) ≤ λ t_1 + (1 − λ) t_2,

therefore proving that Q is jointly convex in c and A.
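The argument above can be spot-checked numerically. The following sketch (an illustration under the notation reconstructed here, not a proof) samples random positive definite K and A and random coefficient vectors, and verifies the convexity chord inequality for the mixed term:

```python
import numpy as np

def Q(c, A, K):
    """Mixed term Q(c, A) = c^T (A^{-1} kron K) c = tr(A^{-1} C^T K C)."""
    return float(c @ np.kron(np.linalg.inv(A), K) @ c)

def random_spd(d, rng):
    """Random symmetric positive definite matrix."""
    X = rng.standard_normal((d, d))
    return X @ X.T + d * np.eye(d)

rng = np.random.default_rng(1)
n, T = 4, 3
K = random_spd(n, rng)
A1, A2 = random_spd(T, rng), random_spd(T, rng)
c1, c2 = rng.standard_normal(n * T), rng.standard_normal(n * T)

# chord inequality: Q at any convex combination lies below the segment
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs = Q(lam * c1 + (1 - lam) * c2, lam * A1 + (1 - lam) * A2, K)
    rhs = lam * Q(c1, A1, K) + (1 - lam) * Q(c2, A2, K)
    assert lhs <= rhs + 1e-9
```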

6.2 Cluster Multi-task Learning

We briefly recall here the Convex Multi-task Cluster Learning approach proposed in [13] and show that it can be cast in the same framework as our Sparse Kernel MTL model. In particular we comment on what choice of constraint set can be imposed on the structure matrix A to recover clustered structures of tasks.

In the setting proposed by [13], the T tasks are assumed to belong to one of r unknown clusters, with r fixed a priori. While the original formulation is for the linear kernel, it can be easily extended to the non-linear setting of RKHSvv. Let E be the T × r binary matrix whose entry E_{sc} has value 1 whenever task s belongs to cluster c, and 0 otherwise, and let M = E(E^T E)^{-1} E^T be the associated orthogonal projection, with L = I − M the normalized Laplacian of the graph it defines. As we have observed in Eq. (6), the regularizer depends on the structure matrix A. The role of this term can be shaped to reflect the structure of the clusters encoded in the Laplacian L, hence in the matrix M. As noted in [13], A can be chosen so that

    A^{-1} = ε_M U + ε_B (M − U) + ε_W (I − M),    with U = (1/T) 1 1^T,

where the first term is a global penalty on the average predictor, the second term penalizes the between-cluster variance, and the third term penalizes the within-cluster variance. Since E belongs to a discrete set, the authors propose a relaxation by constraining M to lie in a convex set, which directly induces a set of spectral constraints for A.
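As a sketch of this construction (following [13]; the exact combination ε_M U + ε_B (M − U) + ε_W (I − M) with U = 1 1^T / T is our reconstruction, and the parameter names are illustrative), the snippet below builds the regularization matrix from hard cluster assignments and inspects its spectrum:

```python
import numpy as np

def cluster_regularizer(assignments, eps_m=0.1, eps_b=1.0, eps_w=5.0):
    """Clustered-MTL regularization matrix eps_m*U + eps_b*(M - U) + eps_w*(I - M),
    following Jacob et al. [13], built from hard cluster assignments (one
    cluster index per task). U = 11^T / T projects onto the mean predictor,
    M = E (E^T E)^{-1} E^T onto the span of cluster indicators."""
    T = len(assignments)
    E = np.zeros((T, max(assignments) + 1))
    E[np.arange(T), assignments] = 1.0       # binary task-to-cluster matrix
    M = E @ np.linalg.inv(E.T @ E) @ E.T     # projection onto cluster indicators
    U = np.full((T, T), 1.0 / T)             # projection onto the mean
    return eps_m * U + eps_b * (M - U) + eps_w * (np.eye(T) - M)
```

Since U, M − U and I − M are mutually orthogonal projections summing to the identity, the eigenvalues of the resulting matrix are exactly ε_M, ε_B and ε_W with multiplicities 1, r − 1 and T − r, which makes the spectral constraints induced by the relaxation of [13] easy to read off.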