A No-Free-Lunch Theorem for MultiTask Learning

06/29/2020
by   Steve Hanneke, et al.

Multitask learning and related areas such as multi-source domain adaptation address modern settings where datasets from N related distributions {P_t} are to be combined toward improving performance on any single such distribution D. A perplexing fact remains in the evolving theory on the subject: while we would hope for performance bounds that account for the contribution of multiple tasks, the vast majority of analyses result in bounds that improve at best with the number n of samples per task, but most often do not improve with N. As such, it might seem at first that the distributional settings or aggregation procedures considered in such analyses are somehow unfavorable; however, as we show, the picture is more nuanced, with interestingly hard regimes that might otherwise appear favorable. In particular, we consider a seemingly favorable classification scenario where all tasks P_t share a common optimal classifier h^*, and which can be shown to admit a broad range of regimes with improved oracle rates in terms of N and n. Some of our main results are as follows:

∙ We show that, even though such regimes admit minimax rates accounting for both n and N, no adaptive algorithm exists; that is, without access to distributional information, no algorithm can guarantee rates that improve with large N for fixed n.

∙ With a bit of additional information, namely a ranking of the tasks {P_t} according to their distance to a target D, a simple rank-based procedure can achieve near-optimal aggregations of the tasks' datasets, despite a search space exponential in N. Interestingly, the optimal aggregation might exclude certain tasks, even though they all share the same h^*.
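
To make the second point concrete, here is a minimal Python sketch of a rank-based prefix selection in the spirit described above: assuming the ranking of tasks by distance to the target is given, only the N prefixes of the ranking are considered as candidate aggregations (rather than all 2^N subsets of tasks), and a small labeled target sample is used to choose among them. The names (prefix_aggregate, make_task), the generic logistic-regression learner, and the validation-based selection are illustrative assumptions, not the paper's actual procedure or analysis.

```python
# Illustrative sketch only (hypothetical names; not the paper's procedure):
# rank-based prefix aggregation for multitask classification. Given tasks
# ranked by (assumed known) distance to the target, only the N prefixes of
# the ranking are candidate aggregations, so selection is over N pools
# rather than all 2^N subsets of tasks.

import numpy as np
from sklearn.linear_model import LogisticRegression

def prefix_aggregate(ranked_tasks, X_val, y_val):
    """ranked_tasks: list of (X_t, y_t) arrays, ordered from closest to
    farthest from the target distribution (the ranking is assumed given).
    X_val, y_val: a small labeled target sample used to choose the prefix.
    Returns the classifier trained on the best-scoring prefix."""
    best_clf, best_err = None, np.inf
    X_pool, y_pool = [], []
    for X_t, y_t in ranked_tasks:                  # grow the pool one task at a time
        X_pool.append(X_t)
        y_pool.append(y_t)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(np.vstack(X_pool), np.concatenate(y_pool))
        err = np.mean(clf.predict(X_val) != y_val)  # target validation error
        if err < best_err:                          # keep the best prefix so far
            best_clf, best_err = clf, err
    return best_clf, best_err

# Toy usage: three tasks share the same optimal classifier h*(x) = 1{x1 + x2 > 0}
# but have increasingly shifted marginals; the farthest task may end up excluded.
rng = np.random.default_rng(0)

def make_task(shift, n=200):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)        # same decision rule for every task
    return X, y

tasks = [make_task(s) for s in (0.0, 0.5, 3.0)]    # already ranked by shift
X_val, y_val = make_task(0.0, n=50)                # small target sample
clf, err = prefix_aggregate(tasks, X_val, y_val)
print("target validation error of selected prefix:", err)
```

Selecting by target validation error is just one simple way to pick a prefix; the point of the sketch is that once a ranking is available, the candidate set grows linearly in N rather than exponentially, and the selected prefix may legitimately drop the most distant tasks.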


Related research

Bayesian Multitask Learning with Latent Hierarchies (08/09/2014)
We learn multiple hypotheses for related tasks under a latent hierarchic...

Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond (04/18/2022)
The vast majority of existing algorithms for unsupervised domain adaptat...

Multiple Source Domain Adaptation with Adversarial Training of Neural Networks (05/26/2017)
While domain adaptation has been actively researched in recent years, mo...

Provable Benefit of Multitask Representation Learning in Reinforcement Learning (06/13/2022)
As representation learning becomes a powerful technique to reduce sample...

Spectral Algorithm for Low-rank Multitask Regression (10/27/2019)
Multitask learning, i.e. taking advantage of the relatedness of individu...

Learning Functions to Study the Benefit of Multitask Learning (06/09/2020)
We study and quantify the generalization patterns of multitask learning ...

Hyper-sparse optimal aggregation (12/08/2009)
In this paper, we consider the problem of "hyper-sparse aggregation". Na...
