The Benefit of Multitask Representation Learning

05/23/2015
by Andreas Maurer, et al.

We discuss a general method to learn data representations from multiple tasks. We provide a justification for this method in both settings of multitask learning and learning-to-learn. The method is illustrated in detail in the special case of linear feature learning. Conditions on the theoretical advantage offered by multitask representation learning over independent task learning are established. In particular, focusing on the important example of half-space learning, we derive the regime in which multitask representation learning is beneficial over independent task learning, as a function of the sample size, the number of tasks and the intrinsic data dimensionality. Other potential applications of our results include multitask feature learning in reproducing kernel Hilbert spaces and multilayer, deep networks.
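To make the linear feature learning setting concrete, the following is a minimal numerical sketch, not the algorithm or notation analyzed in the paper: T tasks share a low-dimensional linear representation x -> U^T x, each task fits its own weight vector on top of it, and the shared map and the task weights are estimated jointly by alternating least squares. All names (U, W, the dimensions T, n, d, K, the ridge parameter) and the alternating-minimization updates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n, d, K = 20, 30, 50, 5   # number of tasks, samples per task, input dim, shared dim

    # Synthetic tasks whose target functions share a K-dimensional subspace.
    U_true = np.linalg.qr(rng.standard_normal((d, K)))[0]
    X = rng.standard_normal((T, n, d))
    W_true = rng.standard_normal((T, K))
    Y = np.einsum('tnd,dk,tk->tn', X, U_true, W_true) + 0.1 * rng.standard_normal((T, n))

    # Alternating minimization: with U fixed, each task solves a small ridge
    # regression for its weight vector; with the weights fixed, U is updated by
    # one pooled least-squares step over all tasks.
    U = np.linalg.qr(rng.standard_normal((d, K)))[0]
    lam = 1e-2
    for _ in range(50):
        Z = X @ U                                   # shared features, shape (T, n, K)
        W = np.stack([np.linalg.solve(Z[t].T @ Z[t] + lam * np.eye(K), Z[t].T @ Y[t])
                      for t in range(T)])
        A = np.einsum('tnd,tk->tndk', X, W).reshape(T * n, d * K)   # design matrix for vec(U)
        U = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)[0].reshape(d, K)

    # Residual of projecting the learned columns onto the true shared subspace
    # (small values mean the shared representation was recovered).
    print(np.linalg.norm(U - U_true @ (U_true.T @ U)) / np.linalg.norm(U))

The point of the sketch is only the structure of the estimator: the representation U is trained on all T*n examples, while each task-specific predictor sees only its own n examples. This is the comparison with independent task learning to which the abstract's regime, in terms of sample size, number of tasks, and intrinsic data dimensionality, refers.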

Related research

06/15/2021
On the Power of Multitask Representation Learning in Linear MDP
While multitask representation learning has become a popular approach in...

04/03/2019
On Better Exploring and Exploiting Task Relationships in Multi-Task Learning: Joint Model and Feature Learning
Multitask learning (MTL) aims to learn multiple tasks simultaneously thr...

08/14/2020
MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities
In this paper, we introduce the MLM (Multiple Languages and Modalities) ...

03/02/2017
Self-Paced Multitask Learning with Shared Knowledge
This paper introduces self-paced task selection to multitask learning, w...

09/04/2012
Sparse coding for multitask and transfer learning
We investigate the use of sparse coding and dictionary learning in the c...

12/17/2020
Learning and Sharing: A Multitask Genetic Programming Approach to Image Feature Learning
Using evolutionary computation algorithms to solve multiple tasks with k...

02/08/2022
Generative multitask learning mitigates target-causing confounding
We propose a simple and scalable approach to causal representation learn...
