Learning Functions to Study the Benefit of Multitask Learning

06/09/2020
by Gabriele Bettgenhäuser, et al.

We study and quantify the generalization patterns of multitask learning (MTL) models for sequence labeling tasks. MTL models are trained to optimize a set of related tasks jointly. Although multitask learning has achieved improved performance on some problems, there are also tasks that lose performance when trained jointly. These mixed results motivate us to study the factors that impact the performance of MTL models. We note that theoretical bounds and convergence rates for MTL models exist, but they rely on strong assumptions such as task relatedness and the use of balanced datasets. To remedy these limitations, we propose the creation of a task simulator and the use of Symbolic Regression to learn expressions relating model performance to possible factors of influence. For MTL, we study model performance against the number of tasks (T), the number of samples per task (n), and the task relatedness measured by the adjusted mutual information (AMI). In our experiments, we empirically found formulas relating model performance to factors of sqrt(n) and sqrt(T), consistent with the mathematical bounds proved by Maurer [2016], and we went further by discovering that performance also relates to a factor of sqrt(AMI).
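The kind of expression discovery described above can be sketched with a toy least-squares fit over a candidate feature library containing sqrt(n) and sqrt(T) terms. This is a simplified stand-in for a full Symbolic Regression search, and the run data below is synthetic and purely illustrative, not taken from the paper:

```python
import numpy as np

# Synthetic MTL runs (illustrative, not from the paper):
# columns = samples per task n, number of tasks T, model score.
# Scores were generated to follow 0.3 + 0.005*sqrt(n) + 0.05*sqrt(T).
runs = np.array([
    [100.0,  4.0, 0.45],
    [400.0,  4.0, 0.50],
    [900.0,  4.0, 0.55],
    [400.0,  9.0, 0.55],
    [900.0,  9.0, 0.60],
    [1600.0, 9.0, 0.65],
])
n, T, score = runs[:, 0], runs[:, 1], runs[:, 2]

# Candidate feature library, as a symbolic-regression search might
# propose: an intercept plus sqrt(n) and sqrt(T) terms.
X = np.column_stack([np.ones_like(n), np.sqrt(n), np.sqrt(T)])

# Least-squares fit of the score against the candidate features.
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((score - pred) ** 2) / np.sum((score - score.mean()) ** 2)
print(np.round(coef, 4), round(r2, 3))
```

A real Symbolic Regression system would additionally search over the set of candidate transformations (log, powers, products) rather than fixing sqrt terms up front, but the fitting step at its core looks like this.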


