When is multitask learning effective? Semantic sequence prediction under varying data conditions

12/07/2016
by   Héctor Martínez Alonso, et al.

Multitask learning (MTL) has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known about when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in an MTL setup. We examine different auxiliary tasks, among them a novel setup, and correlate their impact with data-dependent conditions. Our results show that MTL is not always effective: significant improvements are obtained for only one out of five tasks. When MTL is successful, auxiliary tasks with compact and more uniform label distributions are preferable.
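The finding that auxiliary tasks with compact and more uniform label distributions work best can be checked for any candidate auxiliary task by measuring its label-set size and the normalized entropy of its tag distribution. A minimal sketch (the function name and the toy label sequences are illustrative, not taken from the paper):

```python
from collections import Counter
import math

def label_distribution_stats(labels):
    """Return (label-set size, normalized entropy) for a tag sequence.

    Normalized entropy is close to 1.0 for a near-uniform distribution
    and approaches 0.0 for a highly skewed one.
    """
    counts = Counter(labels)
    total = len(labels)
    k = len(counts)
    if k <= 1:
        return k, 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return k, entropy / math.log(k)  # divide by max entropy log(k)

# A compact, near-uniform auxiliary label set vs. a skewed one
uniform_tags = ["A", "B", "C", "A", "B", "C"]
skewed_tags = ["O"] * 9 + ["B-X"]
print(label_distribution_stats(uniform_tags))  # normalized entropy near 1.0
print(label_distribution_stats(skewed_tags))   # normalized entropy well below 1.0
```

Under the paper's correlation, an auxiliary task whose tags look like `uniform_tags` (few labels, evenly distributed) would be a better candidate than one resembling `skewed_tags`.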


Related research

04/24/2017
Semi-supervised Multitask Learning for Sequence Labeling
We propose a sequence labeling framework with a secondary training objec...

07/17/2018
Hierarchical Multitask Learning for CTC-based Speech Recognition
Previous work has shown that neural encoder-decoder speech recognition c...

03/24/2021
Active Multitask Learning with Committees
The cost of annotating training data has traditionally been a bottleneck...

05/01/2018
Multitask Parsing Across Semantic Representations
The ability to consolidate information of different types is at the core...

08/09/2018
The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data
Electronic phenotyping, which is the task of ascertaining whether an ind...

05/23/2017
Consistent Multitask Learning with Nonlinear Output Relations
Key to multitask learning is exploiting relationships between different ...

08/25/2021
Auxiliary Task Update Decomposition: The Good, The Bad and The Neutral
While deep learning has been very beneficial in data-rich settings, task...
