Multi-Task Learning for Sequence Tagging: An Empirical Study

08/13/2018
by   Soravit Changpinyo, et al.

We study three general multi-task learning (MTL) approaches on 11 sequence tagging tasks. Our extensive empirical results show that in about 50% of the cases, jointly learning all 11 tasks improves upon either independent or pairwise learning of the tasks. We also show that pairwise MTL can reveal which tasks can benefit others and which tasks benefit from being learned jointly. In particular, we identify tasks that can always benefit others, as well as tasks that can always be harmed by others. Interestingly, one of our MTL approaches yields embeddings of the tasks that reveal a natural clustering into semantic and syntactic tasks. Our inquiries open the door to further use of MTL in NLP.
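A common starting point for jointly learning several tagging tasks is hard parameter sharing: a single shared encoder feeds one tagging head per task. The sketch below illustrates that idea in plain NumPy; the dimensions, task names, and tag-set sizes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hard-parameter-sharing sketch for multi-task sequence tagging:
# one shared encoder, one task-specific head per task.
# All sizes and task names below are hypothetical.
rng = np.random.default_rng(0)

EMB_DIM, HID_DIM = 8, 16
TASKS = {"pos": 12, "ner": 9, "chunk": 5}  # task -> tag-set size (assumed)

W_shared = rng.normal(size=(EMB_DIM, HID_DIM))                  # shared by all tasks
W_heads = {t: rng.normal(size=(HID_DIM, n)) for t, n in TASKS.items()}

def tag_scores(token_embeddings, task):
    """Per-token tag scores: shared encoding, then the task's own head."""
    h = np.tanh(token_embeddings @ W_shared)   # shared representation
    return h @ W_heads[task]                   # task-specific projection

sentence = rng.normal(size=(5, EMB_DIM))       # a 5-token sentence
print(tag_scores(sentence, "ner").shape)       # (5, 9)
```

Because the encoder parameters are updated by the losses of every task, related tasks can regularize each other, while unrelated tasks may interfere, which is exactly the benefit/harm trade-off the study measures.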

Related research

11/26/2018 · Multi-task Learning over Graph Structures
We present two architectures for multi-task learning with neural sequenc...

08/29/2018 · What can we learn from Semantic Tagging?
We investigate the effects of multi-task learning using the recently int...

12/03/2014 · Curriculum Learning of Multiple Tasks
Sharing information between multiple tasks enables algorithms to achieve...

01/07/2023 · "It's a Match!" – A Benchmark of Task Affinity Scores for Joint Learning
While the promises of Multi-Task Learning (MTL) are attractive, characte...

09/14/2021 · The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders
Multi-task learning with transformer encoders (MTL) has emerged as a pow...

09/11/2020 · Learning an Interpretable Graph Structure in Multi-Task Learning
We present a novel methodology to jointly perform multi-task learning an...

05/22/2018 · Infinite-Task Learning with Vector-Valued RKHSs
Machine learning has witnessed the tremendous success of solving tasks d...
