Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding

08/19/2022
by   Zhaoye Fei, et al.
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, models inevitably need to understand the relevance among them. However, many methods ignore this relevance and directly adopt a single-channel model (a coarse paradigm) for all tasks, which lacks rationality and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine paradigm), which can produce irrational results due to redundancy and noise. In this work, we first analyze task correlation from three perspectives, i.e., data properties, manual design, and model-based relevance, and group similar tasks together accordingly. We then propose a hierarchical framework with a coarse-to-fine paradigm: the bottom level is shared across all tasks, the mid level is divided among task groups, and the top level is assigned to each individual task. This allows our model to learn basic language properties from all tasks, boost performance on relevant tasks, and reduce the negative impact of irrelevant tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
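To make the three-level sharing scheme concrete, here is a minimal PyTorch sketch of a coarse-to-fine model as described in the abstract: a bottom encoder shared by all tasks, a mid-level encoder per task group, and a lightweight head per task. The module sizes, layer counts, task-to-group mapping, and two-class heads are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CoarseToFineModel(nn.Module):
    """Hypothetical sketch of a coarse-to-fine hierarchical multi-task model."""

    def __init__(self, vocab_size, hidden, num_groups, num_tasks, task_to_group):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True
        )
        # Bottom level (coarse): one encoder shared by every task,
        # learning basic language properties from all corpora.
        self.shared = nn.TransformerEncoder(make_layer(), num_layers=4)
        # Mid level: one encoder per task group, where groups come from
        # the task-correlation analysis (data properties, manual design,
        # model-based relevance).
        self.group_encoders = nn.ModuleList(
            nn.TransformerEncoder(make_layer(), num_layers=2)
            for _ in range(num_groups)
        )
        # Top level (fine): one small head per task. Two output classes
        # here is a placeholder; in practice each task has its own label set.
        self.task_heads = nn.ModuleList(
            nn.Linear(hidden, 2) for _ in range(num_tasks)
        )
        self.task_to_group = task_to_group  # e.g. {0: 0, 1: 0, 2: 1}

    def forward(self, input_ids, task_id):
        h = self.embed(input_ids)                      # (batch, seq, hidden)
        h = self.shared(h)                             # shared across all tasks
        h = self.group_encoders[self.task_to_group[task_id]](h)  # group-specific
        return self.task_heads[task_id](h[:, 0])       # task-specific prediction
```

At training time, each batch would be routed through the shared encoder, then only through its task's group encoder and head, so gradients from irrelevant tasks never touch another group's mid-level parameters.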

