Learning Boost by Exploiting the Auxiliary Task in Multi-task Domain

08/05/2020
by Jonghwa Yim, et al.

Learning two tasks with a single shared function has several benefits. First, by acquiring information from the second task, the shared function can exploit useful information that would otherwise be neglected or underestimated in the first task. Second, it helps to generalize the function, since learning must rely on information that applies to both tasks. To realize these benefits, Multi-task Learning (MTL) has long been studied in domains such as computer vision, language understanding, and speech synthesis. While MTL benefits from the positive transfer of information between tasks, in real environments tasks inevitably conflict with each other during learning, a phenomenon called negative transfer. Negative transfer prevents the function from reaching optimality and degrades performance. To resolve task conflict, previous works offered only partial, ad-hoc solutions rather than fundamental ones. A common approach is a weighted sum of task losses, with the weights adjusted to induce positive transfer. Paradoxically, such a solution acknowledges the problem of negative transfer yet cannot remove it unless the weight of the conflicting task is set to zero. These methods therefore had limited success. In this paper, we introduce a novel approach that drives positive transfer and suppresses negative transfer by leveraging class-wise weights during learning. The weights act as an arbitrator over the fundamental unit of information, determining whether it is positive or negative for the main task.
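To make the contrast concrete, below is a minimal PyTorch sketch of the weighted-sum-of-losses baseline the abstract criticizes, alongside a class-wise weighting of the auxiliary loss in the spirit of the proposed idea. All module and function names, and the way the class weights are supplied, are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMTLNet(nn.Module):
    """Hypothetical shared encoder with a main head and an auxiliary head."""
    def __init__(self, in_dim, hidden_dim, main_classes, aux_classes):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.main_head = nn.Linear(hidden_dim, main_classes)
        self.aux_head = nn.Linear(hidden_dim, aux_classes)

    def forward(self, x):
        h = self.shared(x)
        return self.main_head(h), self.aux_head(h)

def weighted_sum_loss(main_logits, aux_logits, y_main, y_aux, aux_weight=0.5):
    """Common baseline: one scalar weight on the auxiliary loss.
    Negative transfer can only be attenuated, not removed, unless
    aux_weight is driven to zero."""
    return (F.cross_entropy(main_logits, y_main)
            + aux_weight * F.cross_entropy(aux_logits, y_aux))

def class_weighted_aux_loss(main_logits, aux_logits, y_main, y_aux, class_weights):
    """Sketch of class-wise weighting: each auxiliary class gets its own
    weight, up-weighting classes that transfer positively to the main task
    and down-weighting conflicting ones. `class_weights` is a tensor of
    shape [aux_classes]; how it is estimated or learned is not shown here."""
    per_sample_aux = F.cross_entropy(aux_logits, y_aux, reduction="none")
    sample_w = class_weights[y_aux]  # look up the weight of each sample's class
    return (F.cross_entropy(main_logits, y_main)
            + (sample_w * per_sample_aux).mean())
```

The only structural difference between the two losses is the granularity of the weight: a single scalar per task in the baseline versus one weight per auxiliary class in the class-wise variant, which is what allows conflicting information to be suppressed without discarding the auxiliary task entirely.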

