A Brief Review of Deep Multi-task Learning and Auxiliary Task Learning

07/02/2020
by Partoo Vafaeikia et al.

Multi-task learning (MTL) optimizes several learning tasks simultaneously and leverages their shared information to improve generalization and prediction for each task. Auxiliary tasks can be trained alongside the main task to further boost its performance. In this paper, we provide a brief review of recent deep multi-task learning (dMTL) approaches, followed by methods for selecting useful auxiliary tasks that can be used in dMTL to improve the model's performance on the main task.
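To make the shared-representation idea concrete, below is a minimal sketch (not taken from the paper) of the common hard-parameter-sharing setup in PyTorch: a shared encoder feeds two task-specific heads, and the auxiliary loss is added to the main loss with an illustrative weighting coefficient aux_weight. All layer sizes, class counts, and the loss weight are hypothetical choices.

```python
import torch
import torch.nn as nn

# Hard parameter sharing: one shared encoder, one head per task.
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden_dim=64, main_classes=10, aux_classes=4):
        super().__init__()
        # Shared layers: updated by gradients from both tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads: each receives gradients from one task only.
        self.main_head = nn.Linear(hidden_dim, main_classes)
        self.aux_head = nn.Linear(hidden_dim, aux_classes)

    def forward(self, x):
        shared = self.encoder(x)
        return self.main_head(shared), self.aux_head(shared)

model = MultiTaskNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
aux_weight = 0.3  # hypothetical trade-off between main and auxiliary losses

# Dummy batch standing in for real data: inputs plus labels for both tasks.
x = torch.randn(16, 32)
y_main = torch.randint(0, 10, (16,))
y_aux = torch.randint(0, 4, (16,))

main_logits, aux_logits = model(x)
# The auxiliary task contributes only a weighted term to the total loss;
# at inference time only the main head would be used.
loss = criterion(main_logits, y_main) + aux_weight * criterion(aux_logits, y_aux)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a setup like this the auxiliary head can be discarded after training; its only role is to shape the shared representation in a way that helps the main task.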

Related research

09/13/2023 - Learning from Auxiliary Sources in Argumentative Revision Classification
We develop models to classify desirable reasoning revisions in argumenta...

08/16/2019 - Transductive Auxiliary Task Self-Training for Neural Multi-Task Models
Multi-task learning and self-training are two common ways to improve a m...

05/16/2023 - When is an SHM problem a Multi-Task-Learning problem?
Multi-task neural networks learn tasks simultaneously to improve individ...

06/10/2018 - All-in-one: Multi-task Learning for Rumour Verification
Automatic resolution of rumours is a challenging task that can be broken...

07/06/2018 - Multi-Task Learning with Incomplete Data for Healthcare
Multi-task learning is a type of transfer learning that trains multiple ...

01/07/2022 - Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task
Multi-task learning is to improve the performance of the model by transf...

04/08/2019 - AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning
Multi-task learning (MTL) has achieved success over a wide range of prob...
