Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task

01/07/2022
by Quan Feng, et al.

Multi-task learning (MTL) improves model performance by transferring and exploiting knowledge common among tasks. Existing MTL work mainly focuses on the scenario where the label sets of multiple tasks (MTs) are the same, so that labels can be used directly for learning across tasks. Few works, however, explore the scenario where each task has only a small number of training samples and the label sets are only partially overlapped, or not overlapped at all. Learning such MTs is more challenging because less correlation information is available among the tasks. To address this, we propose a framework that learns these tasks jointly by leveraging both the abundant information from a learnt auxiliary big task, whose class set is large enough to cover those of all the tasks, and the information shared among the partially-overlapped tasks. In our implementation, each individual task uses the same neural network architecture as the learnt auxiliary task; the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task, accompanied by joint learning across the individual tasks. Our experimental results demonstrate the effectiveness of this approach in comparison with state-of-the-art methods.
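The per-task pruning idea can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: we assume a single shared hidden layer from the auxiliary network, and a per-task binary mask that zeroes out pruned neurons. The mask-selection heuristic (keeping the neurons whose outgoing weights to the task's own classes are largest) is a hypothetical stand-in for the paper's adaptive pruning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Auxiliary "big task" network: one hidden layer with H neurons,
# covering C_big classes (a superset of every small task's label set).
H, D, C_big = 16, 8, 10
W1 = rng.normal(size=(D, H))       # input -> hidden weights (learnt on the big task)
W2 = rng.normal(size=(H, C_big))   # hidden -> big-task output weights

def hidden(x):
    """Shared hidden representation from the auxiliary network (ReLU)."""
    return np.maximum(0.0, x @ W1)

def prune_mask(task_classes, keep=8):
    """Hypothetical pruning rule: keep the hidden neurons whose outgoing
    weights to the task's classes have the largest total magnitude."""
    score = np.abs(W2[:, task_classes]).sum(axis=1)  # relevance per neuron
    keep_idx = np.argsort(score)[-keep:]
    mask = np.zeros(H)
    mask[keep_idx] = 1.0
    return mask

# Two small tasks with partially overlapping label sets.
task_a = [0, 1, 2]
task_b = [2, 3]
mask_a, mask_b = prune_mask(task_a), prune_mask(task_b)

x = rng.normal(size=(1, D))
h = hidden(x)
h_a = h * mask_a   # task A's pruned sub-network
h_b = h * mask_b   # task B's sub-network; neurons where both masks are 1 are shared

shared = int((mask_a * mask_b).sum())
print(f"neurons shared by the two tasks: {shared}")
```

Because both sub-networks are masked views of the same auxiliary hidden layer, gradients from either task would update the shared neurons, which is what enables the joint learning across partially-overlapped tasks described above.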

Related research

07/02/2020 · A Brief Review of Deep Multi-task Learning and Auxiliary Task Learning
Multi-task learning (MTL) optimizes several learning tasks simultaneousl...

01/29/2021 · Learning Twofold Heterogeneous Multi-Task by Sharing Similar Convolution Kernel Pairs
Heterogeneous multi-task learning (HMTL) is an important topic in multi-...

03/14/2023 · Relational Multi-Task Learning: Modeling Relations between Data and Tasks
A key assumption in multi-task learning is that at the inference time th...

10/13/2022 · Composite Learning for Robust and Effective Dense Predictions
Multi-task learning promises better model generalization on a target tas...

11/29/2021 · Learning Multiple Dense Prediction Tasks from Partially Annotated Data
Despite the recent advances in multi-task learning of dense prediction p...

12/03/2014 · Curriculum Learning of Multiple Tasks
Sharing information between multiple tasks enables algorithms to achieve...

03/28/2023 · Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models
As general purpose vision models get increasingly effective at a wide se...
