Task Uncertainty Loss Reduces Negative Transfer in Asymmetric Multi-task Feature Learning

12/17/2020
by Rafael Peres da Silva, et al.

Multi-task learning (MTL) is frequently used in settings where a target task must be learned from limited training data, but knowledge can be leveraged from related auxiliary tasks. While MTL can improve overall task performance relative to single-task learning (STL), these improvements can hide negative transfer (NT), where STL delivers better performance on many individual tasks. Asymmetric multi-task feature learning (AMTFL) attempts to address this by letting tasks with higher loss values exert less influence on the feature representations learned for other tasks. However, task loss values do not necessarily indicate how reliable a model is for a specific task. We present examples of NT in two orthogonal datasets (image recognition and pharmacogenomics) and tackle this challenge by using aleatoric homoscedastic uncertainty to capture the relative confidence between tasks and to set the weights of the task losses. Our results show that this approach reduces NT, providing a new route to robust MTL.
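The abstract does not spell out the weighting scheme, but a common way to weight task losses by learned homoscedastic (task-level aleatoric) uncertainty is the log-variance parameterisation of Kendall et al. (2018): total loss = sum over tasks of exp(-s_t) * L_t + s_t, with s_t = log(sigma_t^2). The sketch below is a minimal illustration under that assumption, not the paper's exact formulation; the class name UncertaintyWeightedLoss and its interface are hypothetical.

    import torch
    import torch.nn as nn

    class UncertaintyWeightedLoss(nn.Module):
        """Weight each task loss L_t by a learned homoscedastic uncertainty:
        total = sum_t exp(-s_t) * L_t + s_t, with s_t = log(sigma_t^2).
        Tasks the model is uncertain about are down-weighted, which is the
        mechanism the abstract uses to reduce negative transfer."""

        def __init__(self, num_tasks: int):
            super().__init__()
            # One learnable log-variance per task, initialised to 0 (sigma = 1).
            self.log_vars = nn.Parameter(torch.zeros(num_tasks))

        def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
            # task_losses: shape (num_tasks,), one scalar loss per task.
            precision = torch.exp(-self.log_vars)  # 1 / sigma_t^2
            # The +log_vars term keeps the model from inflating every
            # variance to zero out all the losses.
            return (precision * task_losses + self.log_vars).sum()

    # Hypothetical usage with two task losses (e.g. image recognition
    # and a pharmacogenomics task), jointly optimised with the model:
    # criterion = UncertaintyWeightedLoss(num_tasks=2)
    # total = criterion(torch.stack([loss_img, loss_pgx]))
    # total.backward()

Because the log-variances are learned jointly with the network, an unreliable task's weight shrinks automatically rather than being set by its raw loss value, which is the distinction the abstract draws against loss-based asymmetry in AMTFL.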

