A Simple General Approach to Balance Task Difficulty in Multi-Task Learning

02/12/2020
by Sicong Liang, et al.

In multi-task learning, the difficulty levels of different tasks vary. Many works handle this situation, and we classify them into five categories: the direct sum approach, the weighted sum approach, the maximum approach, the curriculum learning approach, and the multi-objective optimization approach. These approaches have their own limitations, such as relying on manually designed rules to update task weights, non-smooth objective functions, and an inability to incorporate functions other than training losses. In this paper, to alleviate those limitations, we propose a Balanced Multi-Task Learning (BMTL) framework. Unlike existing studies that rely on task weighting, the BMTL framework transforms the training loss of each task to balance difficulty levels among tasks, based on the intuitive idea that tasks with larger training losses should receive more attention during the optimization procedure. We analyze the transformation function and derive necessary conditions. The proposed BMTL framework is very simple and can be combined with most multi-task learning models. Empirical studies demonstrate the state-of-the-art performance of the proposed BMTL framework.
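The core idea, transforming each task's loss with a monotonically increasing, convex function so that harder tasks implicitly receive larger gradient weights, can be sketched as follows. This is a minimal illustration under assumptions: the exponential transform with a `temperature` parameter is a common choice satisfying the stated conditions, not necessarily the paper's exact function.

```python
import math

def transform(loss, temperature=1.0):
    # Illustrative transform (assumption): monotonically increasing
    # and convex, so its derivative grows with the loss value.
    return math.exp(loss / temperature)

def balanced_loss(task_losses, temperature=1.0):
    # Sum of transformed per-task losses replaces the plain direct sum.
    return sum(transform(l, temperature) for l in task_losses)

def implicit_task_weights(task_losses, temperature=1.0):
    # Gradient of the transformed objective w.r.t. each raw loss:
    # d/dl exp(l/T) = exp(l/T) / T, so a task with a larger training
    # loss contributes a proportionally larger effective weight.
    return [math.exp(l / temperature) / temperature for l in task_losses]

losses = [0.2, 1.5]          # hypothetical per-task training losses
weights = implicit_task_weights(losses)
# weights[1] > weights[0]: the harder task gets more attention
```

Because the weighting falls out of the transform's derivative, no manually designed weight-update rule is needed, in contrast to the weighted sum approach.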

Related research

- Multi-Loss Weighting with Coefficient of Variations (09/03/2020): Many interesting tasks in machine learning and computer vision are learn...
- A Closer Look at Loss Weighting in Multi-Task Learning (11/20/2021): Multi-Task Learning (MTL) has achieved great success in various fields, ...
- Multi-Task Learning as Multi-Objective Optimization (10/10/2018): In multi-task learning, multiple tasks are solved jointly, sharing induc...
- SLAW: Scaled Loss Approximate Weighting for Efficient Multi-Task Learning (09/16/2021): Multi-task learning (MTL) is a subfield of machine learning with importa...
- A Survey on Multi-Task Learning (07/25/2017): Multi-Task Learning (MTL) is a learning paradigm in machine learning and...
- Optimization Strategies in Multi-Task Learning: Averaged or Separated Losses? (09/21/2021): In Multi-Task Learning (MTL), it is a common practice to train multi-tas...
- Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective (08/27/2023): Linear scalarization, i.e., combining all loss functions by a weighted s...
