A Closer Look at Loss Weighting in Multi-Task Learning

11/20/2021
by Baijiong Lin, et al.

Multi-Task Learning (MTL) has achieved great success in various fields; however, how to balance different tasks to avoid negative transfer remains a key problem. To achieve task balancing, many existing works balance task losses or gradients. In this paper, we unify eight representative task-balancing methods from the perspective of loss weighting and provide a consistent experimental comparison. Moreover, we surprisingly find that training an MTL model with random weights sampled from a distribution can achieve performance comparable to state-of-the-art baselines. Based on this finding, we propose a simple yet effective weighting strategy called Random Loss Weighting (RLW), which can be implemented with only one additional line of code over existing works. Theoretically, we analyze the convergence of RLW and show that it has a higher probability of escaping local minima than models with fixed task weights, resulting in better generalization. Empirically, we evaluate RLW extensively on six image datasets and four multilingual tasks from the XTREME benchmark, demonstrating its effectiveness compared with state-of-the-art strategies.
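As the abstract notes, the weighting rule itself fits in one line. Below is a minimal PyTorch sketch of one training step under the RLW idea: task weights are redrawn from a distribution at every iteration (here a standard normal normalized by softmax, one of the distributions considered in the paper). The `rlw_step` function, the multi-head `model`, and `loss_fns` are hypothetical scaffolding for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def rlw_step(model, optimizer, inputs, targets_per_task, loss_fns):
    """One training step with Random Loss Weighting (illustrative sketch).

    Assumes `model(inputs)` returns one output per task and `loss_fns`
    holds one loss function per task; these names are hypothetical.
    """
    optimizer.zero_grad()

    # Compute the per-task losses as a single 1-D tensor.
    losses = torch.stack([
        loss_fn(output, target)
        for output, target, loss_fn in zip(model(inputs), targets_per_task, loss_fns)
    ])

    # RLW: the one extra line. Sample fresh weights from a standard normal
    # and normalize them with a softmax so they are positive and sum to 1.
    weights = F.softmax(torch.randn(len(losses), device=losses.device), dim=-1)

    # Optimize the randomly weighted sum of task losses.
    total_loss = (weights * losses).sum()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

Compared with fixed equal weights, the only change is sampling `weights` anew each step; the randomness is what the paper argues helps the optimizer escape sharp local minima.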


Related research

11/22/2022
Mitigating Negative Transfer in Multi-Task Learning with Exponential Moving Average Loss Weighting Strategies
Multi-Task Learning (MTL) is a growing subject of interest in deep learn...

03/16/2023
Efficient Diffusion Training via Min-SNR Weighting Strategy
Denoising diffusion models have been a mainstream approach for image gen...

02/12/2020
A Simple General Approach to Balance Task Difficulty in Multi-Task Learning
In multi-task learning, difficulty levels of different tasks are varying...

01/07/2020
Dynamic Task Weighting Methods for Multi-task Networks in Autonomous Driving Systems
Deep multi-task networks are of particular interest for autonomous drivi...

09/16/2021
SLAW: Scaled Loss Approximate Weighting for Efficient Multi-Task Learning
Multi-task learning (MTL) is a subfield of machine learning with importa...

08/26/2020
HydaLearn: Highly Dynamic Task Weighting for Multi-task Learning with Auxiliary Tasks
Multi-task learning (MTL) can improve performance on a task by sharing r...

09/21/2023
Multi-Task Cooperative Learning via Searching for Flat Minima
Multi-task learning (MTL) has shown great potential in medical image ana...
