MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning

02/18/2023
by   Caoyun Fan, et al.

When modeling related tasks in computer vision, Multi-Task Learning (MTL) can outperform Single-Task Learning (STL) because it captures the intrinsic relatedness among tasks. However, MTL may suffer from the insufficient training problem: some tasks in MTL reach non-optimal states compared with their STL counterparts. A series of studies has shown that excessive gradient noise degrades performance in STL; in the MTL scenario, Inter-Task Gradient Noise (ITGN) is an additional source of gradient noise for each task and can likewise disturb optimization. In this paper, we identify ITGN as a key factor behind the insufficient training problem. We define the Gradient-to-Noise Ratio (GNR) to measure the relative magnitude of gradient noise and design the MaxGNR algorithm, which alleviates ITGN interference by maximizing the GNR of each task. We evaluate MaxGNR on two standard image MTL datasets, NYUv2 and Cityscapes, and show that it outperforms the baselines under identical experimental conditions.
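To make the Gradient-to-Noise Ratio concrete, here is a minimal sketch of one plausible way to estimate a GNR-like quantity for a single task from a set of mini-batch gradients: the mean gradient serves as the signal estimate and per-batch deviations from that mean serve as the noise estimate. The function name, the exact estimator, and the use of flat NumPy arrays are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def gradient_to_noise_ratio(minibatch_grads):
    """Estimate a GNR-like quantity for one task.

    minibatch_grads: list of flat numpy arrays, one gradient per
    mini-batch. The mean gradient approximates the true gradient
    (signal); deviations from the mean approximate the gradient
    noise. This is an illustrative estimator, not the paper's exact
    GNR definition.
    """
    grads = np.stack(minibatch_grads)          # (num_batches, dim)
    mean_grad = grads.mean(axis=0)             # signal estimate
    noise = grads - mean_grad                  # per-batch noise estimate
    signal_norm = np.linalg.norm(mean_grad)
    noise_norm = np.linalg.norm(noise, axis=1).mean()
    return signal_norm / (noise_norm + 1e-12)  # avoid division by zero

# Toy usage: the same underlying gradient with low vs. high noise.
rng = np.random.default_rng(0)
true_g = np.ones(4)
low_noise = [true_g + 0.01 * rng.standard_normal(4) for _ in range(8)]
high_noise = [true_g + 1.0 * rng.standard_normal(4) for _ in range(8)]
print(gradient_to_noise_ratio(low_noise) > gradient_to_noise_ratio(high_noise))
```

Under this estimator, a task whose gradients fluctuate more across mini-batches (e.g. due to interference from other tasks' gradients) gets a lower GNR, which is the quantity MaxGNR would seek to raise via dynamic task weights.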

