Rotograd: Dynamic Gradient Homogenization for Multi-Task Learning

03/03/2021
by Adrián Javaloy, et al.

While multi-task learning (MTL) has been successfully applied in several domains, it still poses challenges. As a consequence of negative transfer, simultaneously learning several tasks can lead to unexpectedly poor results. A key factor contributing to this undesirable behavior is the problem of conflicting gradients. In this paper, we propose Rotograd, a novel approach for MTL that homogenizes the gradient directions across all tasks by rotating their shared representation. Our algorithm is formalized as a Stackelberg game, which allows us to provide stability guarantees. Rotograd can be transparently combined with task-weighting approaches (e.g., GradNorm) to mitigate negative transfer, resulting in a robust learning process. A thorough empirical evaluation on several architectures (e.g., ResNet) and datasets (e.g., CIFAR) verifies our theoretical results and shows that Rotograd outperforms previous approaches. A PyTorch implementation can be found at https://github.com/adrianjav/rotograd .
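The geometric intuition behind the method can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which learns the rotations during training via the Stackelberg formulation); it only shows, with hypothetical 2-D gradients, how inserting a task-specific rotation between the shared encoder and a task head turns conflicting per-task gradient directions into aligned ones:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical per-task gradients w.r.t. the shared representation z.
g1 = np.array([1.0, 0.2])
g2 = np.array([-0.5, 1.0])   # points away from g1: conflicting gradients

def rotation_aligning(src, dst):
    """2-D rotation matrix R such that R @ src is parallel to dst."""
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

# Rotograd-style idea: a rotation on task 2's copy of the shared
# representation re-expresses its gradient in a direction that no
# longer conflicts with task 1's gradient.
R2 = rotation_aligning(g2, g1)
g2_rot = R2 @ g2

print(cosine(g1, g2))      # negative: the raw gradients conflict
print(cosine(g1, g2_rot))  # ~1.0: directions homogenized after rotation
```

Note that a rotation changes only the direction, not the magnitude, of the gradient, which is why the paper pairs it with magnitude-balancing task-weighting schemes such as GradNorm.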



Related research

01/30/2023
ForkMerge: Overcoming Negative Transfer in Multi-Task Learning
The goal of multi-task learning is to utilize useful knowledge from mult...

01/31/2023
GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning
Multi-task learning (MTL) aims at solving multiple related tasks simulta...

11/22/2022
Mitigating Negative Transfer in Multi-Task Learning with Exponential Moving Average Loss Weighting Strategies
Multi-Task Learning (MTL) is a growing subject of interest in deep learn...

07/07/2023
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection
This paper proposes a novel approach to mitigate the negative transfer...

08/05/2020
Learning Boost by Exploiting the Auxiliary Task in Multi-task Domain
Learning two tasks in a single shared function has some benefits. Firstl...

02/02/2022
Multi-Task Learning as a Bargaining Game
In Multi-task learning (MTL), a joint model is trained to simultaneously...

09/03/2020
Large Dimensional Analysis and Improvement of Multi Task Learning
Multi Task Learning (MTL) efficiently leverages useful information conta...
