GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning

01/31/2023
by Xin Dong, et al.

Multi-task learning (MTL) aims to solve multiple related tasks simultaneously and has experienced rapid growth in recent years. However, MTL models often suffer from performance degeneration due to negative transfer when learning several tasks at once. Related work has attributed this problem to conflicting gradients, in which case gradient updates useful to all tasks must be selected carefully. To this end, we propose a novel optimization approach for MTL, named GDOD, which manipulates the gradient of each task using an orthogonal basis decomposed from the span of all task gradients. GDOD explicitly decomposes gradients into task-shared and task-conflict components and adopts a general update rule that avoids interference across task gradients, guiding the update direction according to the task-shared components. Moreover, we prove the convergence of GDOD theoretically under both convex and non-convex assumptions. Experimental results on several multi-task datasets not only demonstrate the significant improvement GDOD brings when applied to existing MTL models but also show that our algorithm outperforms state-of-the-art optimization methods in terms of AUC and Logloss metrics.
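The abstract's core idea can be sketched in code. The following is a minimal NumPy illustration, not the paper's actual algorithm: it builds an orthonormal basis for the span of the task gradients, labels a basis direction "task-shared" when all tasks' projections onto it agree in sign (one plausible reading of the task-shared/task-conflict split), and assembles the update from the shared components only. The function name `gdod_update` and the sign-agreement rule are assumptions for illustration.

```python
import numpy as np

def gdod_update(task_grads):
    """Sketch of a gradient update via orthogonal decomposition (hypothetical).

    task_grads: (T, d) array, one flattened gradient per task.
    Returns a (d,) update built only from basis directions on which
    all task gradients agree in sign ("task-shared" components).
    """
    G = np.asarray(task_grads, dtype=float)  # (T, d)
    # Orthonormal basis for span{g_1, ..., g_T} via QR of G^T.
    Q, _ = np.linalg.qr(G.T)                 # (d, r) with orthonormal columns
    coeffs = G @ Q                           # (T, r): projection coefficients
    # A direction is "task-shared" if every task projects onto it with the
    # same sign; otherwise tasks pull in opposite ways ("task-conflict").
    shared = np.all(coeffs >= 0, axis=0) | np.all(coeffs <= 0, axis=0)
    # Sum projections per direction, then drop the conflicting directions.
    combined = coeffs.sum(axis=0)            # (r,)
    return Q[:, shared] @ combined[shared]   # (d,)
```

With two agreeing gradients the update reduces to their sum; with directly opposed gradients the conflicting direction is discarded and the update vanishes, which is the interference-avoidance behavior the abstract describes.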


Related research

ForkMerge: Overcoming Negative Transfer in Multi-Task Learning (01/30/2023)
The goal of multi-task learning is to utilize useful knowledge from mult...

Regularizing Deep Multi-Task Networks using Orthogonal Gradients (12/14/2019)
Deep neural networks are a promising approach towards multi-task learnin...

Rotograd: Dynamic Gradient Homogenization for Multi-Task Learning (03/03/2021)
While multi-task learning (MTL) has been successfully applied in several...

Multi-Task Learning as a Bargaining Game (02/02/2022)
In Multi-task learning (MTL), a joint model is trained to simultaneously...

Multi-task Learning with High-Dimensional Noisy Images (03/04/2021)
Recent medical imaging studies have given rise to distinct but inter-rel...

Learning Personalized Attribute Preference via Multi-task AUC Optimization (06/18/2019)
Traditionally, most of the existing attribute learning methods are train...

Independent Component Alignment for Multi-Task Learning (05/30/2023)
In a multi-task learning (MTL) setting, a single model is trained to tac...
