Independent Component Alignment for Multi-Task Learning

05/30/2023
by   Dmitry Senushkin, et al.

In a multi-task learning (MTL) setting, a single model is trained to tackle a diverse set of tasks jointly. Despite rapid progress in the field, MTL remains challenging due to optimization issues such as conflicting and dominating gradients. In this work, we propose using the condition number of a linear system of gradients as a stability criterion for MTL optimization. We theoretically demonstrate that the condition number reflects the aforementioned optimization issues. Accordingly, we present Aligned-MTL, a novel MTL optimization approach based on the proposed criterion that eliminates instability in the training process by aligning the orthogonal components of the linear system of gradients. While many recent MTL approaches guarantee convergence to a minimum, task trade-offs cannot be specified in advance. In contrast, Aligned-MTL provably converges to an optimal point with pre-defined task-specific weights, which provides more control over the optimization result. Through experiments, we show that the proposed approach consistently improves performance on a diverse set of MTL benchmarks, including semantic and instance segmentation, depth estimation, surface normal estimation, and reinforcement learning. The source code is publicly available at https://github.com/SamsungLabs/MTL.
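The core idea described above can be sketched in a few lines of numpy: stack the per-task gradients into a matrix, note that its condition number (ratio of largest to smallest singular value) grows under conflicting or dominating gradients, and equalize the principal components so the aligned system has condition number 1. This is a minimal illustration of the idea only, not the authors' implementation (see the linked repository); the function name `aligned_mtl_update`, its signature, and the uniform default task weights are assumptions made for this sketch.

```python
import numpy as np

def aligned_mtl_update(grads, weights=None, eps=1e-8):
    """Illustrative sketch: align the principal components of the task-gradient
    system so its condition number becomes 1, then combine with task weights.

    grads:   (T, D) array-like, one flattened gradient per task.
    weights: optional (T,) pre-defined task-specific weights (uniform if None).
    """
    G = np.asarray(grads, dtype=float)              # rows = task gradients
    T = G.shape[0]
    w = np.full(T, 1.0 / T) if weights is None else np.asarray(weights, dtype=float)

    # SVD of the gradient matrix; cond(G) = s_max / s_min is the stability
    # criterion: it blows up when gradients conflict or one task dominates.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)

    # Rescale every significant singular value to the smallest one, so the
    # aligned system's condition number is exactly 1 (orthogonal components
    # contribute equally); near-zero components are dropped.
    s_min = s[s > eps].min()
    s_aligned = np.where(s > eps, s_min, 0.0)
    G_aligned = U @ np.diag(s_aligned) @ Vt

    # Combined update direction with the pre-defined task weights.
    return w @ G_aligned
```

For example, with gradients `[[1, 0], [0, 2]]` the second task dominates (condition number 2); after alignment both components carry equal magnitude, and the uniformly weighted update is `[0.5, 0.5]`.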


Related research

- Conflict-Averse Gradient Descent for Multi-task Learning (10/26/2021)
- Measuring and Harnessing Transference in Multi-Task Learning (10/29/2020)
- Cross-task Attention Mechanism for Dense Multi-task Learning (06/17/2022)
- Multi-task Learning for Monocular Depth and Defocus Estimations with Real Images (08/21/2022)
- GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning (01/31/2023)
- On Steering Multi-Annotations per Sample for Multi-Task Learning (03/06/2022)
- Curriculum-based Asymmetric Multi-task Reinforcement Learning (11/07/2022)
