Leveraging convergence behavior to balance conflicting tasks in multi-task learning

Multi-task learning (MTL) is a learning paradigm that exploits correlated tasks to improve generalization. A common way to learn multiple tasks is the hard parameter sharing approach, in which a single architecture shares a subset of parameters across tasks, creating an inductive bias between them during training. Owing to its simplicity, its potential to improve generalization, and its reduced computational cost, this approach has attracted the attention of the scientific and industrial communities. However, tasks often conflict with one another, which makes it challenging to define how the gradients of multiple tasks should be combined to allow simultaneous learning. To address this problem, we draw on ideas from multi-objective optimization to propose a method that takes the temporal behavior of the gradients into account, creating a dynamic bias that adjusts the importance of each task during backpropagation. The method gives more attention to tasks that are diverging or that have not benefited from the most recent iterations, ensuring that simultaneous learning moves toward maximizing the performance of all tasks. We empirically show that the proposed method outperforms state-of-the-art approaches on learning conflicting tasks. Unlike the adopted baselines, our method ensures that all tasks reach good generalization performance.
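To make the idea concrete, the sketch below shows one plausible way to implement convergence-aware loss weighting in PyTorch: per-task loss histories are tracked over a sliding window, and tasks whose losses are rising or stagnating receive larger weights via a softmax over their recent trends. The class name `ConvergenceAwareWeighting` and the hyperparameters `window` and `temperature` are our own assumptions for exposition; this is a minimal illustration of the general technique, not the paper's exact formulation.

```python
import torch

class ConvergenceAwareWeighting:
    """Hypothetical sketch: weight each task's loss by its recent trend.

    Tasks whose losses are rising or stagnating over the last `window`
    iterations get larger weights; tasks that are improving get smaller
    ones. Illustrates a convergence-based dynamic bias, not the authors'
    exact method.
    """

    def __init__(self, num_tasks, window=10, temperature=1.0):
        self.window = window
        self.temperature = temperature
        self.history = [[] for _ in range(num_tasks)]

    def weights(self, losses):
        trends = []
        for hist, loss in zip(self.history, losses):
            hist.append(float(loss))
            if len(hist) > self.window:
                hist.pop(0)
            # Relative loss change over the window;
            # positive => diverging or stagnant task.
            trend = (hist[-1] - hist[0]) / (abs(hist[0]) + 1e-8)
            trends.append(trend)
        # Softmax over trends: diverging tasks receive more attention.
        t = torch.tensor(trends) / self.temperature
        return torch.softmax(t, dim=0)


# Usage inside a training loop, given a list of per-task scalar losses:
# weighter = ConvergenceAwareWeighting(num_tasks=3)
# w = weighter.weights([l.detach() for l in losses])
# total_loss = sum(wi * li for wi, li in zip(w, losses))
# total_loss.backward()
```

Detaching the losses before updating the history keeps the weights out of the computational graph, so the dynamic bias rescales each task's gradient without itself being differentiated through.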
