ForkMerge: Overcoming Negative Transfer in Multi-Task Learning

01/30/2023
by Junguang Jiang, et al.

The goal of multi-task learning is to utilize useful knowledge from multiple related tasks to improve the generalization performance of all tasks. However, learning multiple tasks simultaneously often yields worse performance than learning them independently, a phenomenon known as negative transfer. Most previous works attribute negative transfer in multi-task learning to gradient conflicts between tasks and propose heuristics that manipulate task gradients to mitigate the problem; these approaches mainly address optimization difficulty while overlooking generalization. To fully understand the root cause of negative transfer, we experimentally analyze it from the perspectives of optimization, generalization, and hypothesis space. Stemming from this analysis, we introduce ForkMerge, which periodically forks the model into multiple branches trained with different task weights and dynamically merges them, filtering out detrimental parameter updates to avoid negative transfer. Across a series of multi-task learning benchmarks, ForkMerge outperforms state-of-the-art methods and largely avoids negative transfer.
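The abstract only sketches the algorithm, but the fork-and-merge loop it describes can be outlined concretely. Below is a minimal, hypothetical PyTorch sketch of one ForkMerge round; the callbacks train_step and validate, the candidate task weightings, and the softmax-based merge weighting are illustrative assumptions, not the paper's exact procedure (which searches the merge coefficients on validation data).

```python
import copy

import torch


def fork_merge_round(model, task_weight_candidates, train_step, validate,
                     fork_interval):
    """Run one fork-merge round (illustrative sketch, not the paper's code).

    model                  -- a torch.nn.Module shared across tasks
    task_weight_candidates -- list of per-task weight vectors to try
    train_step(branch, w)  -- hypothetical callback: one optimization step
                              on the w-weighted sum of task losses
    validate(branch)       -- hypothetical callback: scalar validation score
    fork_interval          -- steps to train each branch before merging
    """
    # Fork: clone the current model into one branch per candidate weighting.
    branches = [copy.deepcopy(model) for _ in task_weight_candidates]
    for branch, weights in zip(branches, task_weight_candidates):
        for _ in range(fork_interval):
            train_step(branch, weights)

    # Merge: convex combination of branch parameters. Softmax over
    # validation scores is an assumption made for this sketch; the paper
    # instead searches the merge coefficients on validation data.
    scores = torch.tensor([validate(b) for b in branches])
    merge_w = torch.softmax(scores, dim=0)

    with torch.no_grad():
        branch_params = [dict(b.named_parameters()) for b in branches]
        for name, p in model.named_parameters():
            merged = sum(w * bp[name] for w, bp in zip(merge_w, branch_params))
            p.copy_(merged)
    return model
```

The point the sketch preserves is that the merge is data-driven: branches whose weighted-task updates hurt validation performance contribute little to the merged parameters, which is how detrimental updates get filtered out.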

