Gradient Surgery for Multi-Task Learning

01/19/2020
by Tianhe Yu, et al.

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance.
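The projection rule in the abstract can be made concrete. Below is a minimal NumPy sketch of that gradient surgery step, assuming flattened per-task gradient vectors; the function name pcgrad_update is illustrative and this is not the authors' released implementation. Whenever two task gradients conflict (negative inner product), the component of one that points against the other is removed; the paper's procedure visits the other tasks in random order and sums the surgered gradients into the final update.

```python
import numpy as np

def pcgrad_update(task_grads, rng=None):
    """Combine per-task gradients via gradient surgery: project each
    task's gradient onto the normal plane of any other task's gradient
    it conflicts with (negative cosine similarity)."""
    rng = np.random.default_rng() if rng is None else rng
    projected = [g.astype(float).copy() for g in task_grads]
    for i, g_i in enumerate(projected):
        # Visit the other tasks' original gradients in random order.
        for j in rng.permutation(len(task_grads)):
            if j == i:
                continue
            g_j = task_grads[j]
            dot = float(g_i @ g_j)
            if dot < 0.0:  # conflicting gradients (angle > 90 degrees)
                # Subtract the component of g_i along g_j.
                g_i -= (dot / (float(g_j @ g_j) + 1e-12)) * g_j
    # Sum the surgered gradients to form the update direction.
    return np.sum(projected, axis=0)

# Toy example with two conflicting 2-D gradients.
g1, g2 = np.array([1.0, 0.0]), np.array([-1.0, 1.0])
update = pcgrad_update([g1, g2])
```

In this toy case g1 is projected to (0.5, 0.5) and g2 to (0, 1), each orthogonal to the gradient it conflicted with, so the combined update (0.5, 1.5) no longer contains the interfering components.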

Related research

- Multi-task Learning for Continuous Control (02/03/2018)
- Conflict-Averse Gradient Descent for Multi-task Learning (10/26/2021)
- Adaptive Weight Assignment Scheme For Multi-task Learning (03/10/2023)
- Modular Universal Reparameterization: Deep Multi-task Learning Across Diverse Domains (05/31/2019)
- Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models (10/12/2020)
- Towards automatic construction of multi-network models for heterogeneous multi-task learning (03/21/2019)
- Curbing Task Interference using Representation Similarity-Guided Multi-Task Feature Sharing (08/19/2022)
