SLAW: Scaled Loss Approximate Weighting for Efficient Multi-Task Learning

09/16/2021
by Michael Crawshaw, et al.

Multi-task learning (MTL) is a subfield of machine learning with important applications, but the multi-objective nature of optimization in MTL leads to difficulties in balancing training between tasks. The best MTL optimization methods require individually computing the gradient of each task's loss function, which impedes scalability to a large number of tasks. In this paper, we propose Scaled Loss Approximate Weighting (SLAW), a method for multi-task optimization that matches the performance of the best existing methods while being much more efficient. SLAW balances learning between tasks by estimating the magnitude of each task's gradient without performing any extra backward passes. We provide theoretical and empirical justification for SLAW's estimation of gradient magnitudes. Experimental results on non-linear regression, multi-task computer vision, and virtual screening for drug discovery demonstrate that SLAW is significantly more efficient than strong baselines without sacrificing performance, and that it is applicable to a diverse range of domains.
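The idea described in the abstract, weighting each task's loss inversely to an estimate of its gradient magnitude obtained from loss statistics alone, can be sketched as follows. This is an illustrative implementation based only on the abstract's description, not the paper's exact algorithm; the class name, EMA decay parameter, and use of the loss's running standard deviation as the gradient-magnitude proxy are assumptions.

```python
import math

class LossStatWeighter:
    """Illustrative multi-task loss weighter in the spirit of SLAW.

    Estimates each task's gradient magnitude from the running standard
    deviation of its scalar loss (no extra backward passes), then weights
    losses inversely to that estimate so tasks contribute comparably.
    NOTE: a hedged sketch, not the paper's exact algorithm.
    """

    def __init__(self, num_tasks, beta=0.99):
        self.n = num_tasks
        self.beta = beta              # EMA decay for loss statistics (assumed)
        self.a = [0.0] * num_tasks    # EMA of squared losses
        self.b = [0.0] * num_tasks    # EMA of losses
        self.t = 0                    # step counter for bias correction

    def update(self, losses):
        """Ingest the latest per-task loss values and return weights
        that sum to num_tasks."""
        assert len(losses) == self.n
        self.t += 1
        correction = 1.0 - self.beta ** self.t  # bias-correct the EMAs
        s = []
        for i, loss in enumerate(losses):
            self.a[i] = self.beta * self.a[i] + (1 - self.beta) * loss * loss
            self.b[i] = self.beta * self.b[i] + (1 - self.beta) * loss
            a_hat = self.a[i] / correction
            b_hat = self.b[i] / correction
            # Running std of the loss as a proxy for gradient magnitude.
            s.append(math.sqrt(max(a_hat - b_hat * b_hat, 1e-12)))
        inv = [1.0 / si for si in s]
        z = sum(inv)
        return [self.n * w / z for w in inv]
```

In a training loop, the weighted total loss would be `sum(w_i * L_i)` with the weights refreshed each step, so only one backward pass through the combined loss is needed regardless of the number of tasks.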


Related research

- Docking-based Virtual Screening with Multi-Task Learning (11/18/2021)
- Conflict-Averse Gradient Descent for Multi-task Learning (10/26/2021)
- Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases (02/19/2018)
- MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning (02/18/2023)
- A Simple General Approach to Balance Task Difficulty in Multi-Task Learning (02/12/2020)
- Multi-Objective Optimization for Self-Adjusting Weighted Gradient in Machine Learning Tasks (06/03/2015)
- A Closer Look at Loss Weighting in Multi-Task Learning (11/20/2021)
