Multi-Task Learning as Multi-Objective Optimization

10/10/2018
by Ozan Sener et al.

In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions. We apply our method to a variety of multi-task deep learning problems including digit classification, scene understanding (joint semantic segmentation, instance segmentation, and depth estimation), and multi-label classification. Our method produces higher-performing models than recent multi-task learning formulations or per-task training.
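The paper's approach builds on gradient-based multi-objective optimization, where a common descent direction is found by taking the minimum-norm point in the convex hull of the per-task gradients. For two tasks this min-norm problem has a simple closed form. The sketch below illustrates that two-task case with NumPy; the function name and example gradients are illustrative, not the authors' code:

```python
import numpy as np

def min_norm_alpha(g1, g2):
    """Return alpha in [0, 1] minimizing ||alpha*g1 + (1-alpha)*g2||^2.

    This is the two-task special case of the min-norm subproblem used in
    MGDA-style multi-objective optimization: the resulting combination
    alpha*g1 + (1-alpha)*g2 is a common descent direction for both tasks
    (or zero at a Pareto-stationary point).
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:  # gradients coincide; any convex combination is optimal
        return 0.5
    alpha = ((g2 - g1) @ g2) / denom  # unconstrained minimizer, then clip
    return float(np.clip(alpha, 0.0, 1.0))

# Example: two orthogonal (conflicting) task gradients
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
alpha = min_norm_alpha(g1, g2)
update = alpha * g1 + (1 - alpha) * g2  # shared descent direction
```

With more than two tasks, the paper solves the same min-norm problem over all task gradients with a Frank-Wolfe-style procedure that repeatedly applies this two-point update; the proposed upper bound additionally avoids computing one full backward pass per task.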


Related research:

research · 06/29/2020 · Efficient Continuous Pareto Exploration in Multi-Task Learning
Tasks in multi-task learning often correlate, conflict, or even compete ...

research · 10/23/2022 · Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Stochastic Approach
Machine learning problems with multiple objective functions appear eithe...

research · 08/02/2021 · Exact Pareto Optimal Search for Multi-Task Learning: Touring the Pareto Front
Multi-Task Learning (MTL) is a well-established paradigm for training de...

research · 02/12/2020 · A Simple General Approach to Balance Task Difficulty in Multi-Task Learning
In multi-task learning, difficulty levels of different tasks are varying...

research · 10/18/2022 · Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models
In Multi-Task Learning, tasks may compete and limit the performance achi...

research · 07/24/2021 · Training multi-objective/multi-task collocation physics-informed neural network with student/teachers transfer learnings
This paper presents a PINN training framework that employs (1) pre-train...

research · 08/27/2023 · Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective
Linear scalarization, i.e., combining all loss functions by a weighted s...
