Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective

08/27/2023
by Yuzheng Hu, et al.

Linear scalarization, i.e., combining all loss functions by a weighted sum, has been the default choice in the multi-task learning (MTL) literature since its inception. In recent years, there has been a surge of interest in developing Specialized Multi-Task Optimizers (SMTOs) that treat MTL as a multi-objective optimization problem. However, it remains open whether SMTOs hold a fundamental advantage over scalarization, and heated debates comparing the two types of algorithms exist in the community, mostly from an empirical perspective. To approach this question, in this paper, we revisit scalarization from a theoretical perspective. We focus on linear MTL models and study whether scalarization is capable of fully exploring the Pareto front. Our findings reveal that, in contrast to recent works that claimed empirical advantages of scalarization, scalarization is inherently incapable of full exploration, especially for those Pareto optimal solutions that strike balanced trade-offs between multiple tasks. More concretely, when the model is under-parametrized, we reveal a multi-surface structure of the feasible region and identify necessary and sufficient conditions for full exploration. This leads to the conclusion that scalarization is in general incapable of tracing out the Pareto front. Our theoretical results partially answer the open questions in Xin et al. (2021) and provide a more intuitive explanation of why scalarization fails beyond non-convexity. We additionally perform experiments on a real-world dataset using both scalarization and state-of-the-art SMTOs. The experimental results not only corroborate our theoretical findings, but also unveil the potential of SMTOs in finding balanced solutions, which scalarization cannot achieve.
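To make the weighted-sum formulation concrete, here is a minimal NumPy sketch of linear scalarization, min over theta of w * L1(theta) + (1 - w) * L2(theta), on a toy two-task linear regression. The data (X1, y1, X2, y2), the weight grid, and the closed-form solve are illustrative assumptions, not the paper's construction or experimental setup.

```python
import numpy as np

# Toy two-task linear regression with a shared parameter vector theta.
# Task i has loss L_i(theta) = ||X_i @ theta - y_i||^2 / n.
rng = np.random.default_rng(0)
n, d = 100, 2
X1 = rng.normal(size=(n, d))
y1 = X1 @ np.array([1.0, 0.0]) + 0.1 * rng.normal(size=n)
X2 = rng.normal(size=(n, d))
y2 = X2 @ np.array([0.0, 1.0]) + 0.1 * rng.normal(size=n)

def task_losses(theta):
    """Return the per-task mean squared errors (L1, L2)."""
    l1 = np.mean((X1 @ theta - y1) ** 2)
    l2 = np.mean((X2 @ theta - y2) ** 2)
    return l1, l2

# Linear scalarization: minimize w * L1 + (1 - w) * L2 over a grid of
# weights. For quadratic losses the scalarized problem is solved in
# closed form by a single linear system (the first-order condition).
for w in np.linspace(0.05, 0.95, 7):
    A = w * X1.T @ X1 / n + (1 - w) * X2.T @ X2 / n
    b = w * X1.T @ y1 / n + (1 - w) * X2.T @ y2 / n
    theta_w = np.linalg.solve(A, b)
    l1, l2 = task_losses(theta_w)
    print(f"w = {w:.2f}  L1 = {l1:.4f}  L2 = {l2:.4f}")
```

Note that in this fully convex toy problem, sweeping w does trace out the attainable trade-off curve; the paper's negative result concerns under-parametrized linear MTL models, where the feasible region has a multi-surface structure and such weight sweeps provably miss parts of the Pareto front.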


research · 06/29/2020
Efficient Continuous Pareto Exploration in Multi-Task Learning
Tasks in multi-task learning often correlate, conflict, or even compete ...

research · 12/30/2019
Pareto Multi-Task Learning
Multi-task learning is a powerful method for solving multiple correlated...

research · 10/10/2018
Multi-Task Learning as Multi-Objective Optimization
In multi-task learning, multiple tasks are solved jointly, sharing induc...

research · 02/12/2020
A Simple General Approach to Balance Task Difficulty in Multi-Task Learning
In multi-task learning, difficulty levels of different tasks are varying...

research · 10/18/2022
Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models
In Multi-Task Learning, tasks may compete and limit the performance achi...

research · 10/02/2021
Fast Line Search for Multi-Task Learning
Multi-task learning is a powerful method for solving several tasks joint...
