Exact Pareto Optimal Search for Multi-Task Learning: Touring the Pareto Front

08/02/2021
by   Debabrata Mahapatra, et al.

Multi-Task Learning (MTL) is a well-established paradigm for training deep neural network models for multiple correlated tasks. The task objectives often conflict, requiring trade-offs between them during model building. In such cases, MTL models can use gradient-based multi-objective optimization (MOO) to find one or more Pareto optimal solutions. A common requirement in MTL applications is to find an Exact Pareto Optimal (EPO) solution, one that satisfies user preferences over the task-specific objective functions. Further, to improve generalization, various constraints on the model weights may need to be enforced during training. Addressing these requirements is challenging: it demands a search direction that allows descent not only towards the Pareto front but also towards the input preference, within the imposed constraints, and in a manner that scales to high-dimensional gradients. We design and theoretically analyze such search directions and develop the first scalable algorithm, with theoretical guarantees of convergence, for finding an EPO solution, including when box and equality constraints are imposed. Our method combines multiple-gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, making it robust to initialization. This also enables systematic exploration of the Pareto front, which we use to approximate the front for multi-criteria decision-making. Empirical results show that our algorithm outperforms competing methods on benchmark MTL datasets and MOO problems.
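
To make the idea concrete, the sketch below shows one way a simplified EPO-style update could alternate between controlled ascent (re-equalizing preference-weighted losses) and preference-weighted descent. It is an illustrative assumption, not the authors' algorithm: the published method chooses the search direction by solving a small optimization over the gradients, whereas this sketch uses a crude positive-part weighting; the function name `epo_search_direction`, the KL-based non-uniformity test, the step size, and the toy two-task problem are all invented for illustration.

```python
import numpy as np

def epo_search_direction(grads, losses, pref, eps=1e-3):
    """One step of a simplified EPO-style search direction (a sketch).

    grads:  (m, n) array; row i is the gradient of task loss i w.r.t.
            the shared parameters.
    losses: (m,) current task losses, assumed strictly positive.
    pref:   (m,) user preference vector; at an exact Pareto optimal
            point, pref[i] * losses[i] is equal across tasks, so a
            larger pref[i] asks for a smaller loss on task i.

    Returns an (n,) direction to subtract from the parameters.
    """
    m = grads.shape[0]
    # Normalized preference-weighted losses; uniform at an EPO point.
    wl = pref * losses
    mu = wl / wl.sum()
    uniform = np.full(m, 1.0 / m)
    # KL divergence from uniform measures how far we are from EPO balance.
    nonuniformity = np.sum(mu * np.log(m * mu + 1e-12))

    if nonuniformity > eps:
        # Balance mode: descend only on tasks whose weighted loss is
        # above average; the other tasks may ascend (controlled ascent).
        alpha = np.clip(mu - uniform, 0.0, None)
        alpha = alpha / alpha.sum()
    else:
        # Descent mode: a preference-weighted combination of gradients,
        # moving toward the Pareto front while keeping the balance.
        alpha = pref / pref.sum()
    return alpha @ grads

# Toy usage: two conflicting quadratic tasks over shared parameters x.
x = np.array([2.0, 2.0])
pref = np.array([0.8, 0.2])  # ask for a 4x smaller loss on task 1
for _ in range(500):
    l1 = np.sum((x - 1.0) ** 2) + 0.1
    l2 = np.sum((x + 1.0) ** 2) + 0.1
    g1 = 2.0 * (x - 1.0)
    g2 = 2.0 * (x + 1.0)
    d = epo_search_direction(np.stack([g1, g2]), np.array([l1, l2]), pref)
    x = x - 0.05 * d
print("x:", x, "weighted losses:", 0.8 * l1, 0.2 * l2)  # roughly equal
```

The design point the sketch preserves is that some task losses are deliberately allowed to increase while the preference-weighted losses are being equalized, which is what lets the iterate move along, rather than only towards, the Pareto front.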


Related research

- Pareto Multi-Task Learning (12/30/2019)
  Multi-task learning is a powerful method for solving multiple correlated...
- Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set (10/17/2021)
  Many modern machine learning applications, such as multi-task learning, ...
- Efficient Continuous Pareto Exploration in Multi-Task Learning (06/29/2020)
  Tasks in multi-task learning often correlate, conflict, or even compete ...
- Multi-Task Learning as Multi-Objective Optimization (10/10/2018)
  In multi-task learning, multiple tasks are solved jointly, sharing induc...
- Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models (10/18/2022)
  In Multi-Task Learning, tasks may compete and limit the performance achi...
- Multi-Objective Optimization for Sparse Deep Neural Network Training (08/23/2023)
  Different conflicting optimization criteria arise naturally in various D...
- Training multi-objective/multi-task collocation physics-informed neural network with student/teachers transfer learnings (07/24/2021)
  This paper presents a PINN training framework that employs (1) pre-train...
