Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set

10/17/2021
by Mao Ye, et al.

Many modern machine learning applications, such as multi-task learning, require finding optimal model parameters that trade off multiple objective functions which may conflict with each other. The notion of the Pareto set allows us to focus on the (often infinite) set of models that cannot be strictly improved, but it does not provide an actionable procedure for picking one or a few special models to return to practitioners. In this paper, we consider optimization in Pareto set (OPT-in-Pareto), the problem of finding Pareto models that optimize an extra reference criterion function within the Pareto set. This function can either encode a specific preference from the users or represent a generic diversity measure for obtaining a set of diversified Pareto models that are representative of the whole Pareto set. Unfortunately, despite being a highly useful framework, efficient algorithms for OPT-in-Pareto have been largely missing, especially for large-scale, non-convex, and non-linear objectives in deep learning. A naive approach is to apply Riemannian manifold gradient descent on the Pareto set, which incurs a high computational cost due to the need for eigen-computations on Hessian matrices. We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information, with both high practical efficiency and theoretically guaranteed convergence properties. Empirically, we demonstrate that our method works efficiently on a variety of challenging multi-task-related problems.
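To make the gradient-only flavor of OPT-in-Pareto concrete, here is a hypothetical sketch (the function names, step sizes, and the MGDA-style direction are assumptions for illustration, not the paper's exact update rule). The step (i) descends the minimum-norm convex combination of the task gradients, which pulls the iterate toward the Pareto set, and (ii) adds a small component of the reference criterion's gradient; no Hessian is ever formed and no eigen-computation is needed.

```python
import numpy as np

def min_norm_combo(g1, g2):
    """Min-norm point on the segment {a*g1 + (1-a)*g2 : a in [0, 1]}.

    For two tasks this has a closed form; it is the common descent
    direction used by MGDA-style multi-objective methods.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1.copy()
    a = np.clip(float((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2

def opt_in_pareto_step(x, task_grads, criterion_grad, lr=0.05, beta=0.5):
    """One first-order step: descend the common task direction while
    nudging the iterate along the extra criterion (hypothetical rule)."""
    d = min_norm_combo(*task_grads)
    return x - lr * (d + beta * criterion_grad)

# Toy example: two quadratic tasks whose Pareto set is the segment
# between a_pt and b_pt; the criterion prefers the b_pt end.
a_pt = np.array([0.0, 0.0])
b_pt = np.array([1.0, 0.0])
x = np.array([0.5, 2.0])
for _ in range(200):
    g1 = 2 * (x - a_pt)            # grad of f1(x) = ||x - a_pt||^2
    g2 = 2 * (x - b_pt)            # grad of f2(x) = ||x - b_pt||^2
    gf = 2 * (x - b_pt)            # grad of the reference criterion
    x = opt_in_pareto_step(x, (g1, g2), gf)
# x converges near b_pt = [1, 0]: on the Pareto set, at the preferred end
```

Note that on the Pareto segment the min-norm direction vanishes along the segment, so the criterion gradient alone selects which Pareto point is returned; with more than two tasks the min-norm combination requires a small quadratic program rather than the closed form above.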


Related research

08/02/2021 - Exact Pareto Optimal Search for Multi-Task Learning: Touring the Pareto Front
  Multi-Task Learning (MTL) is a well-established paradigm for training de...

09/15/2022 - Efficient first-order predictor-corrector multiple objective optimization for fair misinformation detection
  Multiple-objective optimization (MOO) aims to simultaneously optimize mu...

02/16/2022 - How to Fill the Optimum Set? Population Gradient Descent with Harmless Diversity
  Although traditional optimization methods focus on finding a single opti...

12/06/2021 - Incentive Compatible Pareto Alignment for Multi-Source Large Graphs
  In this paper, we focus on learning effective entity matching models ove...

12/07/2021 - Multi-Task Learning on Networks
  The multi-task learning (MTL) paradigm can be traced back to an early pa...

10/19/2022 - A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences
  Deep generative models have emerged as a popular machine learning-based ...

10/14/2022 - Efficiently Controlling Multiple Risks with Pareto Testing
  Machine learning applications frequently come with multiple diverse obje...
