
An accelerated proximal gradient method for multiobjective optimization

by   Hiroki Tanabe, et al.

Many descent methods for multiobjective optimization problems have been developed in recent years. In 2000, the steepest descent method was proposed for differentiable multiobjective optimization problems. Afterward, the proximal gradient method, which can solve composite problems, was also considered. However, accelerated versions have not been studied sufficiently. In this paper, we propose a multiobjective accelerated proximal gradient algorithm, in which we solve subproblems containing terms that appear only in the multiobjective case. We also show the proposed method's global convergence rate of O(1/k^2) under reasonable assumptions, using a merit function to measure the complexity. Moreover, we present an efficient way to solve the subproblem via its dual, and we confirm the validity of the proposed method through preliminary numerical experiments.
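For context, the single-objective baseline the paper generalizes is the accelerated proximal gradient method (FISTA). The sketch below is not the authors' multiobjective algorithm; it is a minimal single-objective version applied to an assumed example problem, min 0.5*||Ax - b||^2 + lam*||x||_1, where the proximal step is soft-thresholding:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=500):
    """Single-objective accelerated proximal gradient (FISTA) sketch for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1. Example problem is assumed,
    not taken from the paper."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                          # extrapolated point
    t = 1.0                               # momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)          # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

In the multiobjective setting of the paper, the single proximal step is replaced by a subproblem over all objectives (solved via its dual), but the extrapolation structure above is the same mechanism that yields the O(1/k^2) rate.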



