On the complexity of convex inertial proximal algorithms

01/23/2018
by Tao Sun, et al.

The inertial proximal gradient algorithm is efficient for composite optimization problems. Recently, the convergence of a special inertial proximal gradient algorithm under strong convexity has also been studied. In this paper, we present novel convergence complexity results, focusing on the convergence rates of the function values. A non-ergodic O(1/k) rate is proved for the inertial proximal gradient algorithm with constant stepsize when the objective function is coercive. When the objective function fails to be coercive, we prove a sublinear rate under diminishing inertial parameters. When the function satisfies a condition much weaker than strong convexity, linear convergence is proved under a larger and more general stepsize than in the previous literature. We also extend our results to the multi-block version and present its computational complexity; both cyclic and stochastic index selection strategies are considered.
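As a point of reference, below is a minimal NumPy sketch of the generic inertial proximal gradient iteration for minimizing f(x) + g(x) with f smooth and g proximable. The LASSO instance, the 1/L stepsize, and the inertial parameter beta are illustrative choices for demonstration only, not the specific schemes or parameter ranges analyzed in the paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_proximal_gradient(grad_f, prox_g, x0, step, beta, n_iters=500):
    """Inertial proximal gradient iteration for min f(x) + g(x):
        y^k     = x^k + beta_k * (x^k - x^{k-1})    (inertial extrapolation)
        x^{k+1} = prox_{step*g}(y^k - step * grad_f(y^k))
    `beta` may be a constant or a callable k -> beta_k (e.g. diminishing).
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(n_iters):
        b = beta(k) if callable(beta) else beta
        y = x + b * (x - x_prev)
        x_prev, x = x, prox_g(y - step * grad_f(y), step)
    return x

# Illustrative instance: LASSO, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: prox_l1(v, lam * t)
x_hat = inertial_proximal_gradient(grad_f, prox_g, np.zeros(100),
                                   step=1.0 / L, beta=0.3)
```

Passing a callable such as beta=lambda k: 1.0 / (k + 2) gives the diminishing inertial parameters mentioned in the abstract; the constant beta=0.3 corresponds to the constant-stepsize, constant-inertia regime.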


Related research

10/11/2019
General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme
The incremental aggregated gradient algorithm is popular in network opti...

01/17/2018
On the Proximal Gradient Algorithm with Alternated Inertia
In this paper, we investigate the attractive properties of the proximal ...

05/05/2020
Inertial Stochastic PALM and its Application for Learning Student-t Mixture Models
Inertial algorithms for minimizing nonsmooth and nonconvex functions as ...

08/10/2021
Computational complexity of Inexact Proximal Point Algorithm for Convex Optimization under Holderian Growth
Several decades ago the Proximal Point Algorithm (PPA) started to gain a ...

04/07/2021
Time-Data Tradeoffs in Structured Signals Recovery via Proximal-Gradient Homotopy Method
In this paper, we characterize data-time tradeoffs of the proximal-gradi...

04/06/2019
Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization
Backtracking line-search is an old yet powerful strategy for finding bet...

10/08/2019
Bregman Proximal Framework for Deep Linear Neural Networks
A typical assumption for the analysis of first order optimization method...
