Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods

02/09/2016
by Guoyin Li, et al.

In this paper, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo-Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is 1/2. The Luo-Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models arising in applications such as sparse recovery, the objective function's KL exponent is 1/2. This includes the least squares problem with smoothly clipped absolute deviation (SCAD) or minimax concave penalty (MCP) regularization, and the logistic regression problem with ℓ_1 regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex setting rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establish local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.
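For readers unfamiliar with the terminology, the KL inequality referred to above is commonly stated as follows (this is a standard formulation from the literature, not quoted from the paper): a proper closed function f satisfies the KL property at a stationary point \bar{x} with exponent \theta \in [0, 1) if there exist c > 0, \epsilon > 0 and \nu > 0 such that

\[
\mathrm{dist}\bigl(0, \partial f(x)\bigr) \ \ge\ c\,\bigl(f(x) - f(\bar{x})\bigr)^{\theta}
\qquad \text{whenever } \|x - \bar{x}\| \le \epsilon \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \nu.
\]

The case \theta = 1/2, which the paper establishes for many structured models, is the one that typically yields local linear convergence of first-order methods.

As a concrete illustration of one of the models covered (ℓ_1-regularized logistic regression) and of the proximal gradient method with a constant step size, the following sketch may be helpful. It is an illustrative implementation under standard assumptions (labels in {-1, +1}, step size 1/L with L = ||A||_2^2 / (4n)), not code from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_l1_logreg(A, b, lam, max_iter=5000, tol=1e-10):
    """Proximal gradient method with constant step size for
    minimize (1/n) * sum_i log(1 + exp(-b_i * a_i^T x)) + lam * ||x||_1,
    with labels b_i in {-1, +1}.
    """
    n, d = A.shape
    # Lipschitz constant of the gradient of the average logistic loss
    L = np.linalg.norm(A, 2) ** 2 / (4.0 * n)
    t = 1.0 / L
    x = np.zeros(d)
    for _ in range(max_iter):
        z = A @ x
        # gradient of the smooth part (may warn on overflow for large margins)
        grad = -(A.T @ (b / (1.0 + np.exp(b * z)))) / n
        x_new = soft_threshold(x - t * grad, t * lam)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# small synthetic example (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = np.sign(A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(200))
x_hat = prox_grad_l1_logreg(A, b, lam=0.1)
print("nonzeros:", np.count_nonzero(x_hat))
```

When the objective has KL exponent 1/2 at its stationary points, iterations of this kind are the ones for which the paper's results give local linear convergence.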


Related research

02/10/2019 - Deducing Kurdyka-Łojasiewicz exponent via inf-projection
Kurdyka-Łojasiewicz (KL) exponent plays an important role in estimating ...

10/10/2021 - Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality
We study the random reshuffling (RR) method for smooth nonconvex optimiz...

04/19/2018 - A refined convergence analysis of pDCA_e with applications to simultaneous sparse recovery and outlier detection
We consider the problem of minimizing a difference-of-convex (DC) functi...

02/12/2018 - Convergence Analysis of Alternating Nonconvex Projections
We consider the convergence properties for alternating projection algori...

03/29/2023 - An inexact linearized proximal algorithm for a class of DC composite optimization problems and applications
This paper is concerned with a class of DC composite optimization proble...

05/10/2023 - Convergence of a Normal Map-based Prox-SGD Method under the KL Inequality
In this paper, we present a novel stochastic normal map-based algorithm ...

11/18/2017 - Proximal Gradient Method with Extrapolation and Line Search for a Class of Nonconvex and Nonsmooth Problems
In this paper, we consider a class of possibly nonconvex, nonsmooth and ...
