Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality

10/10/2021
by   Xiao Li, et al.

We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely used in practice, for example in the training of neural networks, its convergence behavior is understood only in a few limited settings. In this paper, under the well-known Kurdyka-Łojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with appropriate diminishing step sizes: the whole sequence of iterates generated by RR converges, almost surely, to a single stationary point. In addition, we derive the corresponding rate of convergence, which depends on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in [0, 1/2], the convergence rate is 𝒪(t^-1), where t counts the iteration number. When the KL exponent belongs to (1/2, 1), the derived convergence rate is of the form 𝒪(t^-q) with q ∈ (0, 1) depending on the KL exponent. The standard KL inequality-based convergence analysis framework applies only to algorithms with a certain descent property. Remarkably, we carry out the KL-based convergence analysis for RR, a non-descent method, with diminishing step sizes, thereby generalizing the standard KL analysis framework. We summarize the main steps and core ideas in an analysis framework that is of independent interest. As a direct application of this framework, we also establish analogous strong limit-point convergence results for the shuffled proximal point method.
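For concreteness, here is a minimal sketch of the RR update on a finite-sum problem min_x (1/n) Σ_i f_i(x): in each epoch the n component indices are drawn without replacement (a fresh random permutation), and one incremental gradient step is taken per component. The step-size schedule α_t = α_0/(t+1), the quadratic toy components, and all names in the snippet are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def random_reshuffling(grads, x0, epochs=100, alpha0=0.5):
    """Random reshuffling (RR) for min_x (1/n) * sum_i f_i(x).

    grads  : list of n callables; grads[i](x) returns the gradient of f_i at x.
    x0     : initial point (numpy array).
    alpha0 : base step size; alpha_t = alpha0 / (t + 1) is one common
             diminishing schedule (an illustrative choice).
    """
    x = x0.copy()
    n = len(grads)
    rng = np.random.default_rng(0)
    for t in range(epochs):
        alpha = alpha0 / (t + 1)      # diminishing step size
        perm = rng.permutation(n)     # fresh shuffle each epoch
        for i in perm:                # one pass over all n components
            x = x - alpha * grads[i](x)
    return x

# Toy finite sum: f_i(x) = 0.5 * ||x - a_i||^2, so the minimizer
# of the average is the mean of the a_i.
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [lambda x, ai=ai: x - ai for ai in a]
x_star = random_reshuffling(grads, x0=np.zeros(2))
print(x_star, np.mean(a, axis=0))  # both approach the mean of the a_i
```

The key contrast with plain SGD is the without-replacement sampling: every component is visited exactly once per epoch, which is the structure the RR analysis exploits.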


