Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality

10/10/2021
by Xiao Li, et al.

We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely used in practice, for example in the training of neural networks, its convergence behavior has been understood only in a few limited settings. In this paper, under the well-known Kurdyka-Łojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with appropriate diminishing step sizes: the whole sequence of iterates generated by RR converges to a single stationary point in an almost sure sense. In addition, we derive the corresponding rate of convergence, which depends on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in [0, 1/2], the convergence rate is 𝒪(t^{-1}), where t counts the number of iterations. When the KL exponent belongs to (1/2, 1), the derived convergence rate is of the form 𝒪(t^{-q}) with q ∈ (0, 1) depending on the KL exponent. The standard KL-inequality-based convergence analysis framework applies only to algorithms with a certain descent property. Remarkably, we conduct a convergence analysis for the non-descent RR method with diminishing step sizes based on the KL inequality, which generalizes the standard KL analysis framework. We summarize our main steps and core ideas in an analysis framework that is of independent interest. As a direct application of this framework, we also establish similar strong limit-point convergence results for the shuffled proximal point method.
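To make the method concrete, here is a minimal sketch of random reshuffling with diminishing step sizes on a toy finite-sum least-squares problem. The objective, the step-size constant c, and the per-epoch schedule gamma_t = c/(t+1) are illustrative assumptions chosen to mirror the 𝒪(t^{-1}) regime discussed above, not the paper's exact setup.

```python
import numpy as np

# Toy finite-sum problem: f(x) = (1/n) * sum_i f_i(x),
# with f_i(x) = 0.5 * (a_i^T x - b_i)^2.  (Illustrative data, not from the paper.)
rng = np.random.default_rng(0)
n, d = 50, 5                      # n component functions, dimension d
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component f_i at x."""
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
c = 0.5                           # step-size constant (assumed)
for t in range(200):              # epoch counter
    gamma = c / (t + 1)           # diminishing step size, on the order of 1/t
    perm = rng.permutation(n)     # reshuffle the components once per epoch
    for i in perm:                # one full pass through all n components
        x -= gamma * grad_i(x, i)

print("final iterate:", x)
```

The key contrast with plain SGD is the sampling scheme: each epoch visits every component exactly once in a fresh random order, rather than drawing indices independently with replacement.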

