On the global convergence of randomized coordinate gradient descent for non-convex optimization

01/05/2021
by Ziang Chen, et al.

In this work, we analyze the global convergence property of coordinate gradient descent with random choice of coordinates and stepsizes for non-convex optimization problems. Under generic assumptions, we prove that the iterates of the algorithm almost surely escape strict saddle points of the objective function. As a result, the algorithm is guaranteed to converge to local minima if all saddle points are strict. Our proof is based on viewing the coordinate descent algorithm as a nonlinear random dynamical system and on a quantitative finite-block analysis of its linearization around saddle points.
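A minimal sketch of the randomized coordinate update described in the abstract, assuming access to a full-gradient oracle and a finite set of candidate stepsizes; the objective f(x, y) = x^2 - y^2 + y^4/4 (whose origin is a strict saddle point) and all function and parameter names are illustrative choices, not taken from the paper.

```python
import numpy as np

def randomized_coordinate_gradient_descent(grad, x0, step_sizes, n_iters=1000, rng=None):
    """At each iteration, pick one coordinate and one stepsize uniformly at random,
    then update only that coordinate using its partial derivative."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(n_iters):
        i = rng.integers(d)           # random coordinate
        eta = rng.choice(step_sizes)  # random stepsize from the candidate set
        g = grad(x)                   # full gradient; only component i is used
        x[i] -= eta * g[i]
    return x

# Illustrative non-convex objective f(x, y) = x^2 - y^2 + y^4 / 4:
# the origin is a strict saddle; the minima are at (0, ±sqrt(2)).
grad_f = lambda x: np.array([2 * x[0], -2 * x[1] + x[1] ** 3])
x_final = randomized_coordinate_gradient_descent(grad_f, x0=[0.1, 0.01], step_sizes=[0.05, 0.1])
print(x_final)  # typically close to (0, sqrt(2)), i.e. the iterates escape the saddle
```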
