On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems

01/31/2020
by   Dan Garber, et al.

We revisit the use of Stochastic Gradient Descent (SGD) for solving convex optimization problems that serve as highly popular convex relaxations for many important low-rank matrix recovery problems, such as matrix completion, phase retrieval, and more. The computational limitation of applying SGD to these relaxations at large scale is the need to compute a potentially high-rank singular value decomposition (SVD) on each iteration in order to enforce the low-rank-promoting constraint. We begin by considering a simple and natural sufficient condition under which these relaxations indeed admit low-rank solutions. This condition is also necessary for a certain notion of low-rank robustness to hold. Our main result shows that under this condition, which involves the eigenvalues of the gradient vector at optimal points, SGD with mini-batches, when initialized with a "warm-start" point, produces iterates that are low-rank with high probability, and hence only a low-rank SVD computation is required on each iteration. This suggests that SGD may indeed be practically applicable to solving large-scale convex relaxations of low-rank matrix recovery problems. Our theoretical results are accompanied by supporting preliminary empirical evidence. As a side benefit, our analysis is quite simple and short.
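The setting described above can be illustrated with a minimal NumPy sketch (an illustrative toy, not the authors' implementation or analysis): mini-batch SGD for a matrix-completion relaxation constrained to a nuclear-norm ball, where each iteration takes a stochastic gradient step over a batch of observed entries and projects back onto the ball via an SVD. The projection, batch sizes, and step size here are assumptions for illustration; the paper's point is that, under its eigenvalue condition and with a warm start, the iterates remain low-rank, so the full SVD below could be replaced by a cheap rank-r truncated SVD.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of a nonnegative vector v onto the l1 ball of radius tau."""
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)    # soft-threshold level
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(X, tau):
    """Projection onto {X : ||X||_* <= tau}: project the singular values onto the l1 ball.

    A full SVD is used here for simplicity; when the projection is known to be
    rank-r, a truncated SVD (e.g., scipy.sparse.linalg.svds) suffices.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * project_l1_ball(s, tau)) @ Vt

def sgd_matrix_completion(shape, observations, tau, lr=0.1, epochs=30, batch=32, seed=0):
    """Mini-batch projected SGD for min_X sum_{(i,j,m) in S} (X_ij - m)^2, ||X||_* <= tau.

    `observations` is a list of (i, j, value) triples for the observed entries.
    """
    rng = np.random.default_rng(seed)
    X = np.zeros(shape)                       # (a real warm start would go here)
    idx = np.arange(len(observations))
    for _ in range(epochs):
        rng.shuffle(idx)
        for start in range(0, len(idx), batch):
            G = np.zeros(shape)               # stochastic gradient over the batch
            for k in idx[start:start + batch]:
                i, j, m = observations[k]
                G[int(i), int(j)] = 2.0 * (X[int(i), int(j)] - m)
            X = project_nuclear_ball(X - lr * G, tau)
    return X
```

For instance, on a small rank-one ground-truth matrix with all entries observed and `tau` set to its nuclear norm, the iterates converge toward the ground truth while every iterate stays inside the nuclear-norm ball.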


