Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems

02/08/2022
by Dan Garber, et al.

Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning. While significant progress has been made in recent years in developing efficient methods for smooth low-rank optimization problems that avoid maintaining high-rank matrices and computing expensive high-rank SVDs, progress on nonsmooth problems has been slower. In this paper we consider standard convex relaxations for such problems. Mainly, we prove that under a natural generalized strict complementarity condition and under the relatively mild assumption that the nonsmooth objective can be written as a maximum of smooth functions, the extragradient method, when initialized with a "warm-start" point, converges to an optimal solution with rate O(1/t) while requiring only two low-rank SVDs per iteration. We give a precise trade-off between the rank of the SVDs required and the radius of the ball in which the method needs to be initialized. We support our theoretical results with empirical experiments on several nonsmooth low-rank matrix recovery tasks, demonstrating that, with simple initializations, the extragradient method produces exactly the same iterates when full-rank SVDs are replaced with SVDs whose rank matches the rank of the (low-rank) ground-truth matrix to be recovered.
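To make the idea concrete, below is a minimal Python sketch of one extragradient iteration in which the Euclidean projection is computed from a rank-r truncated SVD rather than a full SVD. This is not the paper's exact algorithm: it assumes the feasible set is a nuclear-norm ball of radius tau, uses a generic gradient oracle `grad` instead of the paper's max-of-smooth-functions (saddle-point) formulation, and the names `project_nuclear_ball_lowrank`, `extragradient_step`, and the rank parameter `r` are illustrative. The low-rank projection coincides with the exact one only when the true projection has rank at most r, which is the regime the generalized strict complementarity condition is meant to capture near the optimum.

```python
import numpy as np

def project_nuclear_ball_lowrank(Y, tau, r):
    """Projection of Y onto {X : ||X||_* <= tau}, computed from a rank-r
    truncated SVD of Y. Exact only when the true projection has rank <= r
    (illustrative sketch; not the paper's algorithm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    if s.sum() <= tau:
        s_proj = s  # already inside the (rank-r restricted) ball
    else:
        # Project singular values onto the simplex of radius tau
        # (standard sorted-cumsum thresholding; s is already descending).
        css = np.cumsum(s)
        ks = np.arange(1, len(s) + 1)
        k = ks[s - (css - tau) / ks > 0][-1]
        theta = (css[k - 1] - tau) / k
        s_proj = np.maximum(s - theta, 0.0)
    return (U * s_proj) @ Vt

def extragradient_step(X, grad, eta, tau, r):
    """One extragradient iteration with two rank-r SVD projections."""
    Y = project_nuclear_ball_lowrank(X - eta * grad(X), tau, r)       # extrapolation point
    X_next = project_nuclear_ball_lowrank(X - eta * grad(Y), tau, r)  # corrected update
    return X_next
```

The point of the sketch is the cost profile the abstract highlights: each iteration touches the SVD only through its top r components, so with a warm start whose projections remain rank-r, the iterates match those of the full-SVD extragradient method while avoiding high-rank factorizations.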
