Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization Problems

06/23/2022
by Dan Garber, et al.

Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning. While significant progress has been made in recent years in developing efficient methods for smooth low-rank optimization problems that avoid maintaining high-rank matrices and computing expensive high-rank SVDs, advances for nonsmooth problems have been slow-paced. In this paper we consider standard convex relaxations for such problems. Mainly, we prove that, under a strict complementarity condition and the relatively mild assumption that the nonsmooth objective can be written as a maximum of smooth functions, approximate variants of two popular mirror-prox methods, the Euclidean extragradient method and mirror-prox with matrix exponentiated gradient updates, when initialized with a "warm start", converge to an optimal solution at a rate of O(1/t) while requiring only two low-rank SVDs per iteration. Moreover, for the extragradient method we also consider relaxed versions of strict complementarity, which yield a trade-off between the rank of the SVDs required and the radius of the ball in which the method must be initialized. We support our theoretical results with empirical experiments on several nonsmooth low-rank matrix recovery tasks, demonstrating both the plausibility of the strict complementarity assumption and the efficient convergence of our proposed low-rank mirror-prox variants.
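To illustrate the kind of iteration the abstract describes, the sketch below shows a Euclidean extragradient (mirror-prox) step over a nuclear-norm ball in which the exact Euclidean projection is replaced by a rank-r approximation computed from a truncated SVD, so each iteration performs exactly two low-rank SVDs. This is a minimal illustration, not the paper's algorithm: the function names, the step size `eta`, the ball radius `tau`, and the target rank `r` are all placeholders, and a full SVD stands in for the Lanczos-type low-rank SVD an efficient implementation would use.

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection of a nonnegative vector v onto the l1 ball of radius tau
    # (standard sort-and-threshold procedure).
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def lowrank_nuclear_projection(Y, tau, r):
    # Rank-r approximation of the projection of Y onto {X : ||X||_* <= tau}:
    # keep only the top-r singular triplets, then project the retained
    # singular values onto the l1 ball of radius tau.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)  # stand-in for a rank-r Lanczos SVD
    s_proj = project_l1_ball(s[:r], tau)
    return (U[:, :r] * s_proj) @ Vt[:r, :]

def extragradient_step(X, grad, eta, tau, r):
    # One Euclidean extragradient iteration: an extrapolation step followed by
    # an update step, each using one low-rank SVD-based projection.
    X_mid = lowrank_nuclear_projection(X - eta * grad(X), tau, r)      # extrapolation
    X_new = lowrank_nuclear_projection(X - eta * grad(X_mid), tau, r)  # update
    return X_new
```

For a smooth toy objective such as 0.5*||X - A||_F^2 (whose gradient is X - A), repeatedly applying `extragradient_step` keeps every iterate of rank at most r and nuclear norm at most tau, which is the storage advantage the paper's low-rank variants are after; for the nonsmooth case, `grad` would instead evaluate the gradient of the smooth function attaining the maximum at the current point.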

Related research

02/08/2022 | Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems
Low-rank and nonsmooth matrix optimization problems capture many fundame...

12/03/2019 | Linear Convergence of Frank-Wolfe for Rank-One Matrix Recovery Without Strong Convexity
We consider convex optimization problems which are widely used as convex...

12/18/2020 | On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization
Convex optimization over the spectrahedron, i.e., the set of all real n×...

01/31/2020 | On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems
We revisit the use of Stochastic Gradient Descent (SGD) for solving conv...

02/15/2023 | Over-parametrization via Lifting for Low-rank Matrix Sensing: Conversion of Spurious Solutions to Strict Saddle Points
This paper studies the role of over-parametrization in solving non-conve...

11/28/2019 | Analysis of Asymptotic Escape of Strict Saddle Sets in Manifold Optimization
In this paper, we provide some analysis on the asymptotic escape of stri...

02/22/2017 | Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage
This paper concerns a fundamental class of convex matrix optimization pr...
