A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

06/19/2015
by   Qinqing Zheng, et al.

We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. Given O(r^3 κ^2 n log n) random measurements of a positive semidefinite n × n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
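The approach described in the abstract can be illustrated with a short sketch: factor the unknown PSD matrix as M = UU^T with U of size n × r, initialize U spectrally from the measurements, and run gradient descent on the squared residual of the linear measurements. The function name `recover_psd`, the step-size choice, and the iteration count below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def recover_psd(A, y, r, steps=2000, lr=None):
    """Sketch of factored gradient descent for PSD matrix recovery.

    A : (m, n, n) array of symmetric measurement matrices A_i
    y : (m,) array of linear measurements y_i = <A_i, M*>
    r : target rank
    Returns an n x n PSD estimate U U^T of the unknown matrix M*.
    """
    m, n, _ = A.shape

    # Spectral initialization: top-r eigenpairs of (1/m) * sum_i y_i A_i,
    # which concentrates around the unknown matrix M* for Gaussian A_i.
    S = np.tensordot(y, A, axes=1) / m
    vals, vecs = np.linalg.eigh(S)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:r]        # pick the top r
    U = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

    # Conservative fixed step size, scaled by the spectral norm of the
    # initializer (an assumption; the paper prescribes its own schedule).
    if lr is None:
        lr = 0.2 / max(np.linalg.norm(U, 2) ** 2, 1e-12)

    # Gradient descent on f(U) = (1/4m) * sum_i (<A_i, UU^T> - y_i)^2,
    # whose gradient is (1/m) * sum_i (<A_i, UU^T> - y_i) A_i U.
    for _ in range(steps):
        resid = np.einsum('kij,ij->k', A, U @ U.T) - y
        grad = np.einsum('k,kij->ij', resid, A) @ U / m
        U = U - lr * grad
    return U @ U.T
```

With exact (noiseless) measurements and enough of them, the residual objective has its global minimum at M*, so the iterates contract toward the true matrix from the spectral starting point.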


Related research

05/23/2016
Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent
We address the rectangular matrix completion problem by lifting the unkn...

06/04/2018
Solving Systems of Quadratic Equations via Exponential-type Gradient Descent Algorithm
We consider the rank minimization problem from quadratic measurements, i...

07/05/2022
Improved Global Guarantees for the Nonconvex Burer-Monteiro Factorization via Rank Overparameterization
We consider minimizing a twice-differentiable, L-smooth, and μ-strongly ...

02/14/2019
Solving Complex Quadratic Systems with Full-Rank Random Matrices
We tackle the problem of recovering a complex signal x∈C^n from quadrat...

05/19/2017
A lower bound on the positive semidefinite rank of convex bodies
The positive semidefinite rank of a convex body C is the size of its sma...

05/28/2021
STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One Semidefinite Relaxations
We consider solving high-order semidefinite programming (SDP) relaxation...

06/25/2015
The local convexity of solving systems of quadratic equations
This paper considers the recovery of a rank r positive semidefinite matr...
