Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

05/18/2020
by   Tian Tong, et al.

Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning, and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite nonconvexity, recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive, especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices. The goal of this paper is to set forth a new algorithmic approach, dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as preconditioned or diagonally-scaled gradient descent, where the preconditioners are adaptive and iteration-varying with minimal computational overhead. For low-rank matrix sensing and robust principal component analysis, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties. At the core of our analysis is the introduction of a new distance function that takes the preconditioners into account when measuring the distance between the iterates and the ground truth.
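To make the update concrete: for a factorization X = L Rᵀ, ScaledGD right-multiplies each factor's gradient by the inverse Gram matrix of the other factor, so the preconditioners are only r × r and cheap to invert. Below is a minimal sketch for matrix sensing under stated assumptions; the callables A_op and A_adj (the measurement operator and its adjoint), the step size eta, and the iteration budget are illustrative placeholders, not the paper's exact prescription.

```python
import numpy as np

def scaled_gd(y, A_op, A_adj, r, eta=0.5, iters=200):
    """Minimal ScaledGD sketch for low-rank matrix sensing.

    Minimizes 0.5 * ||A(L @ R.T) - y||^2 over the factors L, R.
    A_op / A_adj are hypothetical caller-supplied callables for the
    measurement operator and its adjoint.
    """
    # Spectral initialization: top-r SVD of the back-projected data.
    U, s, Vt = np.linalg.svd(A_adj(y), full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])

    for _ in range(iters):
        G = A_adj(A_op(L @ R.T) - y)  # Euclidean gradient w.r.t. L @ R.T
        # ScaledGD step: precondition each factor's gradient with the
        # inverse (r x r) Gram matrix of the *other* factor.
        L_next = L - eta * (G @ R) @ np.linalg.inv(R.T @ R)
        R_next = R - eta * (G.T @ L) @ np.linalg.inv(L.T @ L)
        L, R = L_next, R_next
    return L, R

# Toy check with the identity operator (plain matrix factorization):
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
L, R = scaled_gd(M, lambda X: X, lambda Y: Y, r=3)
print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))  # near zero
```

Because the preconditioners act only on r × r Gram matrices, each iteration costs essentially the same as vanilla gradient descent, which is what lets ScaledGD keep a low per-iteration cost while removing the condition-number dependence from the rate.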


Related research

02/02/2023
The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
We propose a preconditioned gradient descent method to tackle the low-...

01/13/2021
Beyond Procrustes: Balancing-Free Gradient Descent for Asymmetric Low-Rank Matrix Sensing
Low-rank matrix estimation plays a central role in various applications ...

06/18/2022
Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent
An increasing number of data science and machine learning problems rely ...

10/26/2020
Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number
Many problems in data science can be treated as estimating a low-rank ma...

02/23/2018
Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation
Low-rank modeling plays a pivotal role in signal processing and machine ...

11/15/2022
Stable rank-adaptive Dynamically Orthogonal Runge-Kutta schemes
We develop two new sets of stable, rank-adaptive Dynamically Orthogonal ...

06/03/2021
A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples
We propose an iterative algorithm for low-rank matrix completion that ca...
