Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

02/17/2022
by Jianhao Ma, et al.

In this work, we study the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with ℓ_1-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, a subset of which may be grossly corrupted with noise. We study a scenario where the rank of the true solution is unknown and is instead over-estimated. Over-estimating the rank gives rise to an over-parameterized model with more degrees of freedom than needed; such over-parameterization may lead to overfitting or otherwise degrade the performance of the algorithm. We prove that a simple SubGM with small initialization is agnostic to both over-parameterization and noise in the measurements. In particular, we show that small initialization nullifies the effect of over-parameterization on the performance of SubGM, leading to an exponential improvement in its convergence rate. Moreover, we provide the first unifying framework for analyzing the behavior of SubGM under both outlier and Gaussian noise models, showing that SubGM converges to the true solution even under arbitrarily large and arbitrarily dense noise values, and, perhaps surprisingly, even if the globally optimal solutions do not correspond to the ground truth. At the core of our results is a robust variant of the restricted isometry property, called Sign-RIP, which controls the deviation of the sub-differential of the ℓ_1-loss from that of an ideal, expected loss. As a byproduct of our results, we consider a subclass of robust low-rank matrix recovery problems with Gaussian measurements, and show that the number of samples required to guarantee the global convergence of SubGM is independent of the over-parameterized rank.
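The abstract describes SubGM applied to the ℓ_1-loss objective f(U) = (1/m) Σ_i |⟨A_i, UU^T⟩ − y_i| with an over-parameterized factor U ∈ R^{d×r'} (r' larger than the true rank) and a small random initialization. The sketch below is a minimal NumPy illustration of that setup, not the authors' code: the dimensions, the 20% outlier fraction, and the geometrically decaying step-size schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes (hypothetical): true rank 2, over-parameterized search rank 5
d, r_true, r_over, m = 20, 2, 5, 600

# Symmetric, PSD ground truth M* = U* U*^T of rank r_true
U_star = rng.normal(size=(d, r_true))
M_star = U_star @ U_star.T

# Gaussian measurement matrices A_i and clean measurements y_i = <A_i, M*>
A = rng.normal(size=(m, d, d))
y = np.einsum('mij,ij->m', A, M_star)

# Grossly corrupt a fraction of the measurements (outlier noise model)
corrupt = rng.random(m) < 0.2
y[corrupt] += rng.normal(scale=50.0, size=corrupt.sum())

def subgradient(U):
    """One element of the sub-differential of f(U) = (1/m) sum_i |<A_i, U U^T> - y_i|."""
    residual = np.einsum('mij,ij->m', A, U @ U.T) - y
    signs = np.sign(residual)  # sign(0) = 0 is a valid sub-gradient of |.| at 0
    # d/dU <A_i, U U^T> = (A_i + A_i^T) U
    return np.einsum('m,mij->ij', signs, A + A.transpose(0, 2, 1)) @ U / m

# Small initialization: scale alpha << 1, despite the over-parameterized width
alpha = 1e-3
U = alpha * rng.normal(size=(d, r_over))

# SubGM with a geometrically decaying step size (an illustrative schedule)
eta, decay = 0.1, 0.995
for _ in range(3000):
    U -= eta * subgradient(U)
    eta *= decay

print("relative error:", np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```

In this sketch the iterates grow out of the small-initialization regime along the signal directions before the redundant directions of the over-parameterized factor can inflate, which is consistent with the abstract's claim that small initialization nullifies the effect of over-parameterization.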


Related research

02/21/2023
On the Optimization Landscape of Burer-Monteiro Factorization: When do Global Solutions Correspond to Ground Truth?
In low-rank matrix recovery, the goal is to recover a low-rank matrix, g...

02/05/2021
Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don't be Afraid of Outliers
It is well-known that simple short-sighted algorithms, such as gradient ...

09/23/2021
Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery
We study the robust recovery of a low-rank matrix from sparsely and gros...

01/27/2021
On the computational and statistical complexity of over-parameterized matrix sensing
We consider solving the low rank matrix sensing problem with Factorized ...

07/20/2022
Alternating minimization for generalized rank one matrix sensing: Sharp predictions from a random initialization
We consider the problem of estimating the factors of a rank-1 matrix wit...

07/15/2022
Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
This work characterizes the effect of depth on the optimization landscap...

06/16/2020
Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization
Recent advances have shown that implicit bias of gradient descent on ove...
