Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don't be Afraid of Outliers

02/05/2021
by Jianhao Ma et al.

It is well known that simple, short-sighted algorithms such as gradient descent generalize well in over-parameterized learning tasks, due to their implicit regularization. However, it is unknown whether this implicit regularization extends to robust learning tasks, where a subset of the samples may be grossly corrupted with noise. In this work, we provide a positive answer to this question in the context of the robust matrix recovery problem. In particular, we consider the problem of recovering a low-rank matrix from a number of linear measurements, a subset of which are corrupted with large noise. We show that a simple sub-gradient method converges to the true low-rank solution efficiently when applied to the over-parameterized ℓ1-loss function, without any explicit regularization or rank constraint. Moreover, building on a new notion of restricted isometry property, called sign-RIP, we prove that the sub-gradient method is robust against outliers in the over-parameterized regime. In particular, we show that, with Gaussian measurements, the sub-gradient method is guaranteed to converge to the true low-rank solution even when an arbitrary fraction of the measurements is grossly corrupted with noise.
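The abstract names sign-RIP but does not define it; as a hedged sketch, the intuition rests on a standard Gaussian identity. For a measurement matrix A with i.i.d. standard Gaussian entries and any nonzero matrix X,

\[
\mathbb{E}\big[\operatorname{sign}(\langle A, X \rangle)\, A\big] \;=\; \sqrt{\frac{2}{\pi}}\, \frac{X}{\|X\|_F},
\]

so with clean measurements y_i = ⟨A_i, M*⟩, the averaged ℓ1 sub-gradient (1/m) Σ_i sign(⟨A_i, X⟩ − y_i) A_i points, in expectation, along the residual direction X − M*. Because each term enters only through a sign, a single corrupted measurement can shift this average by at most on the order of ‖A_i‖/m, so outliers have bounded influence. To our reading, sign-RIP formalizes a uniform-concentration version of this statement over low-rank matrices; the precise definition is in the full paper.

The snippet below is a minimal, self-contained sketch of the procedure the abstract describes: plain sub-gradient descent on the over-parameterized ℓ1-loss f(U) = (1/m) Σ_i |⟨A_i, U Uᵀ⟩ − y_i|, with no explicit regularization or rank constraint. The problem sizes, corruption magnitude, small-initialization scale, and geometrically decaying step size are illustrative assumptions, not the paper's exact prescriptions.

```python
# Sketch: sub-gradient descent on the over-parameterized l1-loss for
# robust matrix recovery. Sizes and schedules are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

n, r_true, m = 20, 2, 1500      # ambient dimension, true rank, #measurements
p_corrupt = 0.3                 # fraction of grossly corrupted measurements

# Ground truth: a rank-2 PSD matrix M* = Z Z^T, measured by Gaussian matrices.
Z = rng.normal(size=(n, r_true))
M_star = Z @ Z.T
A = rng.normal(size=(m, n, n))
y = np.einsum('kij,ij->k', A, M_star)

# Grossly corrupt a random subset of the measurements (the outliers).
bad = rng.choice(m, size=int(p_corrupt * m), replace=False)
y[bad] += 50.0 * rng.normal(size=bad.size)

# Over-parameterized factorization: U is n x n (no rank constraint),
# initialized at a small scale so the implicit regularization can act.
U = 1e-3 * rng.normal(size=(n, n))

eta = 0.05
for _ in range(3000):
    residuals = np.einsum('kij,ij->k', A, U @ U.T) - y
    # Sub-gradient of the l1-loss: each A_i is weighted only by a sign.
    G = np.einsum('k,kij->ij', np.sign(residuals), A) / m
    U -= eta * (G + G.T) @ U    # chain rule through X = U U^T
    eta *= 0.999                # geometrically decaying step size

rel_err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
print('relative recovery error:', rel_err)
```

Under these assumptions the iterates typically track the low-rank ground truth despite 30% of the measurements being grossly corrupted; replacing np.sign(residuals) with residuals (i.e., using the ℓ2-loss) is expected to leave the estimate badly biased by the outliers, which isolates the role of the sign in the sub-gradient.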

Related research

Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization (02/17/2022)
In this work, we study the performance of sub-gradient method (SubGM) on...

Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization (06/16/2020)
Recent advances have shown that implicit bias of gradient descent on ove...

Understanding Implicit Regularization in Over-Parameterized Nonlinear Statistical Model (07/16/2020)
We study the implicit regularization phenomenon induced by simple optimi...

Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution (07/15/2022)
This work characterizes the effect of depth on the optimization landscap...

Robust Training under Label Noise by Over-parameterization (02/28/2022)
Recently, over-parameterized deep networks, with increasingly more netwo...

Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery (06/17/2021)
We propose a new framework – Square Root Principal Component Pursuit – f...

On the computational and statistical complexity of over-parameterized matrix sensing (01/27/2021)
We consider solving the low rank matrix sensing problem with Factorized ...
