Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization

06/16/2020
by Chong You, et al.

Recent advances have shown that the implicit bias of gradient descent on over-parameterized models enables the recovery of low-rank matrices from linear measurements, even without prior knowledge of the intrinsic rank. In contrast, for robust low-rank matrix recovery from grossly corrupted measurements, over-parameterization leads to overfitting unless both the intrinsic rank and the sparsity of the corruption are known in advance. This paper shows that with a double over-parameterization, of both the low-rank matrix and the sparse corruption, gradient descent with discrepant learning rates provably recovers the underlying matrix without prior knowledge of either the rank of the matrix or the sparsity of the corruption. We further extend our approach to the robust recovery of natural images by over-parameterizing images with deep convolutional networks. Experiments show that our method handles different test images and varying corruption levels with a single learning pipeline, where the network width and termination conditions do not need to be adjusted on a case-by-case basis. Underlying this success is again the implicit bias of discrepant learning rates on the different over-parameterized parameters, which may bear on broader applications.
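Since this page carries only the abstract, the sketch below is a minimal illustration of the idea it describes: factor the matrix as X = U U^T with no rank constraint, split the corruption as s = g*g - h*h (elementwise) with no sparsity constraint, and run plain gradient descent with a smaller learning rate on the corruption factors. The fully observed (entrywise) measurement model, the small-initialization scale, and all hyperparameters (alpha, lr, the step count) are illustrative assumptions for this sketch, not the authors' exact algorithm.

```python
# Minimal sketch of double over-parameterization with discrepant
# learning rates. Assumed setup: a symmetric PSD ground truth X*
# observed entrywise with sparse corruption, Y = X* + S*. The
# entrywise model simplifies the paper's general linear measurements.
import torch

torch.manual_seed(0)
n, r = 50, 3

# Ground truth: rank-r PSD matrix plus sparse corruption.
B = torch.randn(n, r)
X_star = B @ B.T
S_star = 5.0 * torch.randn(n, n) * (torch.rand(n, n) < 0.1).float()
Y = X_star + S_star

# Double over-parameterization: X = U U^T with a full-width factor
# (no rank prior) and S = g*g - h*h (no sparsity prior). The small
# initialization is what drives the implicit low-rank/sparse bias.
init = 1e-3
U = (init * torch.randn(n, n)).requires_grad_()
g = (init * torch.ones(n, n)).requires_grad_()
h = (init * torch.ones(n, n)).requires_grad_()

lr = 0.01     # learning rate on the low-rank factor U (assumed value)
alpha = 0.5   # discrepancy ratio: corruption factors move slower (assumed value)

for step in range(20000):
    X = U @ U.T
    S = g * g - h * h
    loss = 0.25 * ((X + S - Y) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        # Discrepant learning rates: lr on U, alpha * lr on (g, h).
        U -= lr * U.grad
        g -= alpha * lr * g.grad
        h -= alpha * lr * h.grad
        for p in (U, g, h):
            p.grad = None

with torch.no_grad():
    rel_err = (U @ U.T - X_star).norm() / X_star.norm()
    print("relative error on X*:", rel_err.item())
```

The learning-rate ratio alpha is the only knob here: in implicit-bias analyses of this kind, it trades the low-rank bias on U U^T against the sparsity bias on g*g - h*h, which is why neither a rank nor a sparsity level has to be specified in advance.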


Related research

02/05/2021
Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don't be Afraid of Outliers
It is well-known that simple short-sighted algorithms, such as gradient ...

09/23/2021
Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery
We study the robust recovery of a low-rank matrix from sparsely and gros...

02/07/2022
Noise Regularizes Over-parameterized Rank One Matrix Recovery, Provably
We investigate the role of noise in optimization algorithms for learning...

02/28/2022
Robust Training under Label Noise by Over-parameterization
Recently, over-parameterized deep networks, with increasingly more netwo...

05/26/2019
On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective
We consider training over-parameterized two-layer neural networks with R...

02/17/2022
Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization
In this work, we study the performance of sub-gradient method (SubGM) on...

10/31/2019
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Convolutional Neural Networks (CNNs) have emerged as highly successful t...
