Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution

07/15/2022
by Jianhao Ma, et al.

This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape. We consider a robust and over-parameterized setting, where a subset of the measurements is grossly corrupted with noise and the true linear model is captured via an N-layer linear neural network. On the negative side, we show that this problem does not have a benign landscape: for any N ≥ 1, with constant probability, there exists a solution corresponding to the ground truth that is neither a local nor a global minimum. However, on the positive side, we prove that, for any N-layer model with N ≥ 2, a simple sub-gradient method becomes oblivious to such "problematic" solutions; instead, it converges to a balanced solution that is not only close to the ground truth but also enjoys a flat local landscape, thereby eschewing the need for "early stopping". Lastly, we empirically verify that the desirable optimization landscape of deeper models extends to other robust learning tasks, including deep matrix recovery and deep ReLU networks with ℓ_1-loss.
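
As a rough, self-contained illustration of the setting described in the abstract, the sketch below runs the sub-gradient method on the ℓ_1-loss of an N-layer deep linear model with grossly corrupted measurements. The dimensions, corruption level, initialization scale, and constant step size are illustrative assumptions rather than the paper's choices, and autodiff is used only as a convenient way to obtain a valid sub-gradient of the nonsmooth loss.

```python
import jax
import jax.numpy as jnp

# --- Problem setup (all sizes, scales, and the corruption level are assumptions) ---
m, d, k, N = 200, 20, 20, 3                       # measurements, dimension, width, depth (N >= 2)
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)

theta_star = jax.random.normal(k1, (d,))          # ground-truth linear model
X = jax.random.normal(k2, (m, d))                 # measurement vectors
y = X @ theta_star
outliers = jax.random.uniform(k3, (m,)) < 0.2     # ~20% grossly corrupted measurements
y = y + outliers * 50.0 * jax.random.normal(k4, (m,))

# --- N-layer deep linear parameterization: theta(W) = (W_N @ ... @ W_2 @ W_1)^T ---
shapes = [(k, d)] + [(k, k)] * (N - 2) + [(1, k)]
init_keys = jax.random.split(jax.random.PRNGKey(1), N)
Ws = [0.05 * jax.random.normal(ik, s) for ik, s in zip(init_keys, shapes)]  # small init

def theta_of(Ws):
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P.ravel()                              # end-to-end linear predictor in R^d

def l1_loss(Ws):
    # Robust l1-loss; it is nonsmooth, so autodiff returns a valid sub-gradient.
    return jnp.mean(jnp.abs(y - X @ theta_of(Ws)))

subgrad = jax.jit(jax.grad(l1_loss))

eta = 1e-2                                        # constant step size (an assumption)
for _ in range(3000):
    g = subgrad(Ws)
    Ws = [W - eta * gW for W, gW in zip(Ws, g)]

print("recovery error:", float(jnp.linalg.norm(theta_of(Ws) - theta_star)))
```

With N ≥ 2 and a small balanced initialization, the recovery error should shrink toward zero despite the outliers; the same script with N = 1 makes a useful point of comparison.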

