Width Provably Matters in Optimization for Deep Linear Neural Networks

01/24/2019
by Simon S. Du, et al.

We prove that for an L-layer fully-connected linear neural network, if the width of every hidden layer is Ω̃(L · r · d_out · κ^3), where r and κ are the rank and the condition number of the input data, and d_out is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations needed to find an ϵ-suboptimal solution is O(κ log(1/ϵ)). Our polynomial upper bound on the total running time for wide deep linear networks and the exp(Ω(L)) lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
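To make the setting concrete, below is a minimal sketch of the objects in the statement: an L-layer fully-connected linear network trained by gradient descent from Gaussian random initialization on the squared loss over synthetic, realizable regression data. The width, initialization scale, step size, and iteration count are illustrative assumptions, not the paper's prescriptions; with sufficient width the loss should decrease at a roughly geometric (linear) rate.

```python
import numpy as np

# Minimal sketch: an L-layer fully-connected *linear* network
# f(x) = W_L ... W_1 x, trained by gradient descent from Gaussian random
# initialization on the squared loss. Width m, the 1/sqrt(fan_in)
# initialization scale, step size, and iteration count are illustrative
# assumptions, not the paper's exact prescriptions.

rng = np.random.default_rng(0)

d_in, d_out, m, L = 20, 5, 256, 4        # input dim, output dim, hidden width, depth
n = 100                                   # number of training samples

X = rng.standard_normal((d_in, n))          # input data (columns are samples)
Y = rng.standard_normal((d_out, d_in)) @ X  # realizable linear-regression targets

# Gaussian initialization with 1/sqrt(fan_in) scaling (an assumed choice).
dims = [d_in] + [m] * (L - 1) + [d_out]
W = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i]) for i in range(L)]

def loss(W):
    P = X
    for Wi in W:
        P = Wi @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2 / n

eta = 5e-4                                # step size (illustrative)
for step in range(2501):
    # Forward pass, caching the intermediate products W_i ... W_1 X.
    acts = [X]
    for Wi in W:
        acts.append(Wi @ acts[-1])
    # Backward pass: gradient of the squared loss w.r.t. each W_i.
    G = (acts[-1] - Y) / n                # dLoss / d(network output)
    grads = [None] * L
    for i in reversed(range(L)):
        grads[i] = G @ acts[i].T
        G = W[i].T @ G
    W = [Wi - eta * Gi for Wi, Gi in zip(W, grads)]
    if step % 500 == 0:
        # With sufficient width, the loss shrinks roughly geometrically.
        print(f"step {step:5d}  loss {loss(W):.3e}")
```

Here X is a generic Gaussian matrix, so the rank r in the width condition is simply d_in; the choice m = 256 is just a width comfortably larger than the other dimensions, not a bound computed from the theorem.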
