Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks

09/23/2018 · by Ohad Shamir, et al.

In this note, we study the dynamics of gradient descent on objective functions of the form f(∏_i=1^k w_i) (with respect to scalar parameters w_1,...,w_k), which arise in the context of training depth-k linear neural networks. We prove that for standard random initializations, and under mild assumptions on f, the number of iterations required for convergence scales exponentially with the depth k. This highlights a potential obstacle in understanding the convergence of gradient-based methods for deep linear neural networks, where k is large.
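To make the setup concrete, the sketch below runs gradient descent on f(∏_i=1^k w_i) for the illustrative choice f(x) = (x - 1)^2, with w_1,...,w_k initialized i.i.d. from N(0, 1) as a stand-in for a "standard" random initialization. The loss, learning rate, tolerance, and iteration budget are assumptions made here for illustration, not the exact setting analyzed in the note.

```python
import numpy as np

def iterations_to_converge(k, lr=1e-2, tol=1e-3, max_iters=200_000, seed=0):
    """Gradient descent on f(prod_i w_i) with the illustrative choice
    f(x) = (x - 1)^2 and i.i.d. N(0, 1) initialization (both assumptions)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.0, size=k)  # scalar parameters w_1, ..., w_k
    for t in range(max_iters):
        p = np.prod(w)
        if (p - 1.0) ** 2 < tol:
            return t  # number of iterations needed to reach the tolerance
        # Chain rule: d/dw_i f(prod_j w_j) = f'(prod_j w_j) * prod_{j != i} w_j
        fprime = 2.0 * (p - 1.0)
        partial_prods = np.array([np.prod(np.delete(w, i)) for i in range(k)])
        w = w - lr * fprime * partial_prods
    return max_iters  # did not converge within the iteration budget

for k in (1, 2, 4, 6, 8):
    print(f"depth k = {k}: {iterations_to_converge(k)} iterations")
```

The gradient used above follows directly from the chain rule, since each w_i enters the objective only through the product ∏_j w_j; running the loop over increasing k lets one observe how the iteration count grows with the depth.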


