Step Size Matters in Deep Learning

05/22/2018
by Kamil Nar, et al.

Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors that are typically observed in such systems emerge during training, such as convergence to an orbit rather than to a fixed point, or dependence of convergence on the initialization. The step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on the training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration of the training error with increased depth, the difficulty of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
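To make the stability idea concrete, below is a minimal sketch (not code from the paper) that runs gradient descent on the scalar quadratic f(w) = 0.5·c·w². The update w ← (1 − η·c)·w is a linear discrete-time system whose fixed point w = 0 is Lyapunov stable only when the step size η is below 2/c, so a step size that converges along a flat direction can oscillate and diverge along a sharp one. The curvatures and step sizes used here are illustrative values chosen for the example, not taken from the paper.

```python
def gd_on_quadratic(curvature, step_size, w0=1.0, iters=500):
    """Gradient descent on f(w) = 0.5 * curvature * w**2.

    The update w <- (1 - step_size * curvature) * w is a linear
    discrete-time system; its fixed point w = 0 is Lyapunov stable
    only if |1 - step_size * curvature| < 1, i.e. step_size < 2 / curvature.
    """
    w = w0
    for _ in range(iters):
        w = w - step_size * curvature * w   # gradient of f(w) is curvature * w
        if abs(w) > 1e12:                   # iterates blew up: the fixed point is unstable
            break
    return w

# Illustrative curvatures and step sizes (hypothetical, not from the paper):
# a flat direction (curvature 1) and a sharp one (curvature 50).
for curvature in (1.0, 50.0):
    for step_size in (0.03, 0.10):
        w_final = gd_on_quadratic(curvature, step_size)
        status = "converges" if abs(w_final) < 1e-3 else "oscillates/diverges"
        print(f"curvature={curvature:5.1f}  step={step_size:.2f}  ->  {status}")
```

With these numbers, only the combination of the sharp direction (curvature 50) and the larger step size (0.10) fails to converge, mirroring the abstract's point that the step size selects which solutions the algorithm can actually reach.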


