Gradient Dynamics of Shallow Univariate ReLU Networks

06/18/2019
by Francis Williams, et al.

We present a theoretical and empirical study of the gradient dynamics of overparameterized shallow ReLU networks with one-dimensional input, trained to solve a least-squares interpolation problem. We show that the gradient dynamics of such networks are determined by the gradient flow in a non-redundant parameterization of the network function. We examine the principal qualitative features of this gradient flow. In particular, we determine conditions for two learning regimes: kernel and adaptive, which depend both on the relative magnitude of the weight initialization in the two layers and on the asymptotic behavior of the initialization coefficients in the limit of large network width. We show that learning in the kernel regime yields smooth, curvature-minimizing interpolants and reduces to cubic splines for uniform initializations. Learning in the adaptive regime instead favors linear splines, whose knots cluster adaptively at the sample points.
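
To make the setup concrete, here is a minimal NumPy sketch (not the authors' code) of a shallow univariate ReLU network fit to a handful of 1-D samples by full-batch gradient descent on the squared loss. The width, learning rate, toy target sin(3x), and the parameter alpha, which rescales the outer-layer initialization relative to the inner layer, are all illustrative assumptions; how a given alpha maps onto the kernel or adaptive regime is not asserted here, only that the relative layer scaling is the knob the abstract points to.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_shallow_relu(x, y, width=512, alpha=1.0, lr=1e-3, steps=5000, seed=0):
    """Fit f(x) = sum_i a_i * relu(w_i * x + b_i) to the samples (x, y) by
    full-batch gradient descent on the mean squared error.

    `alpha` rescales the outer-layer weights at initialization relative to
    the inner ones; varying it is one (illustrative) way to move between
    kernel-like and adaptive dynamics. The specific values and their mapping
    to the two regimes are assumptions of this toy script, not results
    quoted from the paper."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=width)                            # inner weights
    b = rng.normal(size=width)                            # inner biases
    a = alpha * rng.normal(size=width) / np.sqrt(width)   # outer weights

    n = len(x)
    for _ in range(steps):
        pre = np.outer(x, w) + b         # (n, width) pre-activations
        act = relu(pre)                  # hidden-layer activations
        resid = act @ a - y              # least-squares residuals
        # Gradients of 0.5 * mean((f(x) - y)^2)
        grad_a = act.T @ resid / n
        grad_pre = (resid[:, None] * a) * (pre > 0) / n
        grad_w = x @ grad_pre
        grad_b = grad_pre.sum(axis=0)
        a -= lr * grad_a
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b, a

# Toy 1-D interpolation problem (illustrative target, not from the paper).
x = np.linspace(-1.0, 1.0, 8)
y = np.sin(3.0 * x)
params_large_alpha = train_shallow_relu(x, y, alpha=10.0)
params_small_alpha = train_shallow_relu(x, y, alpha=0.1)
```

Comparing the learned interpolants for the two initialization scalings (e.g., by plotting the network output on a dense grid) is one way to visualize the smooth, spline-like fits versus the knots clustering near the sample points that the abstract describes.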

