Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent

02/18/2019
by Jaehoon Lee, et al.

A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
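Below is a minimal sketch, not the authors' code, of the first-order Taylor linearization described above, written in JAX. The two-layer ReLU network, its width, the initialization scaling, and the random inputs are illustrative assumptions; the point is only to show how the linearized model f_lin(θ) = f(θ₀) + J(θ₀)(θ − θ₀) is formed and compared against the original network.

```python
import jax
import jax.numpy as jnp

def init_params(key, width=2048, d_in=10, d_out=1):
    # Illustrative two-layer parameterization; widths and scalings are assumptions.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d_in, width)) / jnp.sqrt(d_in),
        "W2": jax.random.normal(k2, (width, d_out)) / jnp.sqrt(width),
    }

def f(params, x):
    # Simple wide two-layer ReLU network.
    return jax.nn.relu(x @ params["W1"]) @ params["W2"]

key = jax.random.PRNGKey(0)
params0 = init_params(key)                       # parameters at initialization
x = jax.random.normal(jax.random.PRNGKey(1), (8, 10))

def f_lin(params, x):
    # First-order Taylor expansion of f around params0:
    # f_lin(theta) = f(theta_0) + J_f(theta_0) @ (theta - theta_0),
    # computed with a Jacobian-vector product instead of an explicit Jacobian.
    delta = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
    y0, jvp_out = jax.jvp(lambda p: f(p, x), (params0,), (delta,))
    return y0 + jvp_out

# At initialization the two models agree exactly.
print(jnp.allclose(f(params0, x), f_lin(params0, x)))

# After a small parameter perturbation (standing in for a few gradient steps),
# the linearized and original outputs should remain close for a wide network.
params1 = jax.tree_util.tree_map(
    lambda p: p + 1e-3 * jax.random.normal(jax.random.PRNGKey(2), p.shape),
    params0,
)
print(jnp.max(jnp.abs(f(params1, x) - f_lin(params1, x))))
```

In the infinite-width limit the paper shows that gradient descent keeps the parameters close enough to initialization that this linearized model governs the dynamics exactly; the sketch above only illustrates the finite-width comparison reported empirically.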
