Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks

05/19/2022
by Blake Bordelon, et al.

We analyze feature learning in infinite-width neural networks trained with gradient flow through a self-consistent dynamical field theory. We construct a collection of deterministic dynamical order parameters: inner-product kernels of hidden-unit activations and gradients in each layer, evaluated at pairs of time points, which provide a reduced description of network activity through training. These kernel order parameters collectively define the hidden-layer activation distribution, the evolution of the neural tangent kernel, and consequently the network's output predictions. For deep linear networks, the kernels satisfy a set of algebraic matrix equations. For nonlinear networks, we provide an alternating sampling procedure to self-consistently solve for the kernel order parameters. We compare the self-consistent solution to several approximation schemes, including the static NTK approximation, the gradient independence assumption, and leading-order perturbation theory, and show that each of these approximations can break down in regimes where the general self-consistent solution remains accurate. Lastly, experiments in more realistic settings demonstrate that the loss and kernel dynamics of CNNs at fixed feature-learning strength are preserved across different widths on a CIFAR classification task.
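To make the quantities in the abstract concrete, the sketch below trains a small one-hidden-layer network and records finite-width empirical analogues of the kernel order parameters: the activation inner-product kernel at pairs of time points and the empirical neural tangent kernel. This is a toy illustration under assumed hyperparameters, toy data, and standard (non mean-field) parameterization; it is not the paper's infinite-width DMFT solver, which computes these kernels deterministically via the alternating sampling procedure.

```python
# Minimal NumPy sketch (an illustration, not the paper's DMFT solver):
# train a one-hidden-layer tanh network with discretized gradient flow and
# record empirical analogues of two order parameters:
#   Phi(t, s)[mu, nu] = (1/N) sum_i phi(h_i(x_mu; t)) phi(h_i(x_nu; s)),
# the activation kernel at pairs of time points, and the empirical NTK
#   K_t(x_mu, x_nu) = sum_theta df(x_mu)/dtheta * df(x_nu)/dtheta.
# All hyperparameters and the toy task are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
D, N, P = 5, 512, 20              # input dim, width, number of samples
steps, record_every, lr = 100, 10, 0.5

X = rng.normal(size=(P, D))
y = np.sign(X[:, 0])              # toy binary targets

W1 = rng.normal(size=(N, D))      # hidden-layer weights
w2 = rng.normal(size=(N,))        # readout weights

acts, ntks = [], []               # recorded activations and empirical NTKs
for t in range(steps):
    h = X @ W1.T / np.sqrt(D)                   # preactivations, (P, N)
    a = np.tanh(h)                              # activations
    f = a @ w2 / np.sqrt(N)                     # outputs, (P,)
    if t % record_every == 0:
        g = (w2 / np.sqrt(N)) * (1 - a**2)      # backprop signal through tanh
        K = a @ a.T / N + (g @ g.T) * (X @ X.T) / D  # empirical NTK, (P, P)
        acts.append(a.copy())
        ntks.append(K)
    err = f - y                                 # residual of squared loss
    grad_w2 = a.T @ err / (np.sqrt(N) * P)
    grad_W1 = ((np.outer(err, w2) / np.sqrt(N) * (1 - a**2)).T @ X) / (np.sqrt(D) * P)
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

A = np.stack(acts)                              # (T, P, N), T recorded times
Phi = np.einsum('tpn,sqn->tspq', A, A) / N      # Phi(t, s) as (T, T, P, P)

# In the lazy (static-NTK) regime Phi and K would barely move during
# training; the drift below shows how far this finite network is from that.
print("||Phi(0,0) - Phi(T,T)||:", np.linalg.norm(Phi[0, 0] - Phi[-1, -1]))
print("||K_0 - K_T||:", np.linalg.norm(ntks[0] - ntks[-1]))
```

In the static NTK approximation both recorded quantities would stay frozen at their initial values, so the printed drifts give a quick empirical check of how strongly the kernels evolve, which is exactly the dynamics the self-consistent theory tracks at infinite width.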

