On the infinite width limit of neural networks with a standard parameterization

01/21/2020
by Jascha Sohl-Dickstein, et al.

There are currently two parameterizations used to derive fixed kernels corresponding to infinite width neural networks: the NTK (Neural Tangent Kernel) parameterization and the naive standard parameterization. However, the extrapolation of both of these parameterizations to infinite width is problematic. The standard parameterization leads to a divergent neural tangent kernel, while the NTK parameterization fails to capture crucial aspects of finite width networks, such as the dependence of training dynamics on relative layer widths, the relative training dynamics of weights and biases, and a nonstandard learning rate scale. Here we propose an improved extrapolation of the standard parameterization that preserves all of these properties as width is taken to infinity and yields a well-defined neural tangent kernel. We show experimentally that the resulting kernels typically achieve accuracy similar to those resulting from an NTK parameterization, but with better correspondence to the parameterization of typical finite width networks. Additionally, with careful tuning of width parameters, the improved standard parameterization kernels can outperform those stemming from an NTK parameterization. We release code implementing this improved standard parameterization as part of the Neural Tangents library at https://github.com/google/neural-tangents.
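As a rough sketch (not the paper's derivation), the snippet below contrasts the two parameterizations for a single dense layer and shows how the released Neural Tangents library exposes the choice via the `parameterization` argument of `stax.Dense`. The width and standard deviation values here are illustrative, not taken from the paper's experiments.

```python
import jax.numpy as jnp
from jax import random

from neural_tangents import stax

# Illustrative hyperparameters -- not values from the paper.
width, sigma_w, sigma_b = 512, 1.0, 0.05


def ntk_dense(key, x):
  """Dense layer under the NTK parameterization: weights are drawn
  N(0, 1) and the 1/sqrt(fan_in) width scaling is applied in the
  forward pass."""
  fan_in = x.shape[-1]
  w_key, b_key = random.split(key)
  W = random.normal(w_key, (fan_in, width))
  b = random.normal(b_key, (width,))
  return (sigma_w / jnp.sqrt(fan_in)) * x @ W + sigma_b * b


def standard_dense(key, x):
  """Dense layer under the standard parameterization: the width
  scaling lives in the weight variance instead, matching how typical
  finite width networks are initialized."""
  fan_in = x.shape[-1]
  w_key, b_key = random.split(key)
  W = (sigma_w / jnp.sqrt(fan_in)) * random.normal(w_key, (fan_in, width))
  b = sigma_b * random.normal(b_key, (width,))
  return x @ W + b


# Neural Tangents exposes the same choice through the
# `parameterization` argument of its layers; 'standard' selects the
# improved standard parameterization this paper introduces.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(width, W_std=sigma_w, b_std=sigma_b,
               parameterization='standard'),
    stax.Relu(),
    stax.Dense(1, W_std=sigma_w, b_std=sigma_b,
               parameterization='standard'),
)

key1, key2 = random.split(random.PRNGKey(0))
x1 = random.normal(key1, (4, 16))
x2 = random.normal(key2, (3, 16))

# Closed-form infinite-width NTK between the two batches.
print(kernel_fn(x1, x2, 'ntk').shape)  # (4, 3)
```

Note that the two parameterizations produce the same distribution of outputs at initialization; they differ in how gradients scale with width, which is why the induced neural tangent kernels differ.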


