
Tensor Programs II: Neural Tangent Kernel for Any Architecture

by Greg Yang et al.

We prove that a randomly initialized neural network of *any architecture* has its Tangent Kernel (NTK) converge to a deterministic limit as the network widths tend to infinity. We demonstrate how to calculate this limit. In prior literature, the heuristic study of neural network gradients often assumes every weight matrix used in forward propagation is independent of its transpose used in backpropagation (Schoenholz et al. 2017). This is known as the *gradient independence assumption (GIA)*. We identify a commonly satisfied condition, which we call the *Simple GIA Check*, under which the NTK limit calculation based on GIA is correct. Conversely, when the Simple GIA Check fails, we show GIA can result in wrong answers. Our material here presents the NTK results of Yang (2019a) in a friendly manner and showcases the *tensor programs* technique for understanding wide neural networks. We provide reference implementations of infinite-width NTKs of recurrent neural networks, transformers, and batch normalization at
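For reference, the NTK in question is the kernel Θ(x, x′) = ⟨∇_θ f(θ, x), ∇_θ f(θ, x′)⟩ induced by the network's parameter gradients at initialization. As a minimal illustrative sketch (not the paper's reference implementation, which covers RNNs, transformers, and batch normalization), the JAX snippet below computes this empirical NTK for a toy one-hidden-layer ReLU network at several widths; the convergence result above says these values settle to a deterministic limit as width grows.

```python
# Minimal sketch (assumed toy setup, not the paper's reference code): empirical NTK
# Theta(x, x') = <grad_theta f(theta, x), grad_theta f(theta, x')> for a one-hidden-layer
# ReLU network in NTK parameterization, evaluated at increasing widths.
import jax
import jax.numpy as jnp

def init_params(key, width, d_in):
    k1, k2 = jax.random.split(key)
    # Weights are standard Gaussian; the 1/sqrt(fan-in) scaling lives in the forward pass.
    return {"W": jax.random.normal(k1, (width, d_in)),
            "v": jax.random.normal(k2, (width,))}

def f(params, x):
    width, d_in = params["W"].shape
    h = jax.nn.relu(params["W"] @ x / jnp.sqrt(d_in))
    return params["v"] @ h / jnp.sqrt(width)

def empirical_ntk(params, x1, x2):
    # Inner product of parameter gradients, summed over all parameter tensors.
    g1 = jax.grad(f)(params, x1)
    g2 = jax.grad(f)(params, x2)
    return sum(jnp.vdot(a, b) for a, b in
               zip(jax.tree_util.tree_leaves(g1), jax.tree_util.tree_leaves(g2)))

d_in = 4
x1 = jnp.array([1.0, 0.0, 0.0, 0.0])
x2 = jnp.array([0.5, 0.5, 0.5, 0.5])
for width in (64, 1024, 16384):
    params = init_params(jax.random.PRNGKey(width), width, d_in)
    # Entries fluctuate at small width and concentrate around a deterministic value as width grows.
    print(width, float(empirical_ntk(params, x1, x2)))
```

Repeating this at each width with several random seeds shows the seed-to-seed variance shrinking, which is the deterministic-limit statement in the abstract; the paper's contribution is proving this for arbitrary architectures and showing how to compute the limit.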



