Neural Tangent Kernel: A Survey

08/29/2022
by   Eugene Golikov, et al.

A seminal work [Jacot et al., 2018] demonstrated that training a neural network under a specific parameterization is equivalent to performing a particular kernel method as the width goes to infinity. This equivalence opened a promising direction for applying the rich literature on kernel methods to neural nets, which had been much harder to tackle directly. The present survey covers key results on kernel convergence as width goes to infinity, finite-width corrections, applications, and the limitations of the corresponding method.
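
For readers who want something concrete, the object behind this equivalence is the (empirical) neural tangent kernel Θ(x, x') = ⟨∇_θ f(x; θ), ∇_θ f(x'; θ)⟩, which under the parameterization of Jacot et al. [2018] converges to a deterministic limit at random initialization as the width grows. The sketch below is illustrative rather than code from the survey: a toy one-hidden-layer ReLU net in plain JAX, with all function names chosen here, so the width-convergence can be observed directly.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

# Minimal sketch (illustrative, not from the survey): the empirical neural
# tangent kernel Theta(x, x') = <grad_theta f(x), grad_theta f(x')> of a
# one-hidden-layer ReLU net under the NTK parameterization.

def init_params(key, d_in, width):
    k1, k2 = jax.random.split(key)
    # Standard-normal weights; the 1/sqrt(fan-in) factors inside f() below
    # implement the NTK parameterization rather than the standard one.
    return (jax.random.normal(k1, (width, d_in)),
            jax.random.normal(k2, (width,)))

def f(params, x):
    w1, w2 = params
    h = jax.nn.relu(w1 @ x / jnp.sqrt(x.shape[-1]))
    return w2 @ h / jnp.sqrt(h.shape[-1])  # scalar output

def empirical_ntk(params, x1, x2):
    # Flatten the parameter gradients into single vectors and take their
    # inner product: this is the empirical NTK evaluated at (x1, x2).
    g1, _ = ravel_pytree(jax.grad(f)(params, x1))
    g2, _ = ravel_pytree(jax.grad(f)(params, x2))
    return g1 @ g2

key = jax.random.PRNGKey(0)
x1, x2 = jnp.ones(4), jnp.arange(4.0)
for width in (64, 1024, 16384):
    params = init_params(key, d_in=4, width=width)
    print(width, empirical_ntk(params, x1, x2))
```

The printed kernel values should fluctuate noticeably across random seeds at width 64 and stabilize at the larger widths; that concentration around a fixed limiting kernel is the convergence the survey is about.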

Related research

01/21/2020 · On the infinite width limit of neural networks with a standard parameterization
There are currently two parameterizations used to derive fixed kernels c...

02/14/2020 · Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? – A Neural Tangent Kernel Perspective
Deep residual networks (ResNets) have demonstrated better generalization...

06/16/2022 · Neural tangent kernel analysis of shallow α-Stable ReLU neural networks
There is a recent literature on large-width properties of Gaussian neura...

02/01/2022 · Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
Neural Tangent Kernel (NTK) is widely used to analyze overparametrized n...

04/26/2019 · On Exact Computation with an Infinitely Wide Neural Net
How well does a classic deep net architecture like AlexNet or VGG19 clas...

05/24/2022 · Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture
In this paper we show that feedforward neural networks corresponding to ...

06/20/2022 · Limitations of the NTK for Understanding Generalization in Deep Learning
The “Neural Tangent Kernel” (NTK) (Jacot et al., 2018), and its empirical ...
