Neural Tangent Kernel Analysis of Deep Narrow Neural Networks

02/07/2022
by Jongmin Lee, et al.

The tremendous recent progress in analyzing the training dynamics of overparameterized neural networks has primarily focused on wide networks and therefore does not sufficiently address the role of depth in deep learning. In this work, we present the first trainability guarantee of infinitely deep but narrow neural networks. We study the infinite-depth limit of a multilayer perceptron (MLP) with a specific initialization and establish a trainability guarantee using neural tangent kernel (NTK) theory. We then extend the analysis to an infinitely deep convolutional neural network (CNN) and perform brief experiments.
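For readers unfamiliar with the NTK machinery the abstract refers to, the sketch below shows how an empirical neural tangent kernel can be computed for a deep, narrow MLP in JAX: the kernel entry for two inputs is the inner product of the network's parameter Jacobians at those inputs. This is a minimal illustration only, not the paper's construction; the layer sizes, the He-style initialization, and the helper names (init_params, mlp, empirical_ntk) are assumptions made here for demonstration.

```python
import jax
import jax.numpy as jnp

def init_params(key, depth=50, width=4, d_in=3, d_out=1):
    # Deep, narrow MLP: `depth` hidden layers of `width` units each.
    # He-style Gaussian initialization, used here purely for illustration;
    # the paper's specific initialization scheme is not reproduced.
    dims = [d_in] + [width] * depth + [d_out]
    params = []
    for i in range(len(dims) - 1):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (dims[i], dims[i + 1])) * jnp.sqrt(2.0 / dims[i])
        b = jnp.zeros(dims[i + 1])
        params.append((W, b))
    return params

def mlp(params, x):
    # Forward pass with ReLU hidden layers and a linear output layer.
    h = x
    for W, b in params[:-1]:
        h = jax.nn.relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def empirical_ntk(params, x1, x2):
    # Theta(x1, x2) = <df(x1)/dtheta, df(x2)/dtheta>, summed over all parameters.
    f = lambda p, x: mlp(p, x).squeeze()
    j1 = jax.jacobian(f)(params, x1)
    j2 = jax.jacobian(f)(params, x2)
    leaves1, _ = jax.tree_util.tree_flatten(j1)
    leaves2, _ = jax.tree_util.tree_flatten(j2)
    return sum(jnp.sum(a * b) for a, b in zip(leaves1, leaves2))

key = jax.random.PRNGKey(0)
params = init_params(key)
x1 = jnp.ones((3,))
x2 = jnp.arange(3.0)
print(empirical_ntk(params, x1, x2))
```

At initialization, this empirical kernel is the finite-depth, finite-width quantity whose infinite-depth behavior the paper analyzes; the sketch only computes it numerically for one random draw of parameters.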


Related research

12/08/2020  Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory?
Neural Tangent Kernel (NTK) theory is widely used to study the dynamics ...

07/09/2017  Deepest Neural Networks
This paper shows that a long chain of perceptrons (that is, a multilayer...

06/16/2020  A Note on the Global Convergence of Multilayer Neural Networks in the Mean Field Regime
In a recent work, we introduced a rigorous framework to describe the mea...

09/28/2021  Convergence of Deep Convolutional Neural Networks
Convergence of deep neural networks as the depth of the networks tends t...

08/15/2018  Collapse of Deep and Narrow Neural Nets
Recent theoretical work has demonstrated that deep neural networks have ...

07/24/2019  A Fine-Grained Spectral Perspective on Neural Networks
Are neural networks biased toward simple functions? Does depth always he...

05/03/2018  How deep should be the depth of convolutional neural networks: a backyard dog case study
We present a straightforward non-iterative method for shallowing of deep...
