Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture

05/24/2022
by Libin Zhu, et al.

In this paper we show that feedforward neural networks corresponding to arbitrary directed acyclic graphs undergo a transition to linearity as their "width" approaches infinity. The width of such a general network is characterized by the minimum in-degree of its neurons, excluding those in the input and first layers. Our results identify the mathematical structure underlying the transition to linearity and generalize a number of recent works aimed at characterizing the transition to linearity, or the constancy of the Neural Tangent Kernel, for standard architectures.


Related research

11/06/2017 · Directed Graph Embeddings
Definitions of graph embeddings and graph minors for directed graphs are...

10/16/2019 · Structural Analysis of Sparse Neural Networks
Sparse Neural Networks regained attention due to their potential for mat...

06/30/2022 · A note on Linear Bottleneck networks and their Transition to Multilinearity
Randomly initialized wide neural networks transition to linear functions...

10/02/2020 · On the linearity of large non-linear models: when and why the tangent kernel is constant
The goal of this work is to shed light on the remarkable phenomenon of t...

03/10/2022 · Transition to Linearity of Wide Neural Networks is an Emerging Property of Assembling Weak Models
Wide neural networks with linear output layer have been shown to be near...

08/29/2022 · Neural Tangent Kernel: A Survey
A seminal work [Jacot et al., 2018] demonstrated that training a neural ...
