Knots in random neural networks

11/27/2018
by Kevin K. Chen, et al.

The weights of a neural network are typically initialized at random, and the functions produced by such a network can be thought of as draws from a prior over some function space. Studying random networks is therefore useful for a Bayesian understanding of network evolution in the early stages of training. In particular, one can investigate why neural networks with huge numbers of parameters do not immediately overfit. We analyze the properties of random scalar-input feed-forward rectified linear unit (ReLU) architectures, which are random linear splines. With weights and biases sampled from certain common distributions, empirical tests show that the number of knots in the spline produced by the network is, to very close approximation, equal to the number of neurons. We describe our progress toward a fully analytic explanation of this phenomenon. In particular, we show that random single-layer neural networks are equivalent to integrated random walks with variable step sizes, so the statement that each neuron produces one knot on average is equivalent to the statement that the associated integrated random walk has one zero crossing on average. We explore how properties of the integrated random walk, including the step sizes and initial conditions, affect the number of crossings. The number of knots in random neural networks relates to the behavior of extreme learning machines, and it also establishes a prior that prevents optimizers from immediately overfitting to noisy training data.
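The empirical claim is easy to probe numerically. The sketch below (not the paper's code) samples scalar-input feed-forward ReLU networks with Gaussian weights and biases, an assumed initialization since the abstract says only "certain common distributions", and counts knots by detecting slope jumps of the piecewise-linear output on a fine grid. Per the abstract, the mean count should land very close to the total neuron count; the finite window and grid make the estimate a slight undercount.

```python
import numpy as np

def random_relu_net(widths, rng):
    # Gaussian weights scaled by 1/sqrt(fan-in), standard normal biases.
    # This particular initialization is an assumption; the paper tests
    # several common weight/bias distributions.
    dims = [1] + list(widths) + [1]
    params = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        b = rng.standard_normal(d_out)
        params.append((W, b))
    return params

def forward(params, x):
    # Evaluate the scalar-input network on a grid x of shape (n,).
    h = x[None, :]
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b[:, None], 0.0)  # ReLU hidden layers
    W, b = params[-1]
    return (W @ h + b[:, None]).ravel()          # affine output layer

def count_knots(params, lo=-40.0, hi=40.0, n=800_001, tol=1e-8):
    # The output is piecewise linear, so knots appear as jumps in the
    # finite-difference slope. Knots outside [lo, hi] or closer together
    # than the grid spacing are missed, hence a slight undercount.
    x = np.linspace(lo, hi, n)
    slopes = np.diff(forward(params, x)) / (x[1] - x[0])
    return int(np.sum(np.abs(np.diff(slopes)) > tol))

rng = np.random.default_rng(0)
widths = (10, 10, 10)  # 30 neurons in total
counts = [count_knots(random_relu_net(widths, rng)) for _ in range(100)]
print(f"neurons: {sum(widths)}, mean knots: {np.mean(counts):.2f}")
```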

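The integrated-random-walk correspondence can be probed the same way. The pre-activation of one downstream neuron, g(x) = sum_i u_i ReLU(w_i x + b_i) + c, is piecewise linear: its slope performs a random walk whose steps occur at the upstream knot positions, and the knots this neuron contributes to the network output are exactly the zero crossings of g. The sketch below, again assuming standard normal parameters (our choice, not stated in the abstract), estimates the mean number of zero crossings; the one-knot-per-neuron claim corresponds to a mean near one.

```python
import numpy as np

def zero_crossings(n_hidden, rng, lo=-50.0, hi=50.0, n=200_001):
    # Pre-activation of one downstream neuron fed by n_hidden random ReLUs:
    #   g(x) = sum_i u_i * relu(w_i * x + b_i) + c
    # Its zero crossings are the knots this neuron adds to the network.
    # Standard normal parameters are an assumption here. Crossings outside
    # [lo, hi] are missed, so a wide window keeps the undercount small.
    w = rng.standard_normal(n_hidden)
    b = rng.standard_normal(n_hidden)
    u = rng.standard_normal(n_hidden)
    c = rng.standard_normal()
    x = np.linspace(lo, hi, n)
    g = u @ np.maximum(np.outer(w, x) + b[:, None], 0.0) + c
    return int(np.sum(np.diff(np.sign(g)) != 0))

rng = np.random.default_rng(1)
for n_hidden in (4, 16, 64):
    mean = np.mean([zero_crossings(n_hidden, rng) for _ in range(1000)])
    print(f"fan-in {n_hidden:3d}: mean zero crossings = {mean:.2f}")
```

If the abstract's claim holds for this initialization, the printed means should stay near 1 even as the fan-in grows, which is the sense in which the associated integrated random walk has one zero crossing on average.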
Related research

09/07/2021
On the space of coefficients of a Feed Forward Neural Network
We define and establish the conditions for `equivalent neural networks' ...

12/10/2019
Almost Uniform Sampling From Neural Networks
Given a length n sample from R^d and a neural network with a fixed archi...

06/09/2023
Deterministic equivalent of the Conjugate Kernel matrix associated to Artificial Neural Networks
We study the Conjugate Kernel associated to a multi-layer linear-width f...

06/22/2018
PCA of high dimensional random walks with comparison to neural network training
One technique to visualize the training of neural networks is to perform...

01/18/2021
Consistency of random-walk based network embedding algorithms
Random-walk based network embedding algorithms like node2vec and DeepWal...

12/19/2014
Random Walk Initialization for Training Very Deep Feedforward Networks
Training very deep networks is an important open problem in machine lear...

06/21/2020
Affine Symmetries and Neural Network Identifiability
We address the following question of neural network identifiability: Sup...
