On the effect of the activation function on the distribution of hidden nodes in a deep network

01/07/2019
by Philip M. Long, et al.

We analyze the joint probability distribution of the lengths of the vectors of hidden variables in different layers of a fully connected deep network, when the weights and biases are chosen randomly according to Gaussian distributions and the input lies in {-1, 1}^N. We show that if the activation function ϕ satisfies a minimal set of assumptions (satisfied by every activation function that we know to be used in practice), then, as the width of the network grows, the "length process" converges in probability to a length map determined as a simple function of the variances of the random weights and biases and of the activation function ϕ. We also show that this convergence can fail for ϕ that violate our assumptions.
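As an illustration of the kind of length map the abstract describes, the following sketch (not the paper's code, and using ReLU plus particular variance values purely as assumptions) compares the empirical per-layer squared lengths of pre-activations in one wide random network against the mean-field recursion q_{l+1} = σ_w² · E_{z~N(0,1)}[ϕ(√q_l · z)²] + σ_b²:

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 5000, 4          # wide network so lengths concentrate
sigma_w, sigma_b = 1.5, 0.5     # illustrative variance parameters
phi = lambda x: np.maximum(x, 0.0)  # ReLU as an example activation

# Input in {-1, 1}^N, so its mean squared coordinate is exactly 1.
x = rng.choice([-1.0, 1.0], size=width)

# Empirical normalized squared lengths of pre-activations, one random draw.
h, empirical = x, []
for _ in range(depth):
    W = rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, width))
    b = rng.normal(0.0, sigma_b, size=width)
    a = W @ h + b               # pre-activation at this layer
    empirical.append(a @ a / width)
    h = phi(a)

# Length-map prediction; the Gaussian expectation E[phi(sqrt(q) z)^2]
# is estimated by Monte Carlo rather than in closed form.
z = rng.normal(size=1_000_000)
r, predicted = 1.0, []          # r = mean squared post-activation
for _ in range(depth):
    q = sigma_w**2 * r + sigma_b**2
    predicted.append(q)
    r = np.mean(phi(np.sqrt(q) * z) ** 2)

print("empirical:", np.round(empirical, 3))
print("predicted:", np.round(predicted, 3))
```

At this width the two sequences agree to within a few percent per layer, illustrating the convergence-in-probability statement; shrinking `width` makes the empirical lengths visibly noisier.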


Related research

- Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function (10/12/2020)
- Zero-bias autoencoders and the benefits of co-adapting features (02/13/2014)
- From Principal Subspaces to Principal Components with Linear Autoencoders (04/26/2018)
- Restricted Boltzmann Machines as Models of Interacting Variables (03/29/2021)
- An Over-parameterized Exponential Regression (03/29/2023)
- AutoRWN: automatic construction and training of random weight networks using competitive swarm of agents (11/14/2020)
- Kernel Dependence Network (11/04/2020)
