The effect of the choice of neural network depth and breadth on the size of its hypothesis space

06/06/2018
by Lech Szymanski, et al.

We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to ∏_l U_l!, where U_l is the number of neurons in hidden layer l.
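The factor ∏_l U_l! is exactly the number of within-layer neuron permutations, which suggests the familiar symmetry argument: permuting the neurons of a hidden layer, together with their incoming and outgoing weights, leaves the computed function unchanged, so that many parameter configurations collapse onto each unique mapping. A minimal Python sketch of this counting factor (the function name and example layer widths are illustrative, not taken from the paper):

```python
import math

def permutation_symmetry_factor(hidden_widths):
    """Return the product of U_l! over the hidden layers.

    Each permutation of a hidden layer's neurons (applied consistently
    to their incoming and outgoing weights) gives a different parameter
    vector that computes the same function, so the number of unique
    function mappings shrinks by this factor.
    """
    factor = 1
    for width in hidden_widths:
        factor *= math.factorial(width)
    return factor

# Illustrative architecture (hypothetical): two hidden layers with
# U_1 = 4 and U_2 = 3 neurons.
print(permutation_symmetry_factor([4, 3]))  # 4! * 3! = 144
```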
