The power of deeper networks for expressing natural functions

05/16/2017
by David Rolnick et al.

It is well-known that neural networks are universal approximators, but that deeper networks tend to be much more efficient than shallow ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^(1/k), suggesting that the minimum number of layers required for computational tractability grows only logarithmically with n.
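In notation, the abstract's scaling claims can be restated as follows. The shorthand m_k(n) is introduced here only for illustration (it is not notation quoted from the paper) and denotes the minimum number of neurons a network with k hidden layers needs to approximate such a polynomial in n variables; the single-hidden-layer and deep-network statements are proved in the paper, while the intermediate-depth statement is supported by evidence rather than proof, per the abstract.

% m_k(n): assumed shorthand for the minimum neuron count with k hidden layers
% (order-of-growth restatement of the abstract's claims, not exact theorems).
\[
  m_1(n) = e^{\Theta(n)}, \qquad
  m_k(n) = e^{\Theta(n^{1/k})}, \qquad
  m_{\text{deep}}(n) = O(n),
\]
% so requiring the neuron count to stay polynomial in n suggests a minimum
% depth of only k_min(n) = O(log n).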
