Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth

06/07/2020
by   Guy Bresler, et al.

We prove sharp dimension-free representation results for neural networks with D ReLU layers under square loss for a class of functions G_D defined in the paper. These results capture the precise benefits of depth in the following sense: 1. The rate for representing the class of functions G_D via D ReLU layers is sharp up to constants, as shown by matching lower bounds. 2. For each D, G_D ⊆ G_{D+1}, and as D grows the class G_D contains progressively less smooth functions. 3. If D' < D, then the approximation rate for the class G_D achieved by depth-D' networks is strictly worse than that achieved by depth-D networks. This constitutes a fine-grained characterization of the representation power of feedforward networks of arbitrary depth D and number of neurons N, in contrast to existing representation results, which either require D to grow quickly with N or assume that the function being represented is highly smooth. In the latter case, similar rates can be obtained with a single nonlinear layer. Our results confirm the prevailing hypothesis that deeper networks are better at representing less smooth functions, and indeed, the main technical novelty is to fully exploit the fact that deep networks can produce highly oscillatory functions with few activation functions.
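The "highly oscillatory functions with few activation functions" phenomenon can be illustrated with a standard construction that is not specific to this paper: composing a two-ReLU "tent map" D times yields a sawtooth with on the order of 2^D oscillations while using only O(D) ReLU units. The sketch below (plain NumPy, all function names chosen here for illustration) makes this concrete; the exact crossing counts are a crude numerical proxy, not a claim from the paper.

import numpy as np

def relu(x):
    # Elementwise ReLU activation.
    return np.maximum(x, 0.0)

def tent(x):
    # One hidden layer with two ReLU units computing the tent map on [0, 1]:
    # 2*relu(x) - 4*relu(x - 0.5) equals 2x on [0, 0.5] and 2 - 2x on [0.5, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, depth):
    # Depth-D composition: only 2*D ReLU units, but ~2^(D-1) tent-shaped peaks.
    for _ in range(depth):
        x = tent(x)
    return x

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 10000)
    for D in (1, 2, 4, 8):
        ys = deep_sawtooth(xs, D)
        # Count sign changes of ys - 1/2 as a rough proxy for oscillation count
        # (expected to grow like 2^D as depth increases).
        crossings = int(np.sum(np.diff(np.sign(ys - 0.5)) != 0))
        print(f"depth {D}: ~{crossings} crossings of the level 1/2")

Running this prints crossing counts that roughly double with each additional layer, which is the exponential-in-depth oscillation behavior the abstract refers to.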


