Approximating Continuous Functions by ReLU Nets of Minimal Width

10/31/2017
by Boris Hanin, et al.

This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well? It turns out that this minimal width is exactly d + 1. That is, if all hidden layer widths are bounded by d, then even in the infinite-depth limit, ReLU nets can express only a very limited class of functions. On the other hand, we show that any continuous function on the d-dimensional unit cube can be approximated to arbitrary precision by ReLU nets in which every hidden layer has width exactly d + 1. Our construction also gives quantitative depth estimates for such an approximation.
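To make the statement concrete, here is a minimal sketch of a deep, narrow ReLU net whose hidden layers all have the minimal width d + 1, fit to a continuous target on the unit interval. This assumes PyTorch; the target function sin(2πx), the depth, and the training hyperparameters are illustrative choices, not taken from the paper.

```python
# A minimal sketch (not the paper's construction): fit a ReLU net whose
# hidden layers all have width d + 1 to a continuous function on [0, 1]^d.
# Assumes PyTorch; target, depth, and hyperparameters are illustrative.
import math
import torch
import torch.nn as nn

d = 1          # input dimension
width = d + 1  # the minimal hidden width identified in the paper
depth = 8      # number of hidden layers (illustrative choice)

# Deep, narrow fully connected ReLU net: every hidden layer has width d + 1.
layers, in_dim = [], d
for _ in range(depth):
    layers += [nn.Linear(in_dim, width), nn.ReLU()]
    in_dim = width
layers.append(nn.Linear(in_dim, 1))
net = nn.Sequential(*layers)

# Samples of a continuous target on the unit cube (here d = 1).
x = torch.rand(1024, d)
y = torch.sin(2 * math.pi * x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.4f}")
```

Note that the theorem is an existence statement: for any continuous target and accuracy, some width-(d + 1) net of sufficient depth achieves it, but nothing guarantees gradient descent will find that net. By contrast, if every hidden width in the sketch above were reduced to d, no amount of depth or training would suffice for arbitrary accuracy.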

Related Research

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations (08/09/2017)
This article concerns the expressive power of depth in neural nets with ...

Optimal Approximation Rate of ReLU Networks in terms of Width and Depth (02/28/2021)
This paper concentrates on the approximation power of deep feed-forward ...

Gradient representations in ReLU networks as similarity functions (10/26/2021)
Feed-forward networks can be interpreted as mappings with linear decisio...

Deep Network Approximation Characterized by Number of Neurons (06/13/2019)
This paper quantitatively characterizes the approximation power of deep ...

The Benefits of Over-parameterization at Initialization in Deep ReLU Networks (01/11/2019)
It has been noted in existing literature that over-parameterization in R...

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case (10/03/2019)
A key element of understanding the efficacy of overparameterized neural ...

Global Convergence of SGD On Two Layer Neural Nets (10/20/2022)
In this note we demonstrate provable convergence of SGD to the global mi...
