# Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0,1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width d+1 can approximate any continuous convex function of d variables arbitrarily well. Moreover, when approximating convex, piecewise affine functions by such nets, we obtain matching upper and lower bounds on the required depth, proving that our construction is essentially optimal. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0,1]^d by ReLU nets with width d+3.

10/31/2017

## 1. Introduction

Over the past several years, neural nets, particularly deep nets, have become the state of the art in a remarkable number of machine learning problems, from mastering Go to image recognition/segmentation and machine translation (see the review article [2] for more background). Despite all their practical successes, a robust theory of why they work so well is in its infancy. Much of the work to date has focused on the problem of explaining and quantifying the expressivity, the ability to approximate a rich class of functions, of deep neural nets [1, 8, 9, 11, 12, 13, 15, 16, 17, 18]. Expressivity can be seen as an effect of both depth and width. It has been known since at least the work of Cybenko [4] and Hornik-Stinchcombe-White [7] that if no constraint is placed on the width of a hidden layer, then a single hidden layer is enough to approximate essentially any function. The purpose of this article, in contrast, is to investigate the “effect of depth without the aid of width.” More precisely, for each $d \geq 1$, we would like to estimate

$$w_{\min}(d) := \min\left\{ w \in \mathbb{N} \,\middle|\, \text{ReLU nets of width } w \text{ can approximate any positive continuous function on } [0,1]^d \text{ arbitrarily well} \right\}. \tag{1}$$

In Theorem 1, we prove that $w_{\min}(d) \leq d+2$. This raises two questions:

1. Is the estimate in the previous line sharp?

2. How efficiently can ReLU nets of a given width approximate a given continuous function of $d$ variables?

On the subject of Q1, we will prove in forthcoming work with M. Sellke [6] that in fact $w_{\min}(d) = d+1$. When $d = 1$, the lower bound $w_{\min}(1) \geq 2$ is simple to check, and the upper bound follows, for example, from Theorem 3.1 in [11]. The main results in this article, however, concern Q1 and Q2 for convex functions. For instance, we prove in Theorem 1 that

$$w^{\mathrm{conv}}_{\min}(d) \leq d+1, \tag{2}$$

where

$$w^{\mathrm{conv}}_{\min}(d) := \min\left\{ w \in \mathbb{N} \,\middle|\, \text{ReLU nets of width } w \text{ can approximate any positive convex function on } [0,1]^d \text{ arbitrarily well} \right\}. \tag{3}$$

This illustrates a central point of the present paper: the convexity of the ReLU activation makes ReLU nets particularly well-adapted to representing convex functions on $[0,1]^d$.

Theorem 1 also addresses Q2 by providing quantitative estimates on the depth of a ReLU net with width $d+1$ that approximates a given convex function. We provide similar depth estimates for arbitrary continuous functions on $[0,1]^d$, but this time for nets of width $d+3$. Several of our depth estimates are based on the work of Balázs-György-Szepesvári [3] on max-affine estimators in convex regression.

In order to prove Theorem 1, we must understand which functions can be exactly computed by a ReLU net. Such functions are always piecewise affine, and we prove in Theorem 2 the converse: every piecewise affine function on $[0,1]^d$ can be exactly represented by a ReLU net with hidden layer width at most $d+3$. Moreover, we prove that the depth of the network that computes such a function is bounded by the number of affine pieces it contains. This extends the results of Arora-Basu-Mianjy-Mukherjee (e.g. Theorem 2.1 and Corollary 2.2 in [1]).

Convex functions again play a special role. We show that every convex function on $[0,1]^d$ that is piecewise affine with $N$ pieces can be represented exactly by a ReLU net with width $d+1$ and depth $N+1$.

## 2. Statement of Results

To state our results precisely, we set notation and recall several definitions. For $d \geq 1$ and a continuous function $f: [0,1]^d \to \mathbb{R}$, write

$$\|f\|_{C^0} := \sup_{x \in [0,1]^d} |f(x)|.$$

Further, denote by

$$\omega_f(\varepsilon) := \sup\left\{ |f(x) - f(y)| \,:\, |x - y| \leq \varepsilon \right\}$$

the modulus of continuity of $f$, whose value at $\varepsilon$ is the maximum that $f$ changes when its argument moves by at most $\varepsilon$. Note that, by the definition of a continuous function, $\omega_f(\varepsilon) \to 0$ as $\varepsilon \to 0^+$. Next, given $d, w, n \geq 1$, we define a feed-forward neural net with ReLU activations, input dimension $d$, hidden layer width $w$, depth $n$, and output dimension $1$ to be any member of the finite-dimensional family of functions

$$\mathrm{ReLU} \circ A_n \circ \cdots \circ \mathrm{ReLU} \circ A_2 \circ \mathrm{ReLU} \circ A_1 \tag{4}$$

that map $[0,1]^d$ to $\mathbb{R}_+$. In (4), the maps $A_1: \mathbb{R}^d \to \mathbb{R}^w$, $A_2, \ldots, A_{n-1}: \mathbb{R}^w \to \mathbb{R}^w$, and $A_n: \mathbb{R}^w \to \mathbb{R}$ are affine transformations, and for every $m \geq 1$,

$$\mathrm{ReLU}(x_1, \ldots, x_m) = (\max\{0, x_1\}, \ldots, \max\{0, x_m\}).$$

We often denote such a net by $\mathcal{N}$ and write

$$f_{\mathcal{N}}(x) := \mathrm{ReLU} \circ A_n \circ \cdots \circ \mathrm{ReLU} \circ A_2 \circ \mathrm{ReLU} \circ A_1(x)$$

for the function it computes. Our first result contrasts the width and depth required to approximate continuous, convex, and smooth functions by ReLU nets.
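Before stating the results, it may help to see the family (4) in code. The Python sketch below (with arbitrary illustrative weights, not tied to any theorem in this paper) composes affine maps with the coordinatewise ReLU exactly as in the definition above.

```python
def relu(v):
    # Coordinatewise ReLU.
    return [max(0.0, vi) for vi in v]

def affine(W, b):
    # Affine map A(x) = Wx + b, with W given as a list of rows.
    return lambda x: [sum(w * xj for w, xj in zip(row, x)) + bi
                      for row, bi in zip(W, b)]

def relu_net(affines):
    # The composition ReLU ∘ A_n ∘ ... ∘ ReLU ∘ A_1 of (4).
    def f(x):
        for A in affines:
            x = relu(A(x))
        return x
    return f

# Toy net: input dimension 1, width 2, depth 2 (the weights are arbitrary).
A1 = affine([[1.0], [-1.0]], [0.0, 1.0])
A2 = affine([[1.0, 1.0]], [0.0])
f = relu_net([A1, A2])   # f(x) = ReLU(ReLU(x) + ReLU(1 - x))
```

Note that, because of the final ReLU, any such net computes a positive function, which is why the definitions above restrict attention to positive targets.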

###### Theorem 1.

Let $d \geq 1$, and let $f: [0,1]^d \to \mathbb{R}_+$ be a positive function. We have the following three cases:

1. ($f$ is continuous):

There exists a sequence of feed-forward neural nets $\mathcal{N}_k$ with ReLU activations, input dimension $d$, hidden layer width $d+2$, and output dimension $1$ such that

$$\lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \tag{5}$$

In particular, $w_{\min}(d) \leq d+2$. Moreover, write $\omega_f$ for the modulus of continuity of $f$, and fix $\varepsilon > 0$. There exists a feed-forward neural net $\mathcal{N}_\varepsilon$ with ReLU activations, input dimension $d$, hidden layer width $d+3$, output dimension $1$, and

$$\mathrm{depth}(\mathcal{N}_\varepsilon) = \frac{2 \cdot d!}{\omega_f^{-1}(\varepsilon)^d}, \tag{6}$$

where $\omega_f^{-1}(\varepsilon) := \sup\{\delta > 0 \mid \omega_f(\delta) \leq \varepsilon\}$,

such that

$$\|f - f_{\mathcal{N}_\varepsilon}\|_{C^0} \leq \varepsilon. \tag{7}$$
2. ($f$ is convex):

There exists a sequence of feed-forward neural nets $\mathcal{N}_k$ with ReLU activations, input dimension $d$, hidden layer width $d+1$, and output dimension $1$ such that

$$\lim_{k \to \infty} \|f - f_{\mathcal{N}_k}\|_{C^0} = 0. \tag{8}$$

Hence, $w^{\mathrm{conv}}_{\min}(d) \leq d+1$. Further, there exists $C > 0$ such that if $f$ is both convex and Lipschitz with Lipschitz constant $L$, then the nets $\mathcal{N}_k$ in (8) can be taken to satisfy

$$\mathrm{depth}(\mathcal{N}_k) = k+1, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \leq C L d^{3/2} k^{-2/d}. \tag{9}$$
3. ($f$ is smooth):

There exists a constant $K$ depending only on $d$ and a constant $C_f$ depending only on the maximum of the first $K$ derivatives of $f$ such that for every $k \geq 1$ the width $d+2$ nets $\mathcal{N}_k$ in (5) can be chosen so that

$$\mathrm{depth}(\mathcal{N}_k) = k+2, \qquad \|f - f_{\mathcal{N}_k}\|_{C^0} \leq C_f\, k^{-1/d}. \tag{10}$$

The main novelty of Theorem 1 is the width estimate $d+1$ and the quantitative depth estimates (9) for convex functions, as well as the analogous estimates (6) and (7) for continuous functions. Let us briefly explain the origin of the other estimates. The relation (5) and the corresponding estimate $w_{\min}(d) \leq d+2$ are a combination of the well-known fact that ReLU nets with one hidden layer can approximate any continuous function and a simple procedure by which a net with input dimension $d$ and a single hidden layer of width $n$ can be replaced by another net that computes the same function but has depth $n+2$ and width $d+2$. For these width $d+2$ nets, we are unaware of how to obtain quantitative estimates on the depth required to approximate a fixed continuous function to a given precision. At the expense of changing the width of our nets from $d+2$ to $d+3$, however, we furnish the estimates (6) and (7). On the other hand, using Theorem 3.1 in [11], when $f$ is sufficiently smooth, we obtain the depth estimates (10) for width $d+2$ nets.

Our next result concerns the exact representation of piecewise affine functions by ReLU nets. Instead of measuring the complexity of such a function by its Lipschitz constant or modulus of continuity, the complexity of a piecewise affine function can be measured by the minimal number of affine pieces needed to define it.

###### Theorem 2.

Let $d \geq 1$, and let $f: [0,1]^d \to \mathbb{R}_+$ be the function computed by some ReLU net with input dimension $d$, output dimension $1$, and arbitrary width. There exist $N, M \geq 1$ and affine functions $g_\alpha, h_\beta: [0,1]^d \to \mathbb{R}$ such that $f$ can be written as the difference of positive convex functions:

$$f = g - h, \qquad g := \max_{1 \leq \alpha \leq N} g_\alpha, \qquad h := \max_{1 \leq \beta \leq M} h_\beta. \tag{11}$$

Moreover, there exists a feed-forward neural net $\mathcal{N}$ with ReLU activations, input dimension $d$, hidden layer width $d+3$, output dimension $1$, and

$$\mathrm{depth}(\mathcal{N}) = 2(M+N) \tag{12}$$

that computes $f$ exactly. Finally, if $f$ is convex (and hence $h$ vanishes), then the width of $\mathcal{N}$ can be taken to be $d+1$ and the depth can be taken to be $N+1$.

The fact that the function computed by a ReLU net can be written as in (11) follows from Theorem 2.1 in [1]. The novelty in Theorem 2 is therefore the uniform width estimate $d+3$ in the representation of an arbitrary function computed by a ReLU net, and the width estimate $d+1$ for convex functions. Theorem 2 will be used in the proof of Theorem 1.
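As a concrete instance of the decomposition (11), consider the tent function $f(x) = \min\{2x, 2-2x\}$ on $[0,1]$ (our own example, chosen for illustration): it is computed by a small ReLU net, and it equals $g - h$ with $g \equiv 2$ (a maximum over $N = 1$ affine piece) and $h = \max\{2x, 2-2x\}$ ($M = 2$ pieces), both positive and convex on $[0,1]$.

```python
def tent(x):
    # f(x) = min{2x, 2 - 2x}: a piecewise affine "hat" on [0, 1].
    return min(2 * x, 2 - 2 * x)

def g(x):
    return 2.0                     # g: max of a single affine piece (N = 1)

def h(x):
    return max(2 * x, 2 - 2 * x)   # h: positive and convex on [0, 1] (M = 2)

# f = g - h on [0, 1], an instance of the decomposition (11).
assert all(abs(tent(i / 100) - (g(i / 100) - h(i / 100))) < 1e-12
           for i in range(101))
```

The identity used here is simply $\min\{a, b\} = a + b - \max\{a, b\}$ with $a = 2x$, $b = 2-2x$.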

## 3. Relation to Previous Work

1. Theorems 1-2 are “deep and narrow” analogs of the well-known “shallow and wide” universal approximation results (e.g. Cybenko [4] and Hornik-Stinchcombe-White [7]) for feed-forward neural nets. Those articles show that essentially any scalar function on the $d$-dimensional unit cube can be arbitrarily well-approximated by a feed-forward neural net with a single hidden layer of arbitrary width. Such results hold for a wide class of nonlinear activations but are not particularly illuminating from the point of view of understanding the expressive advantages of depth in neural nets.

2. The results in this article complement the work of Liao-Mhaskar-Poggio [8] and Mhaskar-Poggio [11], who consider the advantages of depth for representing certain hierarchical or compositional functions by neural nets with both ReLU and non-ReLU activations. Their results (e.g. Theorem 1 in [8] and Theorem 3.1 in [11]) give bounds on the width required for approximation, both for shallow nets and for certain deep hierarchical nets.

3. Theorems 1-2 are also quantitative analogs of Corollary 2.2 and Theorem 2.4 in the work of Arora-Basu-Mianjy-Mukherjee [1]. Their results give bounds on the depth of a ReLU net needed to compute exactly a piecewise linear function of $d$ variables. However, except when $d = 1$, they do not obtain an estimate on the number of neurons in such a network and hence cannot bound the width of the hidden layers.

4. Our results are related to Theorems II.1 and II.4 of Rolnick-Tegmark [14], which are themselves extensions of Lin-Rolnick-Tegmark [9]. Their results give lower bounds on the total size (number of neurons) of a neural net (with non-ReLU activations) that approximates sparse multivariable polynomials. Their bounds do not, however, imply control on the width of such networks that depends only on the number of variables.

5. This work was inspired in part by questions raised in the work of Telgarsky [15, 16, 17]. In particular, in Theorems 1.1 and 1.2 of [15], Telgarsky constructs interesting examples of sawtooth functions that can be computed efficiently by deep, narrow ReLU nets but cannot be well-approximated by shallower networks with a similar number of parameters.

6. Theorems 1-2 are quantitative statements about the expressive power of depth without the aid of width. This topic, usually without considering bounds on the width, has been taken up by many authors. We refer the reader to [12, 13] for several interesting quantitative measures of the complexity of functions computed by deep neural nets.

7. Finally, we refer the reader to the interesting work of Yarotsky [18], which provides bounds on the total number of parameters in a net needed to approximate a given class of functions (mainly balls in various Sobolev spaces).

## 4. Acknowledgements

It is a pleasure to thank Elchanan Mossel and Leonid Hanin for many helpful discussions. This paper originated while I attended EM’s class on deep learning [10]. In particular, I would like to thank him for suggesting proving quantitative bounds in Theorem 2 and for suggesting that a lower bound can be obtained by taking piecewise linear functions with many different directions. He also pointed out that the width estimates for continuous functions in Theorem 1 were sub-optimal in a previous draft. I would also like to thank Leonid Hanin for detailed comments on several previous drafts and for useful references to results in approximation theory. I am also grateful to Brandon Rule and Matus Telgarsky for comments on an earlier version of this article. I am also grateful to BR for the original suggestion to investigate the expressivity of narrow neural nets. I also would like to thank Max Kleiman-Weiner for useful comments and discussion. Finally, I thank Zhou Lu for pointing out a serious error in what used to be Theorem 3 in a previous version of this article. I have removed that result.

## 5. Proof of Theorem 2

We first treat the case

$$f = \max_{1 \leq \alpha \leq N} g_\alpha, \qquad g_\alpha: [0,1]^d \to \mathbb{R} \ \text{affine},$$

when $f$ is convex. We seek to show that $f$ can be exactly represented by a ReLU net with input dimension $d$, hidden layer width $d+1$, and depth $N+1$. Our proof relies on the following observation.

###### Lemma 3.

Fix $d \geq 1$, let $T: [0,1]^d \to \mathbb{R}$ be an arbitrary function, and let $L: \mathbb{R}^d \to \mathbb{R}$ be affine. Define an invertible affine transformation $A: \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by

$$A(x, y) = (x, L(x) + y).$$

Then the image of the graph of $T$ under

$$A \circ \mathrm{ReLU} \circ A^{-1}$$

is the graph of $\max\{T, L\}$, viewed as a function on $[0,1]^d$.

###### Proof.

We have $A^{-1}(x, y) = (x, y - L(x))$. Since every $x \in [0,1]^d$ has non-negative coordinates, $\mathrm{ReLU}$ fixes the first $d$ coordinates, and hence for each $x \in [0,1]^d$ we have

$$A \circ \mathrm{ReLU} \circ A^{-1}(x, T(x)) = \left(x,\, (T(x) - L(x))\,\mathbf{1}_{\{T(x) - L(x) > 0\}} + L(x)\right) = \left(x,\, \max\{T(x), L(x)\}\right). \qquad \blacksquare$$
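Lemma 3 admits a quick numerical sanity check. In the sketch below, the particular $T$ and affine $L$ are our own illustrative choices; the map sends a point $(x, T(x))$ on the graph of $T$ to $(x, \max\{T(x), L(x)\})$, provided the coordinates of $x$ are non-negative.

```python
def lemma3_map(L, x, y):
    # A ∘ ReLU ∘ A^{-1} for A(x, y) = (x, L(x) + y); the coordinates of x are
    # assumed non-negative, so ReLU leaves them unchanged.
    u = max(0.0, y - L(x))    # ReLU applied after A^{-1}(x, y) = (x, y - L(x))
    return (x, L(x) + u)      # apply A

# Illustrative choices of T (arbitrary) and L (affine); not from the paper.
T = lambda x: x[0] * (1 - x[0])
L = lambda x: 0.2 - 0.1 * x[0]

# Points on the graph of T are sent to the graph of max{T, L}.
assert all(abs(lemma3_map(L, [i / 10], T([i / 10]))[1]
               - max(T([i / 10]), L([i / 10]))) < 1e-12
           for i in range(11))
```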

We now construct a neural net that computes $f = \max_{1 \leq \alpha \leq N} g_\alpha$. Define invertible affine functions $A_\alpha: \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by

$$A_\alpha(x, x_{d+1}) := (x, g_\alpha(x) + x_{d+1}), \qquad x = (x_1, \ldots, x_d),$$

and set

$$H_\alpha := A_\alpha \circ \mathrm{ReLU} \circ A_\alpha^{-1}.$$

Further, define

$$H_{\mathrm{out}} := \mathrm{ReLU} \circ \langle e_{d+1}, \cdot \rangle, \tag{13}$$

where $e_{d+1}$ is the $(d+1)$st standard basis vector, so that $\langle e_{d+1}, \cdot \rangle$ is the linear map from $\mathbb{R}^{d+1}$ to $\mathbb{R}$ sending $(x_1, \ldots, x_{d+1})$ to $x_{d+1}$. Finally, set

$$H_{\mathrm{in}} := \mathrm{ReLU} \circ (\mathrm{id}, 0),$$

where $(\mathrm{id}, 0): \mathbb{R}^d \to \mathbb{R}^{d+1}$ maps $[0,1]^d$ onto the graph of the zero function. Note that the $\mathrm{ReLU}$ in this initial layer acts linearly, since all of its inputs are non-negative. With this notation, repeatedly using Lemma 3, we find that the composition

$$H_{\mathrm{out}} \circ H_N \circ \cdots \circ H_1 \circ H_{\mathrm{in}}$$

therefore has input dimension $d$, hidden layer width $d+1$, depth $N+1$, and computes $\max\{0, g_1, \ldots, g_N\} = f$ exactly, where we use that $f \geq 0$ since it is computed by a ReLU net. ∎
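The construction above is easy to implement. The sketch below (for $d = 1$, with an illustrative convex piecewise affine $f$ of our choosing) starts from the graph of the zero function and chains the maps $H_\alpha$ of Lemma 3, carrying only the width-$(d+1)$ state $(x, y)$ from layer to layer.

```python
def convex_relu_net(gs):
    # Width-(d+1) net (d = 1 here) computing max{0, g_1, ..., g_N} for
    # affine g_alpha(x) = a*x + b, via H_out ∘ H_N ∘ ... ∘ H_1 ∘ H_in.
    # The ReLU on the x slot is omitted: for x in [0, 1] it is the identity.
    def f(x):
        y = 0.0                          # H_in: lift x to the graph of 0
        for a, b in gs:
            g = a * x + b
            y = g + max(0.0, y - g)      # H_alpha: y <- max{y, g_alpha(x)}
        return max(0.0, y)               # H_out: keep the last coordinate
    return f

# Illustrative convex piecewise affine f = max{2x - 1, 1 - 2x} on [0, 1].
f = convex_relu_net([(2.0, -1.0), (-2.0, 1.0)])
```

Each pass through the loop is one application of Lemma 3: it replaces the current running value $y$ by $\max\{y, g_\alpha(x)\}$ using a single ReLU.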

Next, consider the general case, in which $f$ is given by

$$f = g - h, \qquad g = \max_{1 \leq \alpha \leq N} g_\alpha, \qquad h = \max_{1 \leq \beta \leq M} h_\beta,$$

as in (11). For this situation, we use a different way of computing the maximum of two numbers with a ReLU net.

###### Lemma 4.

There exists a ReLU net $\mathcal{M}$ with input dimension $2$, hidden layer width $2$, output dimension $1$, and depth $2$ such that

$$f_{\mathcal{M}}(x, y) = \max\{x, y\}, \qquad x \in \mathbb{R},\ y \in \mathbb{R}_+.$$
###### Proof.

Set $A_1(x, y) := (x - y, y)$ and $A_2(u, v) := u + v$, and define

$$\mathcal{M} = \mathrm{ReLU} \circ A_2 \circ \mathrm{ReLU} \circ A_1.$$

We have, for each $x \in \mathbb{R}$ and $y \in \mathbb{R}_+$,

$$f_{\mathcal{M}}(x, y) = \mathrm{ReLU}\left((x - y)\,\mathbf{1}_{\{x - y > 0\}} + y\right) = \max\{x, y\},$$

as desired. ∎
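In code, the depth-2 net of Lemma 4 is just two ReLU layers (here with the affine maps $A_1(x,y) = (x-y, y)$ and $A_2(u,v) = u+v$, one natural choice consistent with the displayed computation):

```python
def max_net(x, y):
    # Depth-2, width-2 ReLU net computing max{x, y} when y >= 0.
    u, v = max(0.0, x - y), max(0.0, y)   # ReLU ∘ A_1, A_1(x, y) = (x - y, y)
    return max(0.0, u + v)                # ReLU ∘ A_2, A_2(u, v) = u + v

assert max_net(3.0, 1.0) == 3.0
assert max_net(-2.0, 1.0) == 1.0
```

The restriction $y \geq 0$ matters: it guarantees $\mathrm{ReLU}(y) = y$, so the second coordinate survives the first layer intact.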

We now describe how to construct a net with input dimension $d$, hidden layer width $d+3$, output dimension $1$, and depth $2(M+N)$ that exactly computes $f = g - h$. We use width $d$ to copy the input $x$, width $2$ to compute successive maxima of the affine functions $g_\alpha$ (and then $h_\beta$) using the net from Lemma 4 above, and width $1$ as memory in which we store $g(x)$ while computing $h(x)$. The final layer computes the difference $g(x) - h(x)$. ∎

## 6. Proof of Theorem 1

We begin by showing (8) and (9). Suppose $f: [0,1]^d \to \mathbb{R}_+$ is convex, and fix $\varepsilon > 0$. A simple discretization argument shows that there exists a piecewise affine convex function $f_\varepsilon$ such that $\|f - f_\varepsilon\|_{C^0} \leq \varepsilon$. By Theorem 2, $f_\varepsilon$ can be exactly represented by a ReLU net with hidden layer width $d+1$. This proves (8). In the case that $f$ is Lipschitz, we use the following, a special case of Lemma 4.1 in [3].

###### Proposition 5.

Suppose $f: [0,1]^d \to \mathbb{R}$ is convex and Lipschitz with Lipschitz constant $L$. Then for every $k \geq 1$ there exist affine maps $A_1, \ldots, A_k$ such that

$$\left\| f - \sup_{1 \leq j \leq k} A_j \right\|_{C^0} \leq 72\, L\, d^{3/2}\, k^{-2/d}.$$
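Proposition 5 is in the spirit of classical max-affine approximation: for a differentiable convex function, the maximum of tangent planes at $k$ sample points already converges uniformly. A minimal illustration (our own, for $d = 1$ with $f(x) = x^2$ and tangent lines on a grid; this only checks a uniform error bound, not the constant in the proposition):

```python
def max_affine(k):
    # Tangent lines to f(x) = x^2 at k grid points t: l_t(x) = 2 t x - t^2.
    ts = [j / (k - 1) for j in range(k)]
    return lambda x: max(2 * t * x - t * t for t in ts)

f = lambda x: x * x
approx = max_affine(5)                      # tangent spacing h = 1/4
err = max(abs(f(i / 200) - approx(i / 200)) for i in range(201))
assert err <= (1 / 4) ** 2 / 4 + 1e-12      # uniform error is h^2 / 4 here
```

Since each tangent line lies below the convex $f$, the max-affine estimator approximates $f$ from below, and the worst error occurs midway between consecutive tangent points.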

Combining this result with Theorem 2 proves (9). We turn to checking (5) and (10). We need the following observation, which seems to be well-known but is not written down in the literature.

###### Lemma 6.

Let $\mathcal{N}$ be a net with input dimension $d$, a single hidden layer of width $n$, and output dimension $1$. There exists another net $\widetilde{\mathcal{N}}$ that computes the same function as $\mathcal{N}$ but has input dimension $d$, hidden layer width $d+2$, and depth $n+2$.

###### Proof.

Denote by $A_1, \ldots, A_n$ the affine functions computed by the neurons in the hidden layer of $\mathcal{N}$, so that

$$f_{\mathcal{N}}(x) = \mathrm{ReLU}\left(b + \sum_{j=1}^{n} c_j\, \mathrm{ReLU}(A_j(x))\right).$$

Let $T > 0$ be sufficiently large that

$$T + \sum_{j=1}^{k} c_j\, \mathrm{ReLU}(A_j(x)) > 0, \qquad \forall\, 1 \leq k \leq n,\ x \in [0,1]^d.$$

The affine transformations computed by the hidden layers of $\widetilde{\mathcal{N}}$ are then

$$\widetilde{A}_1(x) := (x, A_1(x), T) \quad \text{and} \quad \widetilde{A}_{n+2}(x, y, z) = z - T + b, \qquad x \in \mathbb{R}^d,\ y, z \in \mathbb{R},$$

and

$$\widetilde{A}_j(x, y, z) = (x, A_j(x), z + c_{j-1}\, y), \qquad j = 2, \ldots, n+1,$$

with the convention $A_{n+1} \equiv 0$.

We are essentially using width $d$ to copy the input variable $x$, width $1$ to compute each $\mathrm{ReLU}(A_j(x))$, and width $1$ to store the output. ∎
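Lemma 6 can be checked numerically. The sketch below ($d = 1$, with illustrative weights of our choosing) converts a one-hidden-layer net into the deep, width-$(d+2)$ form, carrying the triple $(x,\ \text{current } A_j(x),\ \text{running sum} + T)$ through the layers.

```python
def shallow_net(x, A, c, b):
    # One-hidden-layer net: f(x) = ReLU(b + sum_j c_j ReLU(a_j x + b_j)).
    return max(0.0, b + sum(cj * max(0.0, aj * x + bj)
                            for (aj, bj), cj in zip(A, c)))

def deep_net(x, A, c, b, T):
    # Width-3 (= d + 2 for d = 1) net from Lemma 6.  The first slot carries x
    # (fixed by ReLU since x >= 0), the second computes each ReLU(A_j(x)),
    # and the third accumulates the sum, shifted by T so it stays positive.
    n = len(A)
    y, z = max(0.0, A[0][0] * x + A[0][1]), T          # layer ~A_1
    for j in range(1, n + 1):                          # layers ~A_2 .. ~A_{n+1}
        aj, bj = A[j] if j < n else (0.0, 0.0)         # convention A_{n+1} = 0
        y, z = max(0.0, aj * x + bj), max(0.0, z + c[j - 1] * y)
    return max(0.0, z - T + b)                         # final layer ~A_{n+2}

# Illustrative weights; T = 5 keeps the accumulator positive on [0, 1].
A, c, b = [(1.0, 0.0), (-1.0, 1.0), (2.0, -0.5)], [1.0, -0.5, 0.25], 0.1
assert all(abs(shallow_net(t / 10, A, c, b) - deep_net(t / 10, A, c, b, 5.0)) < 1e-9
           for t in range(11))
```

The shift by $T$ is exactly the role of the condition displayed in the proof: it keeps the accumulator slot positive at every layer, so the intermediate ReLUs act as the identity on it.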

Recall that positive continuous functions on $[0,1]^d$ can be arbitrarily well-approximated by smooth functions and hence by ReLU nets with a single hidden layer (see e.g. Theorem 3.1 in [11]). The relation (5) therefore follows from Lemma 6. Similarly, by Theorem 3.1 in [11], if $f$ is smooth, then there exist $K$ (depending only on $d$) and a constant $C_f$ depending only on the maximum of the first $K$ derivatives of $f$ such that

$$\inf_{\mathcal{N}} \|f - f_{\mathcal{N}}\|_{C^0} \leq C_f\, n^{-1/d},$$

where the infimum is over nets $\mathcal{N}$ with a single hidden layer of width $n$. Combining this with Lemma 6 proves (10).

It remains to prove (6) and (7). To do this, fix a positive continuous function $f$ with modulus of continuity $\omega_f$. Recall that the volume of the unit $d$-simplex is $1/d!$, and fix $\varepsilon > 0$. Consider the partition

$$[0,1]^d = \bigcup_{j=1}^{d!/\omega_f^{-1}(\varepsilon)^d} P_j$$

of $[0,1]^d$ into $d!/\omega_f^{-1}(\varepsilon)^d$ copies of $\omega_f^{-1}(\varepsilon)$ times the standard $d$-simplex. Define $f_\varepsilon$ to be the piecewise linear approximation to $f$ obtained by setting $f_\varepsilon$ equal to $f$ on the vertices of the $P_j$'s and taking $f_\varepsilon$ to be affine on their interiors. Since the diameter of each $P_j$ is at most $\omega_f^{-1}(\varepsilon)$, we have

$$\|f - f_\varepsilon\|_{C^0} \leq \varepsilon.$$

Next, since $f_\varepsilon$ is a piecewise affine function, by Theorem 2.1 in [1] (see Theorem 2), we may write

$$f_\varepsilon = g_\varepsilon - h_\varepsilon,$$

where $g_\varepsilon, h_\varepsilon$ are convex, positive, and piecewise affine. Applying Theorem 2 completes the proof of (6) and (7). ∎
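For $d = 1$, the approximation scheme in this last proof is just piecewise linear interpolation on a mesh of size $\omega_f^{-1}(\varepsilon)$. A small check with $f(x) = x^2$ (for which $\omega_f(\delta) \leq 2\delta$ on $[0,1]$, so $\varepsilon/2$ is a valid mesh); the function names here are ours, for illustration only:

```python
def interp(f, m):
    # Piecewise linear interpolation of f on [0, 1] with m equal pieces:
    # the d = 1 analog of f_eps (affine on each piece, equal to f at vertices).
    ys = [f(j / m) for j in range(m + 1)]
    def fm(x):
        j = min(int(x * m), m - 1)
        t = x * m - j
        return (1 - t) * ys[j] + t * ys[j + 1]
    return fm

f = lambda x: x * x              # omega_f(delta) <= 2*delta on [0, 1]
eps = 0.1
fm = interp(f, round(2 / eps))   # mesh eps/2 satisfies omega_f(eps/2) <= eps
err = max(abs(f(i / 1000) - fm(i / 1000)) for i in range(1001))
assert err <= eps
```

The number of pieces here, $2/\varepsilon$, is the $d = 1$ case of the simplex count $d!/\omega_f^{-1}(\varepsilon)^d$ driving the depth bound (6).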