Understanding Deep Neural Networks with Rectified Linear Units

11/04/2016 · Raman Arora et al. · Johns Hopkins University

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give the first-ever polynomial time (in the size of data) algorithm to train to global optimality a ReLU DNN with one hidden layer, assuming the input dimension and number of nodes of the network as fixed constants. We also improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of "hard" functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k^2 hidden layers and total size k^3, such that any ReLU DNN with at most k hidden layers will require at least (1/2) k^(k+1) - 1 total nodes. Finally, we construct a family of R^n → R piecewise linear functions for n ≥ 2 (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension n at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on n has not been achieved by previous families of "hard" functions in the neural nets literature. This construction utilizes the theory of zonotopes from polyhedral theory.


1 Introduction

Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of $\mathbb{R}^n$ arbitrarily well. This universal approximation result was first given by Cybenko in 1989 for sigmoidal activation functions (Cybenko, 1989), and later generalized by Hornik to an arbitrary bounded and nonconstant activation function (Hornik, 1991). Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and are therefore PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the network (Anthony & Bartlett, 1999). However, neural network based methods were shown to be computationally hard to learn (Anthony & Bartlett, 1999) and had mixed empirical success. Consequently, DNNs fell out of favor by the late 90s.

Recently, there has been a resurgence of DNNs with the advent of deep learning (LeCun et al., 2015). Deep learning, loosely speaking, refers to a suite of computational techniques that have been developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave empirical evidence that if DNNs are initialized properly (for instance, using unsupervised pre-training), then we can find good solutions in a reasonable amount of runtime. This work was soon followed by a series of early successes of deep learning at significantly improving the state-of-the-art in speech recognition (Hinton et al., 2012). Since then, deep learning has received immense attention from the machine learning community, with several state-of-the-art AI systems in speech recognition, image classification, and natural language processing based on deep neural nets (Hinton et al., 2012; Dahl et al., 2013; Krizhevsky et al., 2012; Le, 2013; Sutskever et al., 2014). While there is less evidence now that pre-training actually helps, several other solutions have since been put forth to address the issue of efficiently training DNNs. These include heuristics such as dropout (Srivastava et al., 2014), but also alternate deep architectures such as convolutional neural networks (Sermanet et al., 2014), deep belief networks (Hinton et al., 2006), and deep Boltzmann machines (Salakhutdinov & Hinton, 2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable; the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., $\sigma(x) = \max\{0, x\}$, which is the focus of study in this paper.

In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs (Telgarsky, 2015, 2016; Kane & Williams, 2015; Shamir, 2016), as well as efforts to better understand the non-convex nature of the training problem of DNNs (Kawaguchi, 2016; Haeffele & Vidal, 2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka & Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep and fascinating field. In particular, our gap results are inspired by results like the ones of Hastad (1986), Razborov (1987) and Smolensky (1987), which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation.

1.1 Notation and Definitions

We extend the ReLU activation function $\sigma(x) = \max\{0, x\}$ to vectors $x \in \mathbb{R}^n$ through entry-wise application: $\sigma(x) = (\max\{0, x_1\}, \ldots, \max\{0, x_n\})$. For any $m, n \in \mathbb{N}$, let $\mathbb{A}^{m,n}$ and $\mathbb{L}^{m,n}$ denote the class of affine and linear transformations from $\mathbb{R}^m$ to $\mathbb{R}^n$, respectively.

Definition 1.

[ReLU DNNs, depth, width, size] For any number of hidden layers $k \in \mathbb{N}$ and input and output dimensions $w_0, w_{k+1} \in \mathbb{N}$, an $\mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ ReLU DNN is given by specifying a sequence of $k$ natural numbers $w_1, w_2, \ldots, w_k$ representing the widths of the hidden layers, a set of $k$ affine transformations $T_i : \mathbb{R}^{w_{i-1}} \to \mathbb{R}^{w_i}$ for $i = 1, \ldots, k$, and a linear transformation $T_{k+1} : \mathbb{R}^{w_k} \to \mathbb{R}^{w_{k+1}}$ corresponding to the weights of the hidden layers. Such a ReLU DNN is called a $(k+1)$-layer ReLU DNN, and is said to have $k$ hidden layers. The function $f : \mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ computed or represented by this ReLU DNN is

(1.1)    $f = T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ T_2 \circ \sigma \circ T_1,$

where $\circ$ denotes function composition and $\sigma$ is applied entry-wise. The depth of a ReLU DNN is defined as $k+1$. The width of a ReLU DNN is $\max\{w_1, \ldots, w_k\}$. The size of the ReLU DNN is $w_1 + w_2 + \cdots + w_k$.
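
To make the composition in (1.1) concrete, the following minimal NumPy sketch (our own illustration; the widths and random weights are placeholders, not taken from the paper) evaluates a ReLU DNN as alternating affine maps and entry-wise ReLUs.

```python
import numpy as np

def relu(x):
    # entry-wise ReLU: sigma(x) = max{0, x}
    return np.maximum(0.0, x)

def relu_dnn(x, affine_layers, top_linear):
    """Evaluate f = T_{k+1} o sigma o T_k o ... o sigma o T_1 at x.

    affine_layers: list of (W_i, b_i) pairs with T_i(z) = W_i z + b_i
    top_linear:    matrix of the final linear map T_{k+1}
    """
    z = x
    for W, b in affine_layers:
        z = relu(W @ z + b)      # hidden layer: affine map followed by ReLU
    return top_linear @ z        # output layer: linear map, no nonlinearity

# Illustrative instance: w0 = 2, hidden widths (3, 3), scalar output (depth 3, size 6).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
          (rng.standard_normal((3, 3)), rng.standard_normal(3))]
W_top = rng.standard_normal((1, 3))
print(relu_dnn(np.array([0.5, -1.0]), layers, W_top))
```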

Definition 2.

We denote the class of $\mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ ReLU DNNs with $k$ hidden layers of widths $w_1, \ldots, w_k$ by $\mathcal{F}_{\{w_1, \ldots, w_k\}}$, i.e.,

(1.2)    $\mathcal{F}_{\{w_1, \ldots, w_k\}} := \{\, T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ \sigma \circ T_1 \;:\; T_i \in \mathbb{A}^{w_{i-1}, w_i} \text{ for } i = 1, \ldots, k,\ T_{k+1} \in \mathbb{L}^{w_k, w_{k+1}} \,\}.$
Definition 3.

[Piecewise linear functions] We say a function $f : \mathbb{R}^n \to \mathbb{R}$ is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is $\mathbb{R}^n$, and $f$ is affine linear over each polyhedron (note that the definition automatically implies continuity of the function because the affine regions are closed and cover $\mathbb{R}^n$, and affine functions are continuous). The number of pieces of $f$ is the number of maximal connected subsets of $\mathbb{R}^n$ over which $f$ is affine linear (which is finite).
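
For intuition in the one-dimensional case, the following sketch (a numerical heuristic of our own, not part of the paper) estimates the number of affine pieces of a univariate PWL function by sampling a fine grid and counting runs of constant slope; it assumes breakpoints are well separated relative to the grid.

```python
import numpy as np

def count_pieces_1d(f, lo=-10.0, hi=10.0, samples=100001, tol=1e-6):
    """Heuristically count the affine pieces of a univariate PWL function
    by detecting changes in the finite-difference slope on a fine grid."""
    xs = np.linspace(lo, hi, samples)
    ys = np.array([f(x) for x in xs])
    slopes = np.diff(ys) / np.diff(xs)
    changes = np.abs(np.diff(slopes)) > tol
    # a breakpoint falling strictly inside a grid cell flags two adjacent
    # slope changes; merge each run of flags into a single breakpoint
    breakpoints, prev = 0, False
    for c in changes:
        if c and not prev:
            breakpoints += 1
        prev = c
    return breakpoints + 1

# Example: f(x) = |x| + max(0, x - 1) has 3 pieces (breakpoints at 0 and 1).
print(count_pieces_1d(lambda x: abs(x) + max(0.0, x - 1.0)))  # -> 3
```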

Many of our important statements will be phrased in terms of the following simplex.

Definition 4.

Let $M > 0$ be any positive real number and $p \in \mathbb{N}$ be any natural number. Define the following set: $\Delta_M^p := \{\, x \in \mathbb{R}^p : 0 < x_1 < x_2 < \cdots < x_p < M \,\}$.

2 Exact characterization of function class represented by ReLU DNNs

One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affect their expressive power. It is clear from the definition that any function from $\mathbb{R}^n$ to $\mathbb{R}^m$ represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is, any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions.

Theorem 2.1.

Every $\mathbb{R}^n \to \mathbb{R}$ ReLU DNN represents a piecewise linear function, and every piecewise linear function $\mathbb{R}^n \to \mathbb{R}$ can be represented by a ReLU DNN with depth at most $\lceil \log_2(n+1) \rceil + 1$.

Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every piecewise linear function $p : \mathbb{R}^n \to \mathbb{R}$, there exists a finite set of affine linear functions $\ell_1, \ldots, \ell_k$ and subsets $S_1, \ldots, S_m \subseteq \{1, \ldots, k\}$ (not necessarily disjoint), where each $S_j$ is of cardinality at most $n+1$, such that

(2.1)    $p = \sum_{j=1}^{m} s_j \Big( \max_{i \in S_j} \ell_i \Big),$

where $s_j \in \{-1, +1\}$ for all $j = 1, \ldots, m$. Since a function of the form $\max_{i \in S_j} \ell_i$ is a piecewise linear convex function with at most $n+1$ pieces (because $|S_j| \le n+1$), Equation (2.1) says that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear combination of piecewise linear convex functions, each of which has at most $n+1$ affine pieces. Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material) show that composition, addition, and pointwise maximum of PWL functions are also representable by ReLU DNNs. In particular, in Lemma D.3 we note that the maximum of two numbers is implementable by a two-layer ReLU network and use this construction in an inductive manner to show that the maximum of $n+1$ numbers can be computed using a ReLU DNN with depth at most $\lceil \log_2(n+1) \rceil + 1$.
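
As a concrete illustration of the step used in Lemma D.3, the two-number maximum can be written with ReLUs as $\max\{x, y\} = \tfrac{1}{2}(x + y) + \tfrac{1}{2}\big(\sigma(x - y) + \sigma(y - x)\big)$, since $\sigma(z) + \sigma(-z) = |z|$. The following minimal sketch (function names are ours, not the paper's) uses this identity in a pairwise reduction to compute the maximum of $m$ numbers in $\lceil \log_2 m \rceil$ rounds.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def max2_relu(x, y):
    # max{x, y} = (x + y)/2 + |x - y|/2, with |z| = relu(z) + relu(-z)
    return 0.5 * (x + y) + 0.5 * (relu(x - y) + relu(y - x))

def max_relu(values):
    # pairwise reduction: ceil(log2(m)) rounds of two-input maxima
    vals = list(values)
    while len(vals) > 1:
        nxt = [max2_relu(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

print(max_relu([3.0, -1.5, 7.2, 0.0, 4.4]))  # -> 7.2
```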

While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on $\mathbb{R}^n$, it does not give any tight bounds on the size of the networks needed to represent a given piecewise linear function. For $n = 1$, we give tight bounds on size as follows:

Theorem 2.2.

Given any piecewise linear function $\mathbb{R} \to \mathbb{R}$ with $p$ pieces, there exists a 2-layer DNN with at most $p$ nodes that can represent it. Moreover, any 2-layer DNN that represents such a function has size at least $p - 1$.

Finally, the main result of this section follows from Theorem 2.1 and the well-known facts that piecewise linear functions are dense in the family of compactly supported continuous functions, and that the family of compactly supported continuous functions is dense in $L^1(\mathbb{R}^n)$ (Royden & Fitzpatrick, 2010). Recall that $L^1(\mathbb{R}^n)$ is the space of Lebesgue integrable functions $f$ such that $\int |f|\, d\mu < \infty$, where $\mu$ is the Lebesgue measure on $\mathbb{R}^n$ (see Royden & Fitzpatrick (2010)).

Theorem 2.3.

Every function in $L^1(\mathbb{R}^n)$ can be arbitrarily well-approximated in the $L^1$ norm (which for a function $f$ is given by $\|f\|_1 = \int |f|\, d\mu$) by a ReLU DNN function with at most $\lceil \log_2(n+1) \rceil$ hidden layers. Moreover, for $n = 1$, any such function can be arbitrarily well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation.

Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem 4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of Wang (Wang, 2004) to obtain their result. In subsequent work, Boris Hanin (Hanin, 2017) has, among other things, found width and depth upper bounds for ReLU net representations of positive PWL functions on $\mathbb{R}^n$. The width upper bound is $n+3$ for general positive PWL functions and $n+1$ for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs.

3 Benefits of Depth

The success of deep learning has been largely attributed to the depth of the networks, i.e., the number of successive affine transformations followed by nonlinearities, which is believed to extract hierarchical features from the data. In contrast, traditional machine learning frameworks such as support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of "hard" functions representable by ReLU DNNs, which require exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of "hard" functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. The proofs of the theorems in this section are provided in Appendix B.

3.1 Circuit lower bounds for ReLU DNNs

In this section, we are only concerned with $\mathbb{R} \to \mathbb{R}$ ReLU DNNs, i.e., both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting.

Theorem 3.1.

For every pair of natural numbers $k \ge 1$ and $w \ge 2$, there exists a family of hard functions representable by a $(k+1)$-layer ReLU DNN of width $w$ such that if any such function is also representable by a $(k'+1)$-layer ReLU DNN for any $k' \le k$, then this $(k'+1)$-layer ReLU DNN has size at least $\frac{1}{2} k' w^{k/k'} - 1$.

In fact our family of hard functions described above has a very intricate structure as stated below.

Theorem 3.2.

For every $k \ge 1$ and $w \ge 2$, every member of the family of hard functions in Theorem 3.1 has $w^k$ pieces, and this family can be parametrized by

(3.1)    $\bigcup_{M > 0} \underbrace{\Delta_M^{w-1} \times \Delta_M^{w-1} \times \cdots \times \Delta_M^{w-1}}_{k \text{ times}},$

i.e., for every point in the set above, there exists a distinct function with the stated properties.

The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully.

Corollary 3.3.

For every and , there is a family of functions defined on the real line such that every function from this family can be represented by a -layer DNN with size and if is represented by a -layer DNN, then this DNN must have size at least . Moreover, this family can be parametrized as, .

A particularly illuminating special case is obtained by choosing the parameters appropriately in Corollary 3.3:

Corollary 3.4.

For every natural number $k$, there is a family of functions parameterized by the set above such that any function $f$ from this family can be represented by a $(k^2+1)$-layer DNN with $k^3$ nodes, and every $(k+1)$-layer DNN that represents $f$ needs at least $\frac{1}{2} k^{k+1} - 1$ nodes.

We can also get hardness-of-approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (up to constant terms), using the following theorem.

Theorem 3.5.

For every $k \ge 1$ and $w \ge 2$, there exists a function $f_{k,w}$ that can be represented by a $(k+1)$-layer ReLU DNN with $w$ nodes in each layer, such that for all $\delta \in (0, 1)$ and $k' \le k$ the following holds:

where is the family of functions representable by ReLU DNNs with depth at most , and size at most .

The depth-size trade-off results in Theorems 3.1 and 3.5 extend and improve Telgarsky’s theorems from (Telgarsky, 2015, 2016) in the following three ways:

  • If we apply our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Telgarsky (2016), which are at depths $k^3$ (of size also scaling as $k^3$) and $k$, then for this purpose of approximation in the $\ell_1$ norm we would get a size lower bound for the shallower net which scales as $\Omega(2^{k^2})$, which is exponentially (in depth) larger than the lower bound of $\Omega(2^k)$ that Telgarsky can get for this scenario.

  • Telgarsky’s family of hard functions is parameterized by a single natural number $k$. In contrast, we show that for every pair of natural numbers $k$ and $w$, and every point from the set in equation (3.1), there exists a distinct "hard" function which, to be represented by a network of depth at most $k'+1$, would need a size of at least $\frac{1}{2} k' w^{k/k'} - 1$. With the extra flexibility of choosing the parameter $w$, for the purpose of showing gaps in the representation ability of deep nets we can show size lower bounds which are super-exponential in depth, as explained in Corollaries 3.3 and 3.4.

  • A characteristic feature of the “hard” functions in Boolean circuit complexity is that they are usually a countable family of functions and not a “smooth” family of hard functions. In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of “hard” functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions wasn’t demonstrated before this work.

We point out that Telgarsky’s results in (Telgarsky, 2016) apply to deep neural nets with a host of different activation functions, whereas our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan and Shamir (Shamir, 2016; Eldan & Shamir, 2016) show that there exists an $\mathbb{R}^n \to \mathbb{R}$ function that can be represented by a 3-layer DNN but requires a number of nodes exponential in $n$ to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant-depth hierarchy statement analogous to the recent result of Rossman et al. (Rossman et al., 2015). We also note that in the last few years, there has been much effort in the community to show size lower bounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017).

3.2 A continuum of hard functions for $\mathbb{R}^n \to \mathbb{R}$ ReLU DNNs for $n \ge 2$

One measure of complexity of a family of "hard" functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of the input dimension $n$ and the depth and size of the ReLU DNNs. More precisely, suppose one has a family $\mathcal{H}$ of functions such that for every $n, k, w \in \mathbb{N}$ the family contains at least one $\mathbb{R}^n \to \mathbb{R}$ function representable by a ReLU DNN with depth at most $k+1$ and maximum width at most $w$. The following definition formalizes a notion of complexity for such a family $\mathcal{H}$.

Definition 5 ($\mathrm{comp}_{\mathcal{H}}(n, k, w)$).

The measure $\mathrm{comp}_{\mathcal{H}}(n, k, w)$ is defined as the maximum number of pieces (see Definition 3) of an $\mathbb{R}^n \to \mathbb{R}$ function from $\mathcal{H}$ that can be represented by a ReLU DNN with depth at most $k+1$ and maximum width at most $w$.

Similar measures have been studied in previous works (Montufar et al., 2014; Pascanu et al., 2013; Raghu et al., 2016). The best known families are the ones from Theorem 4 of (Montufar et al., 2014) and a mild generalization of the construction of (Telgarsky, 2016) to $k$ layers of ReLU activations with width $w$; at the end of this section we explain the precise sense in which we improve on the complexities achieved by these constructions. An analysis of this complexity measure using integer programming techniques appears in (Serra et al., 2017).

Definition 6.

Let $b^1, b^2, \ldots, b^m \in \mathbb{R}^n$. The zonotope formed by $b^1, \ldots, b^m$ is defined as

$Z(b^1, \ldots, b^m) := \{\, \lambda_1 b^1 + \cdots + \lambda_m b^m \;:\; -1 \le \lambda_i \le 1 \text{ for } i = 1, \ldots, m \,\}.$

The set of vertices of $Z(b^1, \ldots, b^m)$ will be denoted by $\mathrm{vert}(Z(b^1, \ldots, b^m))$. The support function $h_{Z(b^1, \ldots, b^m)} : \mathbb{R}^n \to \mathbb{R}$ associated with the zonotope $Z(b^1, \ldots, b^m)$ is defined as

$h_{Z(b^1, \ldots, b^m)}(r) = \max_{x \in Z(b^1, \ldots, b^m)} \langle r, x \rangle.$

The following results are well-known in the theory of zonotopes (Ziegler, 1995).

Theorem 3.6.

The following are all true.

  1. $|\mathrm{vert}(Z(b^1, \ldots, b^m))| \le 2\sum_{i=0}^{n-1}\binom{m-1}{i}$. The set of $(b^1, \ldots, b^m)$ for which this does not hold at equality is a measure-zero set.

  2. $h_{Z(b^1, \ldots, b^m)}(r) = \max_{v \in \mathrm{vert}(Z(b^1, \ldots, b^m))} \langle r, v \rangle$, and is therefore a piecewise linear function with $|\mathrm{vert}(Z(b^1, \ldots, b^m))|$ pieces.

  3. $h_{Z(b^1, \ldots, b^m)}(r) = |\langle r, b^1 \rangle| + \cdots + |\langle r, b^m \rangle|$.

Definition 7 (extremal zonotope set).

The set $S(n, m)$ will denote the set of $(b^1, \ldots, b^m) \in \mathbb{R}^{n \times m}$ such that $|\mathrm{vert}(Z(b^1, \ldots, b^m))| = 2\sum_{i=0}^{n-1}\binom{m-1}{i}$. $S(n, m)$ is the so-called "extremal zonotope set", which is a subset of $\mathbb{R}^{nm}$ whose complement has zero Lebesgue measure in $\mathbb{R}^{nm}$.

Lemma 3.7.

Given any $b^1, \ldots, b^m \in \mathbb{R}^n$, there exists a 2-layer ReLU DNN with size $2m$ which represents the function $h_{Z(b^1, \ldots, b^m)}(r)$.
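
The identity in Theorem 3.6 behind Lemma 3.7 is easy to check numerically: each term $|\langle r, b^i \rangle|$ costs two ReLU units, giving a single hidden layer of $2m$ nodes. The sketch below (our own illustration, with random generators) compares the ReLU construction against a direct evaluation of $\sum_i |\langle r, b^i \rangle|$.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def zonotope_support_relu(r, B):
    """2-layer ReLU DNN of size 2m computing h_Z(r) = sum_i |<r, b^i>|.

    B is an (m, n) array whose rows are the generators b^1, ..., b^m.
    Hidden layer: [B; -B] r followed by ReLU; output: sum of hidden units.
    """
    W = np.vstack([B, -B])            # hidden weights, shape (2m, n)
    hidden = relu(W @ r)              # 2m hidden units
    return hidden.sum()               # top layer: all-ones linear map

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3))       # m = 5 generators in R^3
r = rng.standard_normal(3)
print(zonotope_support_relu(r, B), np.abs(B @ r).sum())  # the two values match
```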

Figure 1: Hard functions with two hidden layers built from a fixed set of zonotope generators. Left: a specific hard function induced by the $\ell_1$ norm, which can be seen as a composition of a one-dimensional hard function with the $\ell_1$ norm. Middle: a typical hard function with more zonotope generators; increasing the number of zonotope generators makes the function more complex. Right: a harder function from the family with the same set of generators but one more hidden layer; increasing the depth makes the function more complex. (For illustrative purposes we plot only the part of the function which lies above zero.)
Definition 8.

For $M > 0$, $p \in \mathbb{N}$ and $a \in \Delta_M^p$, we define a function $h_a : \mathbb{R} \to \mathbb{R}$ which is piecewise linear over the segments $(-\infty, 0], [0, a_1], [a_1, a_2], \ldots, [a_{p-1}, a_p], [a_p, M], [M, +\infty)$, defined as follows: $h_a(x) = 0$ for all $x \le 0$, $h_a(a_i) = M (i \bmod 2)$ for $i = 1, \ldots, p$, $h_a(M) = M ((p+1) \bmod 2)$, and for $x \ge M$, $h_a$ is a linear continuation of the piece over the interval $[a_p, M]$. Note that the function $h_a$ has $p + 2$ pieces, with the leftmost piece having slope $0$. Furthermore, for $a^1, \ldots, a^k$, we denote the composition of the functions $h_{a^1}, h_{a^2}, \ldots, h_{a^k}$ by $H_{a^1, \ldots, a^k} := h_{a^k} \circ h_{a^{k-1}} \circ \cdots \circ h_{a^1}$.

Proposition 3.8.

Given any tuple $(b^1, \ldots, b^m)$ of vectors in $\mathbb{R}^n$ and any point

$(a^1, \ldots, a^k) \in \bigcup_{M > 0} \underbrace{\Delta_M^{w-1} \times \cdots \times \Delta_M^{w-1}}_{k \text{ times}},$

the function $H_{a^1, \ldots, a^k} \circ h_{Z(b^1, \ldots, b^m)}$ has $w^k \cdot |\mathrm{vert}(Z(b^1, \ldots, b^m))|$ pieces and it can be represented by a $(k+2)$-layer ReLU DNN with size $2m + wk$.
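
To see the piece-counting mechanism concretely, here is a small sketch (our own simplification: it uses equally spaced breakpoints, i.e. one particular member of the $h_a$ family, whereas the family allows any $a \in \Delta_M^{w-1}$). It checks numerically that composing $k$ width-$w$ maps yields $w^k$ affine pieces on $[0, 1]$; pre-composing with the zonotope support function from the previous sketch extends the construction to $\mathbb{R}^n$.

```python
import numpy as np

def sawtooth(x, w):
    """Equally spaced instance of h_a on [0, 1]: splits [0, 1] into w
    sub-segments, each mapped affinely onto [0, 1] (alternating up/down)."""
    t = np.clip(x, 0.0, 1.0) * w
    seg = np.floor(t)
    frac = t - seg
    return np.where(seg.astype(int) % 2 == 0, frac, 1.0 - frac)

def composed(x, w, k):
    """H = h o h o ... o h (k times); it should have w**k pieces on [0, 1]."""
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = sawtooth(y, w)
    return y

def count_pieces(xs, ys, tol=1e-3):
    """Count affine pieces on a grid by counting slope changes."""
    slopes = np.diff(ys) / np.diff(xs)
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > tol))

w, k = 3, 4
xs = np.linspace(0.0, 1.0, w**k * 50 + 1)           # grid aligned with breakpoints
print(count_pieces(xs, composed(xs, w, k)), w**k)   # both print 81
```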

Finally, we are ready to state the main result of this section.

Theorem 3.9.

For every tuple of natural numbers $(n, k, w, m)$ with $n \ge 2$, $k \ge 1$ and $w, m \ge 2$, there exists a family of functions, which we call $\mathrm{ZONOTOPE}^{n}_{k,w,m}$, with the following properties:

  1. Every $f \in \mathrm{ZONOTOPE}^{n}_{k,w,m}$ is representable by a ReLU DNN of depth $k+2$ and size $2m + wk$, and has $\big(2\sum_{i=0}^{n-1}\binom{m-1}{i}\big)\, w^k$ pieces.

  2. Consider any . If is represented by a -layer DNN for any , then this -layer DNN has size at least .

  3. The family $\mathrm{ZONOTOPE}^{n}_{k,w,m}$ is in one-to-one correspondence with $S(n, m) \times \bigcup_{M > 0} \underbrace{\Delta_M^{w-1} \times \cdots \times \Delta_M^{w-1}}_{k \text{ times}}$.

Comparison to the results in (Montufar et al., 2014)

Firstly, we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality $n$. In contrast, we do not impose such restrictions, and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cannot be seen from their result.

Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better; one such regime is obtained by choosing the parameters $k$, $w$ and $m$ in our construction appropriately relative to $n$.

Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus.

4 Training 2-layer ReLU DNNs to global optimality

In this section we consider the following empirical risk minimization problem. Given $D$ data points $(x_j, y_j) \in \mathbb{R}^n \times \mathbb{R}$, $j = 1, \ldots, D$, find the function represented by 2-layer ReLU DNNs of width $w$ that minimizes the following optimization problem:

(4.1)    $\min_{T_1 \in \mathbb{A}^{n,w},\ T_2 \in \mathbb{L}^{w,1}} \ \frac{1}{D} \sum_{j=1}^{D} \ell\big(T_2(\sigma(T_1(x_j))),\, y_j\big),$

where $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a convex loss function (common loss functions are the squared loss, $\ell(\hat{y}, y) = (\hat{y} - y)^2$, and the hinge loss function given by $\ell(\hat{y}, y) = \max\{0, 1 - \hat{y}\, y\}$). Our main result of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality.

Theorem 4.1.

There exists an algorithm to find a global optimum of Problem (4.1) in time $O\big(2^w D^{nw}\, \mathrm{poly}(D, n, w)\big)$. Note that the running time is polynomial in the data size $D$ for fixed $n$ and $w$.

Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in a form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality.

1: function ERM($\{(x_j, y_j)\}_{j=1}^{D}$)    ▷ where $(x_j, y_j) \in \mathbb{R}^n \times \mathbb{R}$
2:     $S := \{-1, +1\}^w$    ▷ all possible instantiations of top layer weights
3:     $\mathcal{P} :=$ all possible partitions of the data into two parts
4:     $\mathrm{OPT} := +\infty$
5:     $c := 1$    ▷ counter
6:     for $s \in S$ do
7:         for $\{(P_i^+, P_i^-)\}_{i=1}^{w} \in \mathcal{P}^w$ do
8:             $\mathrm{loss}(c) :=$ minimum of the convex program that fits $(\tilde{a}_i, \tilde{b}_i)$, $i = 1, \ldots, w$, to the objective in (4.1) subject to $\tilde{a}_i \cdot x_j + \tilde{b}_i \ge 0$ for all $j \in P_i^+$ and $\tilde{a}_i \cdot x_j + \tilde{b}_i \le 0$ for all $j \in P_i^-$
9:             $c := c + 1$
10:        end for
11:        $\mathrm{OPT} := \min\{\mathrm{OPT},\ \min_{c} \mathrm{loss}(c)\}$
12:    end for
13:    return the weights $(\tilde{a}_i, \tilde{b}_i, s)$ corresponding to OPT's iterate
14: end function
Algorithm 1 Empirical Risk Minimization

Let $T_1(x) = Ax + b$ and $T_2(y) = a' \cdot y$ for $A \in \mathbb{R}^{w \times n}$, $b \in \mathbb{R}^w$ and $a' \in \mathbb{R}^w$. If we denote the $i$-th row of the matrix $A$ by $a_i$, and write $b_i$ and $a'_i$ to denote the $i$-th coordinates of the vectors $b$ and $a'$ respectively, then due to the homogeneity of ReLU gates, the network output can be represented as

$f(x) = \sum_{i=1}^{w} a'_i \max\{0,\, a_i \cdot x + b_i\} = \sum_{i=1}^{w} s_i \max\{0,\, \tilde{a}_i \cdot x + \tilde{b}_i\},$

where $\tilde{a}_i \in \mathbb{R}^n$, $\tilde{b}_i \in \mathbb{R}$ and $s_i \in \{-1, +1\}$ for all $i = 1, \ldots, w$. For any hidden node $i$, the pair $(\tilde{a}_i, \tilde{b}_i)$ induces a partition $(P_i^+, P_i^-)$ on the dataset, given by $P_i^- := \{ j : \tilde{a}_i \cdot x_j + \tilde{b}_i \le 0 \}$ and $P_i^+ := \{1, \ldots, D\} \setminus P_i^-$. Algorithm 1 proceeds by generating all combinations of the partitions $\{(P_i^+, P_i^-)\}_{i=1}^{w}$ as well as the top layer weights $s \in \{-1, +1\}^w$, and minimizing the loss $\frac{1}{D} \sum_{j=1}^{D} \ell\big( \sum_{i : j \in P_i^+} s_i (\tilde{a}_i \cdot x_j + \tilde{b}_i),\ y_j \big)$ subject to the constraints $\tilde{a}_i \cdot x_j + \tilde{b}_i \ge 0$ for all $j \in P_i^+$ and $\tilde{a}_i \cdot x_j + \tilde{b}_i \le 0$ for all $j \in P_i^-$, which are imposed for all $i = 1, \ldots, w$; this is a convex program.
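
Below is a minimal brute-force sketch of this search for the squared loss (our own illustration; the paper does not prescribe an implementation, and the use of cvxpy for the constrained least-squares subproblem is an assumption). For brevity it enumerates all $2^D$ subsets per hidden node rather than only the $O(D^n)$ hyperplane-induced partitions that give the polynomial-in-$D$ running time of Theorem 4.1, so it is only runnable for tiny instances; the point is that each inner subproblem is convex.

```python
import itertools
import numpy as np
import cvxpy as cp

def erm_2layer_relu(X, y, w):
    """Brute-force ERM for a width-w, 2-layer ReLU DNN with squared loss.

    X: (D, n) data matrix, y: (D,) targets.  Enumerates top-layer signs
    s in {-1,+1}^w and, for every hidden node, a subset P_i^+ of the data;
    each such choice leaves a convex (QP) subproblem.
    """
    D, n = X.shape
    best_val, best_params = np.inf, None
    masks = list(itertools.product([False, True], repeat=D))   # candidate P_i^+
    for s in itertools.product([-1.0, 1.0], repeat=w):
        for choice in itertools.product(masks, repeat=w):
            A, b = cp.Variable((w, n)), cp.Variable(w)
            cons, pred = [], 0
            for i in range(w):
                act = X @ A[i] + b[i]                 # affine pre-activations
                on = np.flatnonzero(choice[i])        # indices in P_i^+
                off = np.flatnonzero(~np.array(choice[i]))
                if on.size:
                    cons.append(act[on] >= 0)         # node active on P_i^+
                if off.size:
                    cons.append(act[off] <= 0)        # node inactive on P_i^-
                mask = np.array(choice[i], dtype=float)
                pred = pred + s[i] * cp.multiply(mask, act)
            prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - y) / D), cons)
            prob.solve()
            if prob.value is not None and prob.value < best_val:
                best_val = prob.value
                best_params = (A.value, b.value, np.array(s))
    return best_val, best_params

# Tiny illustrative run: three points in R^1, width w = 1.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
print(erm_2layer_relu(X, y, w=1)[0])
```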

Algorithm 1 implements the empirical risk minimization (ERM) rule for training a ReLU DNN with one hidden layer. To the best of our knowledge, there is no other known algorithm that solves the ERM problem to global optimality. We note that, due to known hardness results, exponential dependence on the input dimension is unavoidable (Blum & Rivest, 1992; Shalev-Shwartz & Ben-David, 2014); Algorithm 1 runs in time polynomial in the number of data points. To the best of our knowledge, there is no hardness result known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus our training result is a step towards resolving this gap in the complexity literature.
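
As a rough sanity check on the claimed running time (our own back-of-the-envelope computation, using the standard function-counting bound of at most $2\sum_{i=0}^{n}\binom{D-1}{i} = O(D^n)$ halfspace-induced dichotomies of $D$ points in general position in $\mathbb{R}^n$), the number of convex subproblems the search visits is $2^w$ times the number of partition combinations, which grows polynomially in $D$ for fixed $n$ and $w$.

```python
from math import comb

def num_halfspace_dichotomies(D, n):
    """Cover's bound: distinct ways an affine hyperplane can split
    D points in general position in R^n."""
    return 2 * sum(comb(D - 1, i) for i in range(n + 1))

def num_subproblems(D, n, w):
    """Convex programs visited: 2^w sign patterns times
    (number of per-node partitions)^w."""
    return 2**w * num_halfspace_dichotomies(D, n)**w

for D in (10, 100, 1000):
    print(D, num_subproblems(D, n=2, w=3))   # grows like D^{nw} = D^6
```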

A related result for improperly learning ReLUs has recently been obtained by Goel et al. (2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their result considers the notion of reliable learning as opposed to the empirical risk minimization objective considered in (4.1).

5 Discussion

The running time of the algorithm that we give in this work to find the exact global minimum of a two-layer ReLU DNN is exponential in the input dimension $n$ and the number of hidden nodes $w$. The exponential dependence on $n$ cannot be removed unless $P = NP$; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is, in our opinion, a good open question for future research. Perhaps an even bigger breakthrough would be to obtain optimal training algorithms for DNNs with two or more hidden layers; this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths, or between logarithmic and constant depths.

Acknowledgments

We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482.

References

Appendix A Expressing piecewise linear functions using ReLU DNNs

Proof of Theorem 2.2.

Any continuous piecewise linear function $\mathbb{R} \to \mathbb{R}$ which has $p$ pieces can be specified by three pieces of information: (1) $s_L$, the slope of the leftmost piece, (2) the coordinates of the non-differentiable points, specified by a tuple $\{(x_i, y_i)\}_{i=1}^{p-1}$ (indexed from left to right), and (3) $s_R$, the slope of the rightmost piece. Such a tuple uniquely specifies a piecewise linear function from $\mathbb{R} \to \mathbb{R}$ and vice versa. Given such a tuple, we construct a 2-layer DNN which computes the same piecewise linear function.

One notes that for any $a, r \in \mathbb{R}$, the function

(A.1)    $f(x) = \begin{cases} 0 & x \le a, \\ r\,(x - a) & x > a, \end{cases}$

is equal to $r \max\{0,\, x - a\}$, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form

(A.2)    $g(x) = \begin{cases} t\,(x - a) & x \le a, \\ 0 & x > a, \end{cases}$

is equal to $-t \max\{0,\, a - x\}$, which can be implemented by a 2-layer ReLU DNN with size 1. The parameters $r, t$ will be called the slopes of the function, and $a$ will be called the breakpoint of the function. If we can write the given piecewise linear function as a sum of functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that such a decomposition of any $p$-piece PWL function as a sum of $p$ flaps can always be arranged, where the breakpoints of the flaps are all contained in the breakpoints of the given function. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of the given function at the last breakpoint, $y_{p-1}$, is $0$. We now use a single function $f$ of the form (A.1) with slope $s_R$ and breakpoint $x_{p-1}$, and $p-1$ functions $g_1, \ldots, g_{p-1}$ of the form (A.2) with slopes $t_1, \ldots, t_{p-1}$ and breakpoints $x_1, \ldots, x_{p-1}$, respectively. Thus, we wish to express the given function as $f + g_1 + \cdots + g_{p-1}$. Such a decomposition would be valid if we can find values for $t_1, \ldots, t_{p-1}$ such that the slope of the above sum is $s_L$ for $x < x_1$, the slope of the above sum is $s_R$ for $x > x_{p-1}$, and for each $i \in \{1, 2, \ldots, p-1\}$ we have $f(x_i) + g_1(x_i) + \cdots + g_{p-1}(x_i) = y_i$.
The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in $t_1, \ldots, t_{p-1}$:

$t_1 + t_2 + \cdots + t_{p-1} = s_L, \qquad \sum_{j = i+1}^{p-1} t_j\,(x_i - x_j) = y_i \quad \text{for } i = 1, \ldots, p-2.$

It is easy to verify that the above set of simultaneous linear equations has a unique solution. Indeed, the last equation determines $t_{p-1}$, and then one can solve for $t_{p-2}, \ldots, t_2$ by back substitution, with the first equation finally determining $t_1$. The lower bound of $p-1$ on the size of any 2-layer ReLU DNN that expresses a $p$-piece function follows from Lemma D.6. ∎
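
A compact sketch of this flap decomposition (function and variable names are ours): it solves the triangular system above for the left-flap slopes and evaluates the resulting $p$-unit, 2-layer ReLU representation against direct interpolation.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def pwl_to_relu(sL, bps, sR):
    """2-layer ReLU representation of a p-piece PWL function given by its
    leftmost slope sL, breakpoints bps = [(x_1, y_1), ..., (x_{p-1}, y_{p-1})]
    (sorted by x), and rightmost slope sR.  Uses one "right flap" at the last
    breakpoint and one "left flap" per breakpoint: p hidden ReLU units total."""
    xs = np.array([x for x, _ in bps], dtype=float)
    v = np.array([y for _, y in bps], dtype=float)
    c = v[-1]                       # move the last value to 0 via an output bias
    v = v - c
    q = len(xs)                     # q = p - 1 breakpoints
    t = np.zeros(q)                 # left-flap slopes t_1, ..., t_q
    # triangular system: sum_{j > i} t_j (x_i - x_j) = v_i for i = 1, ..., q-1
    for i in range(q - 2, -1, -1):
        rest = sum(t[j] * (xs[i] - xs[j]) for j in range(i + 2, q))
        t[i + 1] = (v[i] - rest) / (xs[i] - xs[i + 1])
    t[0] = sL - t[1:].sum()         # leftmost-slope condition
    def f_hat(x):
        return (c + sR * relu(x - xs[-1])
                  + sum(-t[j] * relu(xs[j] - x) for j in range(q)))
    return f_hat

# Check against direct interpolation on a 4-piece example.
bps = [(-1.0, 2.0), (0.0, 0.5), (2.0, 1.0)]
f_hat = pwl_to_relu(sL=-3.0, bps=bps, sR=0.5)
def f_direct(x):
    xs = np.array([-1.0, 0.0, 2.0]); ys = np.array([2.0, 0.5, 1.0])
    if x <= xs[0]: return ys[0] - 3.0 * (x - xs[0])
    if x >= xs[-1]: return ys[-1] + 0.5 * (x - xs[-1])
    return np.interp(x, xs, ys)
print(max(abs(f_hat(x) - f_direct(x)) for x in np.linspace(-4, 5, 200)))  # ~0
```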

One can do better in terms of size when the rightmost piece of the given function is flat, i.e., $s_R = 0$. In this case the single function of the form (A.1) is identically zero and can be dropped, so the decomposition above has size $p - 1$. A similar construction can be done when $s_L = 0$. This gives the following statement, which will be useful for constructing our forthcoming hard functions.

Corollary A.1.

If the rightmost or leftmost piece of an $\mathbb{R} \to \mathbb{R}$ piecewise linear function has $0$ slope, then we can compute such a $p$-piece function using a 2-layer DNN with size $p - 1$.

Proof of theorem 2.3.

Since any piecewise linear function is representable by a ReLU DNN by Corollary 2.1, the proof simply follows from the fact that the family of continuous piecewise linear functions is dense in any $L^q(\mathbb{R}^n)$ space, for $1 \le q < \infty$. ∎

Appendix B Benefits of Depth

B.1 Constructing a continuum of hard functions for $\mathbb{R} \to \mathbb{R}$ ReLU DNNs at every depth and every width

Lemma B.1.

For any $M > 0$, $p \in \mathbb{N}$, $k \in \mathbb{N}$ and $a^1, \ldots, a^k \in \Delta_M^p$, if we compose the functions $h_{a^1}, h_{a^2}, \ldots, h_{a^k}$, the resulting function

$H_{a^1, \ldots, a^k} = h_{a^k} \circ h_{a^{k-1}} \circ \cdots \circ h_{a^1}$

is piecewise linear with at most $(p+1)^k + 2$ pieces, with $(p+1)^k$ of these pieces in the range $[0, M]$ (see Figure 2). Moreover, in each piece in the range $[0, M]$, the function is affine with minimum value $0$ and maximum value $M$.

Proof.

Simple induction on $k$. ∎

Proof of Theorem 3.2.

Given $k \ge 1$ and $w \ge 2$, choose any point

$(a^1, \ldots, a^k) \in \bigcup_{M > 0} \underbrace{\Delta_M^{w-1} \times \cdots \times \Delta_M^{w-1}}_{k \text{ times}}.$

By Definition 8, each $h_{a^i}$, $i = 1, \ldots, k$, is a piecewise linear function with $w+1$ pieces and the leftmost piece having slope $0$. Thus, by Corollary A.1, each $h_{a^i}$ can be represented by a 2-layer ReLU DNN with size $w$. Using Lemma D.1, $H_{a^1, \ldots, a^k} = h_{a^k} \circ \cdots \circ h_{a^1}$ can be represented by a $(k+1)$-layer DNN with size $wk$; in fact, each hidden layer has exactly $w$ nodes. ∎

Proof of Theorem 3.1.

Follows from Theorem 3.2 and Lemma D.6. ∎

Figure 2: Top: $h_{a^1}$ with 3 pieces in the range $[0, 1]$. Middle: $h_{a^2}$ with 2 pieces in the range $[0, 1]$. Bottom: the composition $h_{a^2} \circ h_{a^1}$ with $2 \cdot 3 = 6$ pieces in the range $[0, 1]$. The dotted line in the bottom panel corresponds to the function in the top panel. It shows that for every piece of the dotted graph, there is a full copy of the graph in the middle panel.
Proof of Theorem 3.5.

Given and define and where . Thus, is representable by a ReLU DNN of width and depth by Lemma D.1. In what follows, we want to give a lower bound on the distance of from any continuous -piecewise linear comparator . The function