1 Introduction
Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks such as classification. Neural networks with a single hidden layer of finite size can represent any continuous function on a compact subset of $\mathbb{R}^n$ arbitrarily well. This universal approximation result was first given by Cybenko in 1989 for sigmoidal activation functions (Cybenko, 1989), and later generalized by Hornik to an arbitrary bounded and nonconstant activation function Hornik (1991). Furthermore, neural networks have finite VC dimension (depending polynomially on the number of edges in the network), and are therefore PAC (probably approximately correct) learnable using a sample of size that is polynomial in the size of the networks
Anthony & Bartlett (1999). However, neural-network-based methods were shown to be computationally hard to learn (Anthony & Bartlett, 1999) and had mixed empirical success. Consequently, DNNs fell out of favor by the late 90s. Recently, there has been a resurgence of DNNs with the advent of deep learning LeCun et al. (2015). Deep learning, loosely speaking, refers to a suite of computational techniques that have been developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave empirical evidence that if DNNs are initialized properly (for instance, using unsupervised pretraining), then we can find good solutions in a reasonable amount of runtime. This work was soon followed by a series of early successes of deep learning at significantly improving the state-of-the-art in speech recognition Hinton et al. (2012). Since then, deep learning has received immense attention from the machine learning community, with several state-of-the-art AI systems in speech recognition, image classification, and natural language processing based on deep neural nets
Hinton et al. (2012); Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less evidence now that pretraining actually helps, several other solutions have since been put forth to address the issue of efficiently training DNNs. These include heuristics such as dropout Srivastava et al. (2014), but also alternative deep architectures such as convolutional neural networks Sermanet et al. (2014); Hinton et al. (2006), and deep Boltzmann machines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable – the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., $\sigma(x) = \max\{0, x\}$, which is the focus of study in this paper.

In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by recent attempts to understand the reason behind the successes of deep learning, both in terms of the structure of the functions represented by DNNs Telgarsky (2015; 2016); Kane & Williams (2015); Shamir (2016), as well as efforts to better understand the non-convex nature of the training problem of DNNs Kawaguchi (2016); Haeffele & Vidal (2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka & Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep and fascinating field. In particular, our gap results are inspired by results like the ones by Hastad Hastad (1986), Razborov Razborov (1987) and Smolensky Smolensky (1987), which show a strict separation of complexity classes. We make progress towards similar statements with deep neural nets with ReLU activation.
1.1 Notation and Definitions
We extend the ReLU activation function $\sigma(x) := \max\{0, x\}$ to vectors $x \in \mathbb{R}^n$ through entrywise operation: $\sigma(x) = (\max\{0, x_1\}, \max\{0, x_2\}, \ldots, \max\{0, x_n\})$. For any $m, n \in \mathbb{N}$, let $\mathcal{A}^{n,m}$ and $\mathcal{L}^{n,m}$ denote the class of affine and linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$, respectively.
Definition 1.
[ReLU DNNs, depth, width, size] For any number of hidden layers $k \in \mathbb{N}$, input and output dimensions $w_0, w_{k+1} \in \mathbb{N}$, an $\mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ ReLU DNN is given by specifying a sequence of $k$ natural numbers $w_1, w_2, \ldots, w_k$ representing widths of the hidden layers, a set of $k$ affine transformations $T_i : \mathbb{R}^{w_{i-1}} \to \mathbb{R}^{w_i}$ for $i = 1, \ldots, k$, and a linear transformation $T_{k+1} : \mathbb{R}^{w_k} \to \mathbb{R}^{w_{k+1}}$ corresponding to weights of the hidden layers. Such a ReLU DNN is called a $(k+1)$-layer ReLU DNN, and is said to have $k$ hidden layers. The function $f : \mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ computed or represented by this ReLU DNN is
(1.1) $$f = T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ T_2 \circ \sigma \circ T_1,$$
where $\circ$ denotes function composition. The depth of a ReLU DNN is defined as $k+1$. The width of a ReLU DNN is $\max\{w_1, \ldots, w_k\}$. The size of the ReLU DNN is $w_1 + w_2 + \cdots + w_k$.
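To make Definition 1 concrete, the following is a minimal pure-Python sketch of the forward computation $T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ \sigma \circ T_1$; the helper names and the list-of-pairs encoding of the layers are our own illustrative choices, not notation from the paper.

```python
def relu(v):
    """Entrywise ReLU activation: sigma(v)_i = max(0, v_i)."""
    return [max(0.0, x) for x in v]

def affine(W, b, v):
    """Apply the affine map v -> Wv + b (W given as a list of rows)."""
    return [sum(w * x for w, x in zip(row, v)) + bi for row, bi in zip(W, b)]

def relu_dnn(layers, v):
    """Evaluate T_{k+1} o sigma o T_k o ... o sigma o T_1 at v.

    `layers` is a list of (W, b) pairs; the final pair is the linear
    output transformation and is applied without a ReLU.
    """
    for W, b in layers[:-1]:
        v = relu(affine(W, b, v))
    W, b = layers[-1]
    return affine(W, b, v)

# A 2-layer (one hidden layer) ReLU DNN of width 2 computing |x|,
# using the identity |x| = max(0, x) + max(0, -x):
abs_net = [([[1.0], [-1.0]], [0.0, 0.0]),   # hidden layer T_1, width 2
           ([[1.0, 1.0]], [0.0])]           # output layer T_2 (linear)
print(relu_dnn(abs_net, [-3.0]))  # -> [3.0]
```

The example network has depth 2, width 2, and size 2 in the terminology of Definition 1.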
Definition 2.
We denote the class of $\mathbb{R}^{w_0} \to \mathbb{R}^{w_{k+1}}$ ReLU DNNs with $k$ hidden layers of widths $w_1, \ldots, w_k$ by $\mathcal{F}_{\{w_0, w_1, \ldots, w_{k+1}\}}$, i.e.
(1.2) $$\mathcal{F}_{\{w_0, \ldots, w_{k+1}\}} := \left\{ T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ \sigma \circ T_1 \;:\; T_i \in \mathcal{A}^{w_{i-1}, w_i} \text{ for } i = 1, \ldots, k, \; T_{k+1} \in \mathcal{L}^{w_k, w_{k+1}} \right\}.$$
Definition 3.
[Piecewise linear functions] We say a function $f : \mathbb{R}^n \to \mathbb{R}$ is continuous piecewise linear (PWL) if there exists a finite set of polyhedra whose union is $\mathbb{R}^n$, and $f$ is affine linear over each polyhedron (note that the definition automatically implies continuity of the function, because the affine regions are closed and cover $\mathbb{R}^n$, and affine functions are continuous). The number of pieces of $f$ is the number of maximal connected subsets of $\mathbb{R}^n$ over which $f$ is affine linear (which is finite).
Many of our important statements will be phrased in terms of the following simplex.
Definition 4.
Let $M > 0$ be any positive real number and $p \in \mathbb{N}$ be any natural number. Define the following set:
$$\Delta_M^p := \{ x \in \mathbb{R}^p : 0 < x_1 < x_2 < \cdots < x_p < M \}.$$
2 Exact characterization of function class represented by ReLU DNNs
One of the main advantages of DNNs is that they can represent a large family of functions with a relatively small number of parameters. In this section, we give an exact characterization of the functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU DNNs, specifically their depth and width, affect their expressive power. It is clear from the definition that any function represented by a ReLU DNN is a continuous piecewise linear (PWL) function. In what follows, we show that the converse is also true, that is, any PWL function is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one correspondence between the class of ReLU DNNs and PWL functions.
Theorem 2.1.
Every $\mathbb{R}^n \to \mathbb{R}$ ReLU DNN represents a piecewise linear function, and every piecewise linear function $\mathbb{R}^n \to \mathbb{R}$ can be represented by a ReLU DNN with at most $\lceil \log_2(n+1) \rceil + 1$ depth.
Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To see the converse, we first note that any PWL function can be represented as a linear combination of piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every piecewise linear function $f : \mathbb{R}^n \to \mathbb{R}$, there exists a finite set of affine linear functions $\ell_1, \ldots, \ell_k$ and subsets $S_1, \ldots, S_p \subseteq \{1, \ldots, k\}$ (not necessarily disjoint), where each $S_j$ is of cardinality at most $n+1$, such that
(2.1) $$f = \sum_{j=1}^{p} s_j \left( \max_{i \in S_j} \ell_i \right),$$
where $s_j \in \{-1, +1\}$ for all $j = 1, \ldots, p$. Since a function of the form $\max_{i \in S_j} \ell_i$ is a piecewise linear convex function with at most $n+1$ pieces (because $|S_j| \le n+1$), Equation (2.1) says that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear combination of piecewise linear convex functions, each of which has at most $n+1$ affine pieces. Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material) show that composition, addition, and pointwise maximum of PWL functions are also representable by ReLU DNNs. In particular, in Lemma D.3 we note that $\max\{x, y\}$ is implementable by a two-layer ReLU network, and use this construction in an inductive manner to show that the maximum of $n+1$ numbers can be computed using a ReLU DNN with depth at most $\lceil \log_2(n+1) \rceil$.
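As an illustration of the inductive max construction mentioned in the sketch (Lemma D.3), here is a pure-Python sketch, with helper names of our own choosing: a one-hidden-layer, width-3 ReLU gadget for the maximum of two numbers, paired up recursively so that the maximum of $n$ numbers takes $O(\log n)$ such stages.

```python
def relu(t):
    return max(0.0, t)

def max_two(x, y):
    """max{x, y} as a one-hidden-layer ReLU net of width 3:
    max{x, y} = y + max(0, x - y), where the linear pass-through of y
    is realized as max(0, y) - max(0, -y)."""
    h1, h2, h3 = relu(x - y), relu(y), relu(-y)   # hidden layer
    return h1 + h2 - h3                           # linear output layer

def max_n(values):
    """Maximum of n numbers by pairing up and recursing: O(log n) stages,
    hence a ReLU DNN of logarithmic depth."""
    vals = list(values)
    while len(vals) > 1:
        vals = [max_two(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

print(max_n([3.0, -1.0, 7.5, 2.0]))  # -> 7.5
```

Each pairing stage halves the number of values, which is exactly the source of the $\lceil \log_2(n+1) \rceil$ depth bound used in the proof sketch.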
While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all continuous piecewise linear functions on $\mathbb{R}^n$, it does not give any tight bounds on the size of the networks that are needed to represent a given piecewise linear function. For $n = 1$, we give tight bounds on size as follows:
Theorem 2.2.
Given any piecewise linear function $f : \mathbb{R} \to \mathbb{R}$ with $p$ pieces, there exists a 2-layer DNN with at most $p$ nodes that can represent $f$. Moreover, any 2-layer DNN that represents $f$ has size at least $p - 1$.
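A hands-on way to see the upper-bound direction of Theorem 2.2 is to assemble a one-hidden-layer net directly from the breakpoint data of a 1-D PWL function. The sketch below is our own parametrization, not the paper's proof: it spends two ReLU nodes on the leftmost linear piece and one node per breakpoint, so it uses $p+1$ hidden nodes, one more than the tight bound proved in the paper.

```python
def relu(t):
    return max(0.0, t)

def pwl_net(y1, slopes, breaks):
    """One-hidden-layer ReLU net for the continuous PWL function with
    piece slopes slopes[0], ..., slopes[p-1] (left to right),
    breakpoints breaks[0] < ... < breaks[p-2], and value y1 at breaks[0].

    Uses p+1 hidden nodes: two realize the leftmost linear piece, and
    each breakpoint contributes one slope-change node.
    """
    s0, b1 = slopes[0], breaks[0]
    def f(x):
        # leftmost piece: y1 + s0*(x - b1), split into two ReLU nodes
        out = y1 + s0 * relu(x - b1) - s0 * relu(b1 - x)
        # one node per breakpoint, weighted by the slope change there
        for s_prev, s_next, b in zip(slopes, slopes[1:], breaks):
            out += (s_next - s_prev) * relu(x - b)
        return out
    return f

# |x|: slopes (-1, +1), a single breakpoint at 0, and value 0 there.
f = pwl_net(0.0, [-1.0, 1.0], [0.0])
print(f(-4.0), f(4.0))  # -> 4.0 4.0
```

The lower-bound direction (size at least $p-1$) is not visible from this construction; it comes from counting how many pieces a 2-layer net of a given size can produce.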
Finally, the main result of this section follows from Theorem 2.1 and the well-known facts that the piecewise linear functions are dense in the family of compactly supported continuous functions, and the family of compactly supported continuous functions is dense in $L^1(\mathbb{R}^n)$ (Royden & Fitzpatrick, 2010). Recall that $L^1(\mathbb{R}^n)$ is the space of Lebesgue-integrable functions $f$ such that $\int |f(x)| \, dx < \infty$, where the integral is with respect to the Lebesgue measure on $\mathbb{R}^n$ (see Royden & Fitzpatrick (2010)).
Theorem 2.3.
Every function in $L^1(\mathbb{R}^n)$ can be arbitrarily well approximated in the $L^1$ norm (which for a function $f$ is given by $\|f\|_1 = \int |f(x)| \, dx$) by a ReLU DNN function with at most $\lceil \log_2(n+1) \rceil$ hidden layers. Moreover, for $n = 1$, any such function can be arbitrarily well approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the approximation.
Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem 4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of Wang (Wang, 2004) for obtaining their result. In subsequent work, Boris Hanin (Hanin, 2017) has, among other things, found width and depth upper bounds for ReLU net representation of positive PWL functions on $\mathbb{R}^n$. The width upper bound is $n+3$ for general positive PWL functions and $n+1$ for convex positive PWL functions. For convex positive PWL functions his depth upper bound is sharp if we disallow dead ReLUs.
3 Benefits of Depth
Success of deep learning has been largely attributed to the depth of the networks, i.e., the number of successive affine transformations followed by nonlinearities, which is shown to extract hierarchical features from the data. In contrast, traditional machine learning frameworks including support vector machines, generalized linear models, and kernel machines can be seen as instances of shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction. In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we provide a smoothly parametrized family of "hard" functions representable by ReLU DNNs, which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we construct a continuum of "hard" functions representable by ReLU DNNs, which to the best of our knowledge is the first explicit construction of ReLU DNN functions whose number of affine pieces grows exponentially with input dimension. The proofs of the theorems in this section are provided in Appendix B.
3.1 Circuit lower bounds for ReLU DNNs
In this section, we are only concerned with $\mathbb{R} \to \mathbb{R}$ ReLU DNNs, i.e., both input and output dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting.
Theorem 3.1.
For every pair of natural numbers $k \ge 1$, $w \ge 2$, there exists a family of hard functions representable by a $(k+1)$-layer ReLU DNN of width $w$ such that if it is also representable by a $(k'+1)$-layer ReLU DNN for any $k' \le k$, then this $(k'+1)$-layer ReLU DNN has size at least $\frac{1}{2} k' w^{k/k'} - 1$.
In fact, our family of hard functions described above has a very intricate structure, as stated below.
Theorem 3.2.
For every $k \ge 1$, $w \ge 2$, every member of the family of hard functions in Theorem 3.1 has $w^k$ pieces, and this family can be parametrized by
(3.1) $$\bigcup_{M > 0} \underbrace{\Delta_M^w \times \Delta_M^w \times \cdots \times \Delta_M^w}_{k \text{ times}},$$
i.e., for every point in the set above, there exists a distinct function with the stated properties.
The following is an immediate corollary of Theorem 3.1 by choosing the parameters carefully.
Corollary 3.3.
For every $k \in \mathbb{N}$ and $\varepsilon > 0$, there is a family of functions defined on the real line such that every function $f$ from this family can be represented by a $(k^{1+\varepsilon}+1)$-layer DNN with size $k^{2+\varepsilon}$, and if $f$ is represented by a $(k+1)$-layer DNN, then this DNN must have size at least $\frac{1}{2} k \cdot k^{k^{\varepsilon}} - 1$. Moreover, this family can be parametrized as $\bigcup_{M>0} \underbrace{\Delta_M^k \times \cdots \times \Delta_M^k}_{k^{1+\varepsilon} \text{ times}}$.
A particularly illuminative special case is obtained by setting $\varepsilon = 1$ in Corollary 3.3:
Corollary 3.4.
For every natural number $k$, there is a family of functions parameterized by the set $\bigcup_{M>0} \underbrace{\Delta_M^k \times \cdots \times \Delta_M^k}_{k^2 \text{ times}}$ such that any $f$ from this family can be represented by a $(k^2+1)$-layer DNN with $k^3$ nodes, and every $(k+1)$-layer DNN that represents $f$ needs at least $\frac{1}{2} k^{k+1} - 1$ nodes.
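The qualitative phenomenon behind these corollaries — composition across depth multiplying the number of affine pieces — can be checked numerically on a Telgarsky-style "tent" map, which is an illustrative stand-in for (not a member of) the hard family of Theorem 3.1; the helper names below are ours.

```python
def tent(x):
    """A width-2, one-hidden-layer ReLU function: on [0, 1] it is the
    tent map t(x) = 2*relu(x) - 4*relu(x - 1/2), with 2 pieces there."""
    return 2 * max(0.0, x) - 4 * max(0.0, x - 0.5)

def iterate(f, k, x):
    """Compose f with itself k times: a depth-k stack of tent layers."""
    for _ in range(k):
        x = f(x)
    return x

def count_pieces(f, lo=0.0, hi=1.0, n=1 << 12):
    """Count affine pieces of f on [lo, hi] by counting slope changes on
    a fine dyadic grid (exact here because all breakpoints are dyadic)."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    slopes = [round((ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]), 6)
              for i in range(n)]
    changes = sum(1 for a, b in zip(slopes, slopes[1:]) if a != b)
    return changes + 1

for k in range(1, 5):
    print(k, count_pieces(lambda x: iterate(tent, k, x)))  # pieces = 2^k
```

Each additional composed layer doubles the piece count, so depth buys pieces exponentially while width only buys them linearly — the engine behind the depth-size gaps above.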
We can also get hardness-of-approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4, with the same gaps (up to constant terms), using the following theorem.
Theorem 3.5.
For every $k \ge 1$, $w \ge 2$, there exists a function $h_{k,w}$ that can be represented by a $(k+1)$-layer ReLU DNN with $w$ nodes in each layer, such that for all $\delta > 0$ and $1 \le k' \le k$ the following holds:
$$\inf_{g \in \mathcal{G}_{k',\delta}} \int_{0}^{1} \left| h_{k,w}(x) - g(x) \right| \, dx > \delta,$$
where $\mathcal{G}_{k',\delta}$ is the family of functions representable by ReLU DNNs with depth at most $k'+1$, and size at most an explicit threshold growing as $k' w^{k/k'}$ (up to a factor depending on $\delta$).
The depth-size trade-off results in Theorems 3.1 and 3.5 extend and improve Telgarsky's theorems from (Telgarsky, 2015; 2016) in the following three ways:

If we apply our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem 1.1 of Telgarsky (2016), which are at depths $k^3$ and $k$ (of size also scaling as $k^3$), then for this purpose of approximation in the $\ell_1$ norm we would get a size lower bound for the shallower net which scales as $\Omega(2^{k^2})$, which is exponentially (in depth) larger than the lower bound of $\Omega(2^k)$ that Telgarsky can get for this scenario.

Telgarsky’s family of hard functions is parameterized by a single natural number $k$. In contrast, we show that for every pair of natural numbers $w$ and $k$, and every point from the set in equation 3.1, there exists a “hard” function which, to be represented by a depth $k'+1$ network, would need a size of at least $\frac{1}{2} k' w^{k/k'} - 1$. With the extra flexibility of choosing the parameter $w$, for the purpose of showing gaps in the representation ability of deep nets we can show size lower bounds which are super-exponential in depth, as explained in Corollaries 3.3 and 3.4.

A characteristic feature of the “hard” functions in Boolean circuit complexity is that they are usually a countable family of functions and not a “smooth” family of hard functions. In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural nets research. In contrast, we provide a smoothly parameterized family of “hard” functions in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions was not demonstrated before this work.
We point out that Telgarsky’s results in (Telgarsky, 2016) apply to deep neural nets with a host of different activation functions, whereas our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan and Shamir (Shamir, 2016; Eldan & Shamir, 2016) show that there exists an $\mathbb{R}^n \to \mathbb{R}$ function that can be represented by a 3-layer DNN, but that requires a number of nodes exponential in $n$ to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant-depth hierarchy statement analogous to the recent result of Rossman et al. (Rossman et al., 2015). We also note that in the last few years, there has been much effort in the community to show size lower bounds on ReLU DNNs trying to approximate various classes of functions which are themselves not necessarily exactly representable by ReLU DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017).
3.2 A continuum of hard functions for $\mathbb{R}^n \to \mathbb{R}$ for $n \ge 2$
One measure of complexity of a family of “hard” functions represented by ReLU DNNs is the asymptotics of the number of pieces as a function of the input dimension $n$, and the depth and size of the ReLU DNNs. More precisely, suppose one has a family $\mathcal{H}$ of functions such that for every $n, k, w \in \mathbb{N}$ the family contains at least one $\mathbb{R}^n \to \mathbb{R}$ function representable by a ReLU DNN with depth at most $k+1$ and maximum width at most $w$. The following definition formalizes a notion of complexity for such an $\mathcal{H}$.
Definition 5 ($\mathrm{comp}_{\mathcal{H}}(n, k, w)$).
The measure $\mathrm{comp}_{\mathcal{H}}(n, k, w)$ is defined as the maximum number of pieces (see Definition 3) of an $\mathbb{R}^n \to \mathbb{R}$ function from $\mathcal{H}$ that can be represented by a ReLU DNN with depth at most $k+1$ and maximum width at most $w$.
Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. (2013); Raghu et al. (2016). The best known families are the ones from Theorem 4 of (Montufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to $k$ layers of ReLU activations with width $w$; these constructions achieve $\lfloor w/n \rfloor^{(k-1)n} \sum_{j=0}^{n} \binom{w}{j}$ and $w^k$ pieces, respectively. At the end of this section we explain the precise sense in which we improve on these numbers. An analysis of this complexity measure is also done using integer programming techniques in (Serra et al., 2017).
Definition 6.
Let $b^1, \ldots, b^m \in \mathbb{R}^n$. The zonotope formed by $b^1, \ldots, b^m$ is defined as
$$Z(b^1, \ldots, b^m) := \{ \lambda_1 b^1 + \cdots + \lambda_m b^m : -1 \le \lambda_i \le 1 \text{ for all } i = 1, \ldots, m \}.$$
The set of vertices of $Z(b^1, \ldots, b^m)$ will be denoted by $\mathrm{vert}(Z(b^1, \ldots, b^m))$. The support function $\gamma_{Z(b^1, \ldots, b^m)} : \mathbb{R}^n \to \mathbb{R}$ associated with the zonotope $Z(b^1, \ldots, b^m)$ is defined as
$$\gamma_{Z(b^1, \ldots, b^m)}(r) := \max_{x \in Z(b^1, \ldots, b^m)} \langle r, x \rangle.$$
The following results are well-known in the theory of zonotopes (Ziegler, 1995).
Theorem 3.6.
The following are all true.

$|\mathrm{vert}(Z(b^1, \ldots, b^m))| \le 2 \sum_{i=0}^{n-1} \binom{m-1}{i}$. The set of $(b^1, \ldots, b^m) \in \mathbb{R}^{n \times m}$ such that this does not hold at equality is a 0 measure set.

$\gamma_{Z(b^1, \ldots, b^m)}(r) = \max_{v \in \mathrm{vert}(Z(b^1, \ldots, b^m))} \langle r, v \rangle$, and $\gamma_{Z(b^1, \ldots, b^m)}$ is therefore a piecewise linear function with $|\mathrm{vert}(Z(b^1, \ldots, b^m))|$ pieces.

$\gamma_{Z(b^1, \ldots, b^m)}(r) = |\langle r, b^1 \rangle| + \cdots + |\langle r, b^m \rangle|$.
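Part 3 of Theorem 3.6 is easy to verify computationally: the support function is a sum of absolute values of inner products with the generators, hence a one-hidden-layer ReLU net with $2m$ nodes. The sketch below (helper names ours; it assumes the symmetric zonotope $\{\sum_i \lambda_i b^i : \lambda_i \in [-1, 1]\}$ of Definition 6) compares that formula against brute-force maximization over all sign patterns, which include every vertex.

```python
from itertools import product

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def support_relu(generators, x):
    """h_Z(x) = sum_i |<b_i, x>|, written as 2m ReLU nodes:
    |t| = relu(t) + relu(-t)."""
    return sum(max(0.0, dot(b, x)) + max(0.0, -dot(b, x))
               for b in generators)

def support_bruteforce(generators, x):
    """max_{v in Z} <v, x>, maximized over all 2^m sign patterns
    (the vertices of Z are among these points)."""
    dim = len(x)
    return max(dot([sum(s * b[j] for s, b in zip(signs, generators))
                    for j in range(dim)], x)
               for signs in product([-1, 1], repeat=len(generators)))

gens = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
x = (2.0, -3.0)
print(support_relu(gens, x), support_bruteforce(gens, x))  # -> 6.0 6.0
```

This is exactly the construction behind Lemma 3.7: the $2m$ absolute-value nodes form the single hidden layer representing $\gamma_{Z(b^1,\ldots,b^m)}$.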
Definition 7 (extremal zonotope set).
The set $S(n, m)$ will denote the set of $(b^1, \ldots, b^m) \in \mathbb{R}^{n \times m}$ such that $|\mathrm{vert}(Z(b^1, \ldots, b^m))| = 2 \sum_{i=0}^{n-1} \binom{m-1}{i}$. $S(n, m)$ is the so-called “extremal zonotope set”, which is a subset of $\mathbb{R}^{n \times m}$, whose complement has zero Lebesgue measure in $\mathbb{R}^{n \times m}$.
Lemma 3.7.
Given any $b^1, \ldots, b^m \in \mathbb{R}^n$, there exists a 2-layer ReLU DNN with size $2m$ which represents the function $\gamma_{Z(b^1, \ldots, b^m)}(r)$.
Definition 8.
For $M > 0$, $w \in \mathbb{N}$ and $a \in \Delta_M^w$, we define a function $h_a : \mathbb{R} \to \mathbb{R}$ which is piecewise linear over the segments $(-\infty, 0], [0, a_1], [a_1, a_2], \ldots, [a_{w-1}, a_w], [a_w, +\infty)$, defined as follows: $h_a(x) = 0$ for all $x \le 0$, $h_a(a_i) = M(i \bmod 2)$ for $i = 1, \ldots, w$, and for $x > a_w$, $h_a$ is a linear continuation of the piece over the interval $[a_{w-1}, a_w]$. Note that the function $h_a$ has $w+2$ pieces, with the leftmost piece having slope 0. Furthermore, for $a^1, \ldots, a^k \in \Delta_M^w$, we denote the composition of the functions $h_{a^1}, \ldots, h_{a^k}$ by
$$H_{a^1, \ldots, a^k} := h_{a^k} \circ h_{a^{k-1}} \circ \cdots \circ h_{a^1}.$$
Proposition 3.8.
Given any tuple $(b^1, \ldots, b^m) \in S(n, m)$ and any point
$$(a^1, \ldots, a^k) \in \bigcup_{M > 0} \underbrace{\Delta_M^w \times \cdots \times \Delta_M^w}_{k \text{ times}},$$
the function $H_{a^1, \ldots, a^k} \circ \gamma_{Z(b^1, \ldots, b^m)}$ has $(w+1)^k \cdot 2 \sum_{i=0}^{n-1} \binom{m-1}{i}$ pieces, and it can be represented by a $(k+2)$-layer ReLU DNN with size $2m + wk$.
Finally, we are ready to state the main result of this section.
Theorem 3.9.
For every tuple of natural numbers $n, m, w \ge 1$ and $k \ge 1$, there exists a family of $\mathbb{R}^n \to \mathbb{R}$ functions, which we call $\mathrm{ZONOTOPE}^n_{k,w,m}$, with the following properties:

Every $H \in \mathrm{ZONOTOPE}^n_{k,w,m}$ is representable by a ReLU DNN of depth $k+2$ and size $2m + wk$, and has $(w+1)^k \cdot 2 \sum_{i=0}^{n-1} \binom{m-1}{i}$ pieces.

Consider any $H \in \mathrm{ZONOTOPE}^n_{k,w,m}$. If $H$ is represented by a $(k'+1)$-layer DNN for any $k' \le k$, then this $(k'+1)$-layer DNN has size at least $\frac{1}{2} k' \left( (w+1)^k \cdot 2 \sum_{i=0}^{n-1} \binom{m-1}{i} \right)^{1/k'} - 1$.

The family $\mathrm{ZONOTOPE}^n_{k,w,m}$ is in one-to-one correspondence with $S(n, m) \times \bigcup_{M > 0} \underbrace{\Delta_M^w \times \cdots \times \Delta_M^w}_{k \text{ times}}$.
Comparison to the results in (Montufar et al., 2014)
Firstly, we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality $n$. In contrast, we do not impose such restrictions, and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cannot be seen from their result.
Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better; one such regime arises by choosing the parameters $m$ and $w$ appropriately in our construction.
Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions, other than by introducing small perturbations of the construction in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus.
4 Training 2-layer ReLU DNNs to global optimality
In this section we consider the following empirical risk minimization problem. Given $D$ data points $(x_i, y_i) \in \mathbb{R}^n \times \mathbb{R}$, $i = 1, \ldots, D$, find the function $f$ represented by 2-layer $\mathbb{R}^n \to \mathbb{R}$ ReLU DNNs of width $w$ that minimizes the following optimization problem:
(4.1) $$\min_{T_1 \in \mathcal{A}^{n,w},\; T_2 \in \mathcal{L}^{w,1}} \; \frac{1}{D} \sum_{i=1}^{D} \ell\big( (T_2 \circ \sigma \circ T_1)(x_i),\; y_i \big),$$
where $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a convex loss function (common loss functions are the squared loss, $\ell(\hat{y}, y) = (\hat{y} - y)^2$, and the hinge loss function given by $\ell(\hat{y}, y) = \max\{0, 1 - \hat{y} y\}$). Our main result of this section gives an algorithm to solve the above empirical risk minimization problem to global optimality.

Theorem 4.1.
There exists an algorithm to find a global optimum of Problem 4.1 in time $O\big(2^w D^{nw} \,\mathrm{poly}(D, n, w)\big)$. Note that the running time is polynomial in the data size $D$ for fixed $n, w$.
Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch of the proof. When the empirical risk minimization problem is viewed as an optimization problem in the space of weights of the ReLU DNN, it is a non-convex, quadratic problem. However, one can instead search over the space of functions representable by 2-layer DNNs by writing them in a form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a convex problem that is essentially linear regression with linear inequality constraints. This enables us to guarantee global optimality.
Let $T_1(x) = Ax + b$ and $T_2(y) = a' \cdot y$ for $A \in \mathbb{R}^{w \times n}$, $b \in \mathbb{R}^w$ and $a' \in \mathbb{R}^w$. If we denote the $i$-th row of the matrix $A$ by $a^i$, and write $b_i$ and $a'_i$ to denote the $i$-th coordinates of the vectors $b$ and $a'$ respectively, then due to the homogeneity of ReLU gates, the network output can be represented as
$$f(x) = \sum_{i=1}^{w} s_i \max\{0, \langle a^i, x \rangle + b_i\},$$
where $a^i \in \mathbb{R}^n$, $b_i \in \mathbb{R}$ and $s_i \in \{-1, +1\}$ for all $i = 1, \ldots, w$. For any hidden node $i$, the pair $(a^i, b_i)$ induces a partition $(P^i_+, P^i_-)$ on the dataset, given by $P^i_- := \{ j : \langle a^i, x_j \rangle + b_i \le 0 \}$ and $P^i_+ := \{1, \ldots, D\} \setminus P^i_-$. Algorithm 1 proceeds by generating all combinations of the partitions $(P^i_+, P^i_-)$ as well as the top layer weights $s \in \{-1, +1\}^w$, and minimizing the loss $\frac{1}{D} \sum_{j=1}^{D} \ell(f(x_j), y_j)$ subject to the constraints $\langle a^i, x_j \rangle + b_i \le 0$ for $j \in P^i_-$ and $\langle a^i, x_j \rangle + b_i \ge 0$ for $j \in P^i_+$, which are imposed for all $i = 1, \ldots, w$; this is a convex program.
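Algorithm 1 itself enumerates data partitions and solves linear-inequality-constrained convex programs. As a much smaller illustration of the same two-phase idea — a combinatorial search over hinge configurations followed by a closed-form convex fit — here is a hypothetical 1-D, single-hidden-unit sketch with squared loss; unlike Algorithm 1, it restricts the breakpoint to a finite grid, so it is only approximate in general.

```python
def fit_one_relu(xs, ys, grid):
    """Fit f(x) = m * relu(sign * (x - c)) to (xs, ys) by squared loss.

    Phase 1 (combinatorial): try every breakpoint c in `grid` and both
    hinge orientations; this fixes which data points are 'active'.
    Phase 2 (convex): with the hinge fixed, the model is linear in the
    output weight m, so least squares gives m in closed form.
    """
    best = (float("inf"), None)
    for c in grid:
        for sign in (1.0, -1.0):                        # hinge orientation
            r = [max(0.0, sign * (x - c)) for x in xs]  # hidden-unit outputs
            denom = sum(v * v for v in r)
            m = sum(v * y for v, y in zip(r, ys)) / denom if denom else 0.0
            loss = sum((m * v - y) ** 2 for v, y in zip(r, ys))
            if loss < best[0]:
                best = (loss, (m, sign, c))
    return best

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [0.0, 0.0, 0.0, 1.0, 2.0]          # exactly relu(x)
loss, (m, sign, c) = fit_one_relu(xs, ys, xs)
print(loss, m, sign, c)  # -> 0.0 1.0 1.0 0.0
```

Algorithm 1 replaces the grid by an exact enumeration of the (polynomially many, for fixed $n$) data partitions, which is what makes its guarantee global rather than approximate.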
Algorithm 1 implements the empirical risk minimization (ERM) rule for training a ReLU DNN with one hidden layer. To the best of our knowledge, there is no other known algorithm that solves the ERM problem to global optimality. We note that, due to known hardness results, exponential dependence on the input dimension is unavoidable Blum & Rivest (1992); Shalev-Shwartz & Ben-David (2014); Algorithm 1 runs in time polynomial in the number of data points. To the best of our knowledge, there is no hardness result known which rules out empirical risk minimization of deep nets in time polynomial in circuit size or data size. Thus, our training result is a step towards resolving this gap in the complexity literature.
A related result for improperly learning ReLUs has recently been obtained by Goel et al. (Goel et al., 2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another difference is that their result considers the notion of reliable learning, as opposed to the empirical risk minimization objective considered in (4.1).
5 Discussion
The running time of the algorithm that we give in this work to find the exact global minimum of a two-layer ReLU DNN is exponential in the input dimension $n$ and the number of hidden nodes $w$. The exponential dependence on $n$ cannot be removed unless $P = NP$; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even bigger breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers, and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths, or between logarithmic and constant depths.
Acknowledgments
We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our constructions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI-1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482.
References
 Allender (1998) Eric Allender. Complexity theory lecture notes. https://www.cs.rutgers.edu/~allender/lecture.notes/, 1998.
 Anthony & Bartlett (1999) Martin Anthony and Peter L. Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 1999.
 Arora & Barak (2009) Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
 Blum & Rivest (1992) Avrim L. Blum and Ronald L. Rivest. Training a 3-node neural network is NP-complete. Neural Networks, 5(1):117–127, 1992.

 Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
 Dahl et al. (2013) George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8609–8613. IEEE, 2013.
 DasGupta et al. (1995) Bhaskar DasGupta, Hava T. Siegelmann, and Eduardo Sontag. On the complexity of training neural networks with continuous activation functions. IEEE Transactions on Neural Networks, 6(6):1490–1504, 1995.
 Eldan & Shamir (2016) Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In 29th Annual Conference on Learning Theory, pp. 907–940, 2016.
 Goel et al. (2016) Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in polynomial time. arXiv preprint arXiv:1611.10258, 2016.
 Goodfellow et al. (2013) Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
 Haeffele & Vidal (2015) Benjamin D. Haeffele and René Vidal. Global optimality in tensor factorization, deep learning, and beyond. arXiv preprint arXiv:1506.07540, 2015.
 Hanin (2017) Boris Hanin. Universal function approximation by deep neural nets with bounded width and ReLU activations. arXiv preprint arXiv:1708.02691, 2017.

 Hastad (1986) Johan Hastad. Almost optimal lower bounds for small depth circuits. In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, pp. 6–20. ACM, 1986.
 Hinton et al. (2012) Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
 Hinton et al. (2006) Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
 Hornik (1991) Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
 Jukna (2012) Stasys Jukna. Boolean function complexity: advances and frontiers, volume 27. Springer Science & Business Media, 2012.
 Kane & Williams (2015) Daniel M. Kane and Ryan Williams. Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits. arXiv preprint arXiv:1511.07860, 2015.
 Kawaguchi (2016) Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

 Le (2013) Quoc V. Le. Building high-level features using large scale unsupervised learning. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8595–8598. IEEE, 2013.
 LeCun et al. (2015) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
 Liang & Srikant (2016) Shiyu Liang and R Srikant. Why deep neural networks for function approximation? 2016.
 Matousek (2002) Jiri Matousek. Lectures on discrete geometry, volume 212. Springer Science & Business Media, 2002.
 Montufar et al. (2014) Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pp. 2924–2932, 2014.
 Pascanu et al. (2013) Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of response regions of deep feed forward networks with piecewise linear activations. arXiv preprint arXiv:1312.6098, 2013.
 Raghu et al. (2016) Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.
 Razborov (1987) Alexander A. Razborov. Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Mathematical Notes, 41(4):333–338, 1987.
 Rossman et al. (2015) Benjamin Rossman, Rocco A. Servedio, and Li-Yang Tan. An average-case depth hierarchy theorem for Boolean circuits. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pp. 1030–1048. IEEE, 2015.
 Royden & Fitzpatrick (2010) H.L. Royden and P.M. Fitzpatrick. Real Analysis. Prentice Hall, 2010.
 Safran & Shamir (2017) Itay Safran and Ohad Shamir. Depth-width tradeoffs in approximating natural functions with neural networks. In International Conference on Machine Learning, pp. 2979–2987, 2017.

 Salakhutdinov & Hinton (2009) Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 1, pp. 3, 2009.
 Saptharishi (2014) R. Saptharishi. A survey of lower bounds in arithmetic circuit complexity, 2014.
 Sermanet et al. (2014) Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR 2014). arXiv preprint arXiv:1312.6229, 2014.
 Serra et al. (2017) Thiago Serra, Christian Tjandraatmadja, and Srikumar Ramalingam. Bounding and counting linear regions of deep neural networks. arXiv preprint arXiv:1711.02114, 2017.
 Shalev-Shwartz & Ben-David (2014) Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
 Shamir (2016) Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037, 2016.
 Shpilka & Yehudayoff (2010) Amir Shpilka and Amir Yehudayoff. Arithmetic circuits: A survey of recent results and open questions. Foundations and Trends® in Theoretical Computer Science, 5(3–4):207–388, 2010.
 Smolensky (1987) Roman Smolensky. Algebraic methods in the theory of lower bounds for boolean circuit complexity. In Proceedings of the nineteenth annual ACM symposium on Theory of computing, pp. 77–82. ACM, 1987.
 Srivastava et al. (2014) Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014.
 Telgarsky (2015) Matus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint arXiv:1509.08101, 2015.
 Telgarsky (2016) Matus Telgarsky. Benefits of depth in neural networks. In 29th Annual Conference on Learning Theory, pp. 1517–1539, 2016.
 Wang (2004) Shuning Wang. General constructive representations for continuous piecewise-linear functions. IEEE Transactions on Circuits and Systems I: Regular Papers, 51(9):1889–1896, 2004.

 Wang & Sun (2005) Shuning Wang and Xusheng Sun. Generalization of hinging hyperplanes. IEEE Transactions on Information Theory, 51(12):4425–4431, 2005.
 Yarotsky (2016) Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. arXiv preprint arXiv:1610.01145, 2016.
 Ziegler (1995) Günter M. Ziegler. Lectures on polytopes, volume 152. Springer Science & Business Media, 1995.
Appendix A Expressing piecewise linear functions using ReLU DNNs
Proof of Theorem 2.2.
Any continuous piecewise linear function $\mathbb{R} \to \mathbb{R}$ which has $p$ pieces can be specified by three pieces of information: $s_L$, the slope of the leftmost piece; the coordinates of the non-differentiable points, specified by a $(p-1)$-tuple $\{(a_i, f(a_i))\}_{i=1}^{p-1}$ (indexed from left to right); and $s_R$, the slope of the rightmost piece. A tuple $(s_L, s_R, (a_1, f(a_1)), \ldots, (a_{p-1}, f(a_{p-1})))$ uniquely specifies a piecewise linear function from $\mathbb{R}$ to $\mathbb{R}$, and vice versa. Given such a tuple, we construct a 2-layer DNN which computes the same piecewise linear function.
One notes that for any $a, r \in \mathbb{R}$, the function
(A.1) $$f(x) = \begin{cases} 0 & x \le a, \\ r(x - a) & x > a, \end{cases}$$
is equal to $r \max\{0, x - a\}$, which can be implemented by a 2-layer ReLU DNN with size 1. Similarly, any function of the form
(A.2) $$g(x) = \begin{cases} t(x - a) & x \le a, \\ 0 & x > a, \end{cases}$$
is equal to $-t \max\{0, -(x - a)\}$, which can be implemented by a 2-layer ReLU DNN with size 1. The parameters $r$ and $t$ will be called the slopes of the functions, and $a$ will be called the breakpoint. If we can write the given piecewise linear function as a sum of functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done.
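Both flap types are single ReLU nodes, which can be checked directly; the sketch below uses our own helper names and reproduces a 3-piece PWL function as a sum of two flaps.

```python
def relu(z):
    return max(0.0, z)

def right_flap(a, r):
    """(A.1): zero for x <= a, slope r to the right; one ReLU node,
    since r*(x - a) on x > a equals r * relu(x - a) everywhere."""
    return lambda x: r * relu(x - a)

def left_flap(a, t):
    """(A.2): slope t for x <= a (vanishing at a), zero to the right;
    one ReLU node, since t*(x - a) on x <= a equals -t * relu(a - x)."""
    return lambda x: -t * relu(a - x)

# Sum of two flaps = the 3-piece PWL function with slope -1 left of 0,
# slope 0 on [0, 1], slope 2 right of 1, and value 0 on [0, 1]:
h = lambda x: left_flap(0.0, -1.0)(x) + right_flap(1.0, 2.0)(x)
print(h(-2.0), h(0.5), h(3.0))  # -> 2.0 0.0 4.0
```

Each flap contributes exactly one hidden node, which is why the decomposition described next yields a 2-layer DNN whose size equals the number of flaps.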
It turns out that such a decomposition of any $p$-piece PWL function $f$ as a sum of $p$ flaps can always be arranged, where the breakpoints of the flaps are all contained in the breakpoints of $f$. First, observe that adding a constant to a function does not change the complexity of the ReLU DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume that the value of $f$ at the last breakpoint $a_{p-1}$ is 0. We now use a single function of the form (A.1) with slope $s_R$ and breakpoint $a_{p-1}$, and $p-1$ functions of the form (A.2) with slopes $t_1, \ldots, t_{p-1}$ and breakpoints $a_1, \ldots, a_{p-1}$, respectively.
Thus, we wish to express $f$ as the sum of these $p$ flaps.
Such a decomposition of $f$ would be valid if we can find values for $t_1, \ldots, t_{p-1}$ such that the slope of the above sum is $s_L$ for $x < a_1$, the slope of the above sum is $s_R$ for $x > a_{p-1}$, and for each $i \in \{1, \ldots, p-1\}$ the sum takes the value $f(a_i)$ at $a_i$.
The above corresponds to asking for the existence of a solution to a set of simultaneous linear equations in $t_1, \ldots, t_{p-1}$.
It is easy to verify that this set of simultaneous linear equations has a unique solution: the slope condition to the left of $a_1$ pins down one of the unknowns, and one can then solve for the rest starting from the last equation and back-substituting. The lower bound of $p-1$ on the size of any 2-layer ReLU DNN that expresses a $p$-piece function follows from Lemma D.6. ∎
One can do better in terms of size when the rightmost piece of the given function is flat, i.e., $s_R = 0$. In this case the flap of the form (A.1) is not needed, and the decomposition of $f$ above has size $p-1$. A similar construction can be done when $s_L = 0$. This gives the following statement, which will be useful for constructing our forthcoming hard functions.
Corollary A.1.
If the rightmost or leftmost piece of a piecewise linear function has 0 slope, then we can compute such a $p$-piece function using a 2-layer DNN with size $p-1$.
Appendix B Benefits of Depth
B.1 Constructing a continuum of hard functions for ReLU DNNs at every depth and every width
Lemma B.1.
For any $M > 0$, $w \in \mathbb{N}$, $k \in \mathbb{N}$ and $a^1, \ldots, a^k \in \Delta_M^w$, if we compose the functions $h_{a^1}, h_{a^2}, \ldots, h_{a^k}$, the resulting function is a piecewise linear function with at most $(w+1)^k + 2$ pieces, i.e.,
$$H_{a^1, \ldots, a^k} = h_{a^k} \circ h_{a^{k-1}} \circ \cdots \circ h_{a^1}$$
is piecewise linear with at most $(w+1)^k + 2$ pieces, with $(w+1)^k$ of these pieces in the range $[0, M]$ (see Figure 2). Moreover, in each piece in the range $[0, M]$, the function is affine with minimum value 0 and maximum value $M$.
Proof.
Simple induction on $k$. ∎
Proof of Theorem 3.2.
Given $k \ge 1$ and $w \ge 2$, choose any point