Understanding Deep Neural Networks with Rectified Linear Units

11/04/2016
by Raman Arora, et al.

In this paper we investigate the family of functions representable by deep neural networks (DNNs) with rectified linear units (ReLUs). We give the first-ever polynomial-time (in the size of the data) algorithm to train a ReLU DNN with one hidden layer to global optimality, assuming the input dimension and the number of nodes of the network are fixed constants. We also improve the known lower bounds on size (from exponential to super-exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number k there exists a function representable by a ReLU DNN with k^2 hidden layers and total size k^3, such that any ReLU DNN with at most k hidden layers requires at least (1/2)·k^(k+1) − 1 total nodes. Finally, we construct a family of R^n → R piecewise linear functions for n ≥ 2 (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension n at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on n has not been achieved by previous families of "hard" functions in the neural nets literature. This construction uses the theory of zonotopes from polyhedral theory.
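The mechanism behind such depth-vs-size gaps can be seen in a small numerical sketch. The code below is not the paper's zonotope construction; it is the standard tent-map composition from the depth-separation literature, with illustrative helper names (tent, deep_tent, count_affine_pieces) chosen here for the example. A ReLU net with k hidden layers and O(k) units computes a function with 2^k affine pieces on [0, 1], which is the kind of piece-count growth that shallower nets can only match with many more nodes.

```python
# Hedged illustration (not the paper's construction): composing the
# tent map t(x) = 2*min(x, 1-x), each copy built from two ReLUs,
# gives a depth-k ReLU network whose output has 2^k affine pieces on [0, 1].
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # t(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1], written with two ReLUs
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, k):
    # k-fold composition: a ReLU net with k hidden layers and O(k) units
    for _ in range(k):
        x = tent(x)
    return x

def count_affine_pieces(f, grid):
    # Estimate the number of affine pieces by counting slope changes
    # of f on a fine grid (numerical, for illustration only).
    y = f(grid)
    slopes = np.diff(y) / np.diff(grid)
    breakpoints = np.sum(~np.isclose(np.diff(slopes), 0.0, atol=1e-6))
    return int(breakpoints) + 1

grid = np.linspace(0.0, 1.0, 200001)
for k in range(1, 7):
    pieces = count_affine_pieces(lambda x: deep_tent(x, k), grid)
    print(f"depth {k}: ~{pieces} affine pieces (expected 2^{k} = {2 ** k})")
```

Running the sketch prints an estimated piece count for depths 1 through 6, doubling with each added layer; the exponential growth in pieces at linear growth in size is the intuition the gap theorems make rigorous.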


