Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes

02/24/2023
by Christian Haase, et al.

We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width. More precisely, we show that ⌈log_2(n)⌉ hidden layers are indeed necessary to compute the maximum of n numbers, matching known upper bounds. Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry. The integrality assumption implies that these Newton polytopes are lattice polytopes. Then, our depth lower bounds follow from a parity argument on the normalized volume of faces of such polytopes.
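To illustrate the known upper bound referenced in the abstract (this is the standard construction, not the paper's contribution, and the function names below are ours for illustration): the maximum of n numbers can be computed with ⌈log_2(n)⌉ rounds of the integer-weight ReLU identity max(a, b) = a + ReLU(b − a), applied along a balanced binary tree. A minimal Python sketch simulating this construction:

```python
import math

def relu(x):
    """Rectified linear unit."""
    return x if x > 0 else 0.0

def relu_max(values):
    """Return (maximum, rounds): the max of the inputs via repeated use of
    max(a, b) = a + relu(b - a), pairing entries as in a balanced binary tree.
    Each round corresponds to one hidden layer with integer (+/-1) weights."""
    vals = [float(v) for v in values]
    rounds = 0
    while len(vals) > 1:
        nxt = []
        for i in range(0, len(vals) - 1, 2):
            a, b = vals[i], vals[i + 1]
            nxt.append(a + relu(b - a))      # equals max(a, b)
        if len(vals) % 2 == 1:
            # A leftover entry is carried to the next round; the identity map
            # is itself ReLU-representable as x = relu(x) - relu(-x).
            nxt.append(vals[-1])
        vals = nxt
        rounds += 1
    return vals[0], rounds

xs = [3, -1, 7, 2, 5]
m, depth = relu_max(xs)
print(m, depth, math.ceil(math.log2(len(xs))))   # 7.0 3 3
```

The paper's result says this depth cannot be improved for integer-weight ReLU networks: no network with fewer than ⌈log_2(n)⌉ hidden layers computes the maximum of n numbers, regardless of width.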
