Towards Lower Bounds on the Depth of ReLU Neural Networks

05/31/2021
by Christoph Hertrich, et al.

We contribute to a better understanding of the class of functions that is representable by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems, which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases when more layers are added (with no restriction on size). This question has algorithmic and statistical consequences, because answering it pins down exactly which functions neural hypothesis classes can express. However, to the best of our knowledge, it has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes.


Related research

02/24/2023: Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
We prove that the set of functions representable by ReLU neural networks...

06/07/2020: Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
We prove sharp dimension-free representation results for neural networks...

11/04/2016: Understanding Deep Neural Networks with Rectified Linear Units
In this paper we investigate the family of functions representable by de...

11/15/2021: Neural networks with linear threshold activations: structure and algorithms
In this article we present new results on neural networks with linear th...

11/08/2017: Lower bounds over Boolean inputs for deep neural networks with ReLU gates
Motivated by the resurgence of neural networks in being able to solve co...

07/28/2023: Weighted variation spaces and approximation by shallow ReLU networks
We investigate the approximation of functions f on a bounded domain Ω⊂ℝ^...

12/02/2022: On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds
We study the expressibility and learnability of convex optimization solu...
