
Algorithmic Determination of the Combinatorial Structure of the Linear Regions of ReLU Neural Networks

by Marissa Masden et al.
University of Oregon

We algorithmically determine the regions and facets of all dimensions of the canonical polyhedral complex, the universal object into which a ReLU network decomposes its input space. We show that the locations of the vertices of the canonical polyhedral complex, together with their signs with respect to the layer maps, determine the full facet structure across all dimensions. We present an algorithm which computes this full combinatorial structure, making use of our theorems that the dual complex to the canonical polyhedral complex is cubical and possesses a multiplication compatible with its facet structure. The resulting algorithm is numerically stable, runs in time polynomial in the number of intermediate neurons, and obtains accurate information across all dimensions. This permits us to obtain, for example, the true topology of the decision boundaries of networks with low-dimensional inputs. We run experiments on such networks at initialization, finding that width alone does not increase observed topology, but width in the presence of depth does. Source code for our algorithms is accessible online at
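To make the notion of "signs with respect to layer maps" concrete, here is a minimal NumPy sketch (a hypothetical illustration, not the authors' implementation): it records the sign of every neuron's pre-activation as a point is pushed through a small fully connected ReLU network. Two points with the same sign vector lie in the same linear region of the network; the paper's algorithm works with such sign data at the vertices of the canonical polyhedral complex. The architecture and weight values below are invented for the example.

```python
import numpy as np

def sign_vector(x, weights, biases):
    """Return the concatenated signs (+1, 0, -1) of all pre-activations
    of a fully connected ReLU network at the input point x."""
    signs = []
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        z = W @ h + b            # pre-activation of this layer
        signs.append(np.sign(z).astype(int))
        h = np.maximum(z, 0.0)   # ReLU
    return np.concatenate(signs)

# Toy 2-2-1 network with fixed (hypothetical) weights.
weights = [np.array([[1.0, -1.0], [0.5, 1.0]]), np.array([[1.0, 1.0]])]
biases  = [np.array([0.0, -0.5]), np.array([0.25])]

# Nearby points with identical sign vectors share a linear region.
print(sign_vector([1.0, 0.2], weights, biases))  # [1 1 1]
print(sign_vector([1.1, 0.2], weights, biases))  # [1 1 1]
```

A zero entry in the sign vector means the point lies exactly on a neuron's bent hyperplane, i.e. on a lower-dimensional face of the complex rather than in the interior of a region.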

