A Framework for the construction of upper bounds on the number of affine linear regions of ReLU feed-forward neural networks

06/05/2018
by Peter Hinz, et al.

In this work we present a new framework for deriving upper bounds on the number of affine linear regions of feed-forward neural networks with ReLU activation functions. All existing bounds of this kind arise as special cases of the framework, though in a different representation in terms of matrices. This representation provides new insight and allows a more detailed analysis of the corresponding bounds. In particular, we provide a Jordan-like decomposition of the involved matrices and present new, tighter results in an asymptotic setting. Moreover, even stronger new bounds may be obtained from the framework.
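The abstract summarizes the framework without stating any bound explicitly. As a point of reference, the sketch below computes one classical bound of the kind such frameworks recover as a special case: the layer-wise product of binomial sums, N <= prod_{l=1..L} sum_{j=0..d_l} C(n_l, j) with d_l = min{n_0, ..., n_{l-1}} (cf. Serra et al., 2018). The per-layer factor is Zaslavsky's count of the regions that n_l hyperplanes can create in dimension d_l. The function names and the example architecture are illustrative, not taken from the paper.

```python
from math import comb

def regions_one_layer(n: int, d: int) -> int:
    """Zaslavsky's bound: n hyperplanes in R^d create at most
    sum_{j=0}^{d} C(n, j) regions."""
    return sum(comb(n, j) for j in range(d + 1))

def regions_upper_bound(input_dim: int, widths: list[int]) -> int:
    """Classical layer-wise product bound on the number of affine
    linear regions of a ReLU network (illustrative; not the paper's
    refined matrix-based bound). Each layer l with n_l units
    contributes a factor sum_{j=0}^{d_l} C(n_l, j), where d_l is the
    smallest width seen so far, including the input dimension."""
    bound, d = 1, input_dim
    for n in widths:
        bound *= regions_one_layer(n, d)
        d = min(d, n)  # effective dimension cannot grow through a layer
    return bound

# Example: a ReLU network R^2 -> 3 -> 3 -> output
print(regions_upper_bound(2, [3, 3]))  # 7 * 7 = 49
```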

