Using activation histograms to bound the number of affine regions in ReLU feed-forward neural networks
Several current bounds on the maximal number of affine regions of a ReLU feed-forward neural network are special cases of the framework of [1], which relies on layer-wise activation histogram bounds. We analyze and partially solve a problem in algebraic topology whose full solution would allow this framework to be exploited to its limit. Our partial solution already yields slightly tighter bounds and suggests insight into how parameter initialization methods can affect the number of regions. Furthermore, we extend the framework to allow subnetwork-wise instead of layer-wise activation histogram bounds, reducing the number of required compositions, each of which degrades the tightness of the resulting bound.
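To make the underlying objects concrete, the following is a minimal numerical sketch, not the construction of [1]: for a small random ReLU network it samples the input space, records each layer's binary activation pattern, and counts distinct joint patterns as a crude empirical lower bound on the number of affine regions. The per-layer "activation histogram" is read here simply as the distribution of the number of active units, which is an illustrative assumption about the notion used in the framework; all network sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ReLU network: input dim 2, two hidden layers of width 8 (hypothetical sizes).
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

# Dense sample of the input square [-3, 3]^2.
X = rng.uniform(-3.0, 3.0, size=(200_000, 2))

Z1 = X @ W1.T + b1        # pre-activations, layer 1
A1 = Z1 > 0               # binary activation pattern, layer 1
H1 = np.maximum(Z1, 0.0)  # ReLU outputs

Z2 = H1 @ W2.T + b2       # pre-activations, layer 2
A2 = Z2 > 0               # binary activation pattern, layer 2

# Each distinct joint activation pattern corresponds to (at least) one affine
# region, so the number of distinct patterns observed in the sample is an
# empirical lower bound on the region count.
patterns = {(a1.tobytes(), a2.tobytes()) for a1, a2 in zip(A1, A2)}
print("distinct patterns observed (lower bound on #regions):", len(patterns))

# Per-layer histogram over the number of active units (0..8).
print("layer-1 activation histogram:", np.bincount(A1.sum(axis=1), minlength=9))
print("layer-2 activation histogram:", np.bincount(A2.sum(axis=1), minlength=9))
```

Sampling can only ever witness a subset of the regions; the framework in the paper goes in the opposite direction, composing per-layer (or, with the extension above, per-subnetwork) histogram bounds into a provable upper bound.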