On decision regions of narrow deep neural networks

07/03/2018
by Hans-Peter Beise et al.

We show that, for neural network functions whose width is less than or equal to the input dimension, all connected components of the decision regions are unbounded. The result holds for continuous and strictly monotonic activation functions as well as for the ReLU activation. This complements recent results on the approximation capabilities [Hanin 2017 Approximating] and on the connectivity of decision regions [Nguyen 2018 Neural] of such narrow neural networks. Further, we give an example that answers in the negative the question posed in [Nguyen 2018 Neural] as to whether one of their main results still holds for the ReLU activation. Our results are illustrated by numerical experiments.
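As a rough illustration of the main statement (a minimal sketch with random weights, not the paper's actual experiments): the Python snippet below builds a hypothetical ReLU network of width 2 on 2-dimensional inputs, so width equals the input dimension, labels a large grid by the sign of the output, and checks whether any connected component of either decision region lies strictly inside the sampled window. A strictly interior component would be bounded, which the theorem rules out; the depth, weights, and window size here are illustrative assumptions, and a finite window can only suggest, not prove, unboundedness.

    # Illustrative sketch only (random weights, not the paper's experiments):
    # a ReLU network whose width equals the input dimension (2) is evaluated
    # on a grid, and we check that no connected component of either decision
    # region lies strictly inside the sampled window.
    import numpy as np
    from scipy.ndimage import label

    rng = np.random.default_rng(0)

    # Hypothetical narrow network: three hidden ReLU layers of width 2.
    Ws = [rng.standard_normal((2, 2)) for _ in range(3)]
    bs = [rng.standard_normal(2) for _ in range(3)]
    w_out = rng.standard_normal(2)
    b_out = rng.standard_normal()

    def f(x):
        # x has shape (n, 2); returns the scalar network output per point.
        h = x
        for W, b in zip(Ws, bs):
            h = np.maximum(h @ W.T + b, 0.0)  # ReLU layer of width 2
        return h @ w_out + b_out

    # Two decision regions, {f > 0} and {f <= 0}, sampled on an 801x801 grid.
    g = np.linspace(-20.0, 20.0, 801)
    X, Y = np.meshgrid(g, g)
    positive = (f(np.column_stack([X.ravel(), Y.ravel()])) > 0).reshape(X.shape)

    for name, region in (("f > 0", positive), ("f <= 0", ~positive)):
        comps, n = label(region)  # connected components of the region
        border = np.concatenate([comps[0], comps[-1], comps[:, 0], comps[:, -1]])
        interior = set(range(1, n + 1)) - set(border[border > 0].tolist())
        print(f"{name}: {n} component(s), {len(interior)} strictly interior")

Under the theorem's hypothesis, the strictly interior count should come out as 0 for each region: every component found on the grid reaches the window boundary, consistent with all components being unbounded (up to grid resolution and window-size caveats).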

Related research

01/25/2019
When Can Neural Networks Learn Connected Decision Regions?
Previous work has questioned the conditions under which the decision reg...

02/28/2018
Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
In the recent literature the important role of depth in deep learning ha...

05/14/2015
Neural Network with Unbounded Activation Functions is Universal Approximator
This paper presents an investigation of the approximation property of ne...

08/15/2018
Collapse of Deep and Narrow Neural Nets
Recent theoretical work has demonstrated that deep neural networks have ...

09/04/2017
Optimal deep neural networks for sparse recovery via Laplace techniques
This paper introduces Laplace techniques for designing a neural network,...

04/05/2021
Deep neural network approximation of analytic functions
We provide an entropy bound for the spaces of neural networks with piece...

07/01/2021
On the Expected Complexity of Maxout Networks
Learning with neural networks relies on the complexity of the representa...
