When Can Neural Networks Learn Connected Decision Regions?

01/25/2019
by   Trung Le, et al.

Previous work has examined the conditions under which the decision regions of a neural network are connected, and has shown the implications of the corresponding theory for the problem of adversarial manipulation of classifiers. It has been proven that, for a class of activation functions including leaky ReLU, neural networks with a pyramidal structure (that is, no layer has more hidden units than the input dimension) necessarily produce connected decision regions. In this paper, we advance this important result by further developing sufficient and necessary conditions under which the decision regions of a neural network are connected. We then apply our framework to overcome the limits of existing work and study the capacity of neural networks to learn connected regions for a much wider class of activation functions, including those most widely used: ReLU, sigmoid, tanh, softplus, and the exponential linear function.
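The pyramidal-width condition can be illustrated numerically. The sketch below (an assumption for illustration, not the paper's construction: hand-picked identity weights on a two-hidden-unit leaky-ReLU network over R^2, so no layer exceeds the input dimension) evaluates the network's class decision on a grid and flood-fills each class region to count its connected components.

```python
import numpy as np
from collections import deque

def leaky_relu(z, alpha=0.1):
    return np.where(z > 0, z, alpha * z)

# Hypothetical hand-picked weights: a pyramidal network on R^2
# (hidden width 2 == input dimension, satisfying the theorem's condition).
W1, b1 = np.eye(2), np.zeros(2)
W2, b2 = np.eye(2), np.zeros(2)

def predict(x):
    h = leaky_relu(x @ W1.T + b1)
    return np.argmax(h @ W2.T + b2, axis=-1)

# Evaluate the decision function on a grid over [-3, 3]^2.
n = 101
xs = np.linspace(-3, 3, n)
X, Y = np.meshgrid(xs, xs)
labels = predict(np.stack([X.ravel(), Y.ravel()], axis=1)).reshape(n, n)

def num_components(mask):
    """Count 4-connected components of a boolean grid mask via BFS flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                count += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                                and mask[na, nb] and not seen[na, nb]):
                            seen[na, nb] = True
                            q.append((na, nb))
    return count

components = {int(c): num_components(labels == c) for c in np.unique(labels)}
print(components)  # each decision region forms a single connected component
```

For this network the decision regions are the two half-planes on either side of the diagonal, so the grid count confirms one component per class; the theorem guarantees the same connectedness for any leaky-ReLU network whose layers stay at or below the input width.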


Related research:
- 02/28/2018 — Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions: In the recent literature the important role of depth in deep learning ha...
- 07/03/2018 — On decision regions of narrow deep neural networks: We show that for neural network functions that have width less or equal ...
- 05/04/2022 — Most Activation Functions Can Win the Lottery Without Excessive Depth: The strong lottery ticket hypothesis has highlighted the potential for t...
- 08/20/2020 — On transversality of bent hyperplane arrangements and the topological expressiveness of ReLU neural networks: Let F:R^n -> R be a feedforward ReLU neural network. It is well-known th...
- 12/18/2019 — Adversarial VC-dimension and Sample Complexity of Neural Networks: Adversarial attacks during the testing phase of neural networks pose a c...
- 10/02/2018 — GINN: Geometric Illustration of Neural Networks: This informal technical report details the geometric illustration of dec...
- 05/17/2021 — How to Explain Neural Networks: A perspective of data space division: Interpretability of intelligent algorithms represented by deep learning ...
