Fixed points of arbitrarily deep 1-dimensional neural networks

03/22/2023
by Andrew Cook, et al.

In this paper, we introduce a new class of functions on ℝ that is closed under composition and contains the logistic sigmoid function. We use this class to show that any 1-dimensional neural network of arbitrary depth with logistic sigmoid activation functions has at most three fixed points. While such neural networks are far from real-world applications, we are able to characterize their fixed points completely, providing a foundation for the much-needed connection between the theory and application of deep neural networks.
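The headline claim, that composing 1-dimensional sigmoid layers yields a map with at most three fixed points, is easy to probe numerically. The sketch below is not from the paper: the helper names (deep_1d_net, bisect) and the weights and biases in layers are arbitrary illustrative choices. It composes three sigmoid layers into f and brackets the solutions of f(x) = x by scanning for sign changes of f(x) − x.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deep_1d_net(x, layers):
    """A 1-dimensional network: each layer is a scalar weight w and bias b
    followed by a logistic sigmoid, composed in order."""
    for w, b in layers:
        x = sigmoid(w * x + b)
    return x

# Arbitrary illustrative weights and biases (hypothetical, not from the paper).
layers = [(9.0, -4.5), (7.0, -3.6), (8.0, -4.1)]

# Every layer maps into (0, 1), so any fixed point f(x) = x lies in (0, 1).
# Scan for sign changes of g(x) = f(x) - x on a fine grid ...
xs = np.linspace(0.0, 1.0, 100_001)
g = deep_1d_net(xs, layers) - xs
brackets = np.where(g[:-1] * g[1:] < 0)[0]

# ... then refine each bracketed root by bisection.
def bisect(f, lo, hi, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sign(f(mid)) == np.sign(f(lo)) else (lo, mid)
    return 0.5 * (lo + hi)

roots = [bisect(lambda x: deep_1d_net(x, layers) - x, xs[i], xs[i + 1])
         for i in brackets]
print(f"{len(roots)} fixed point(s):", roots)  # at most three, per the paper
```

For these particular weights f is increasing and S-shaped, and its graph crosses the line y = x three times; the paper's result is that three is the maximum at any depth.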


Related research

12/09/2019
Efficient approximation of high-dimensional functions with deep neural networks
In this paper, we develop an approximation theory for deep neural networ...

11/20/2018
Fenchel Lifted Networks: A Lagrange Relaxation of Neural Network Training
Despite the recent successes of deep neural networks, the corresponding ...

04/08/2018
Comparison of non-linear activation functions for deep neural networks on MNIST classification task
Activation functions play a key role in neural networks so it becomes fu...

06/22/2020
Bidirectional Self-Normalizing Neural Networks
The problem of exploding and vanishing gradients has been a long-standin...

06/30/2021
Fixed points of monotonic and (weakly) scalable neural networks
We derive conditions for the existence of fixed points of neural network...

02/17/2020
Investigating the Compositional Structure Of Deep Neural Networks
The current understanding of deep neural networks can only partially exp...

09/30/2018
Deep, Skinny Neural Networks are not Universal Approximators
In order to choose a neural network architecture that will be effective ...
