Fixed points of arbitrarily deep 1-dimensional neural networks

03/22/2023
by Andrew Cook, et al.

In this paper, we introduce a new class of functions on ℝ that is closed under composition and contains the logistic sigmoid function. We use this class to show that any 1-dimensional neural network of arbitrary depth with logistic sigmoid activation functions has at most three fixed points. While such neural networks are far from real-world applications, we can completely understand their fixed points, providing a foundation for the much-needed connection between the theory and application of deep neural networks.
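
As a rough illustration of the setting (this is not code from the paper), the sketch below builds a 1-dimensional network as a composition of layers x ↦ σ(wx + b) with the logistic sigmoid σ, and numerically counts solutions of f(x) = x by bracketing sign changes of f(x) − x on a grid. The weight scale, biases, grid bounds, and function names are arbitrary choices for illustration.

```python
# Minimal sketch (assumptions, not the paper's method): compose 1-D layers
# x -> sigmoid(w*x + b) and numerically count fixed points f(x) = x.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_network(weights, biases):
    """Return f(x) = sigma(w_L * ... sigma(w_1 * x + b_1) ... + b_L)."""
    def f(x):
        for w, b in zip(weights, biases):
            x = sigmoid(w * x + b)
        return x
    return f

def count_fixed_points(f, lo=-1.0, hi=2.0, n=200_000):
    """Count sign changes of g(x) = f(x) - x on a fine grid (simple bracketing).
    Since the last layer is a sigmoid, any fixed point lies in (0, 1)."""
    xs = np.linspace(lo, hi, n)
    g = f(xs) - xs
    return int(np.count_nonzero(np.sign(g[:-1]) != np.sign(g[1:])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for depth in (1, 3, 10):
        # Large weights make multiple intersections with the identity more likely.
        w = rng.normal(scale=8.0, size=depth)
        b = rng.normal(size=depth)
        f = make_network(w, b)
        print(f"depth {depth}: {count_fixed_points(f)} fixed point(s)")
```

Under the paper's result, such a numerical count should never exceed three, regardless of the depth or the particular weights chosen.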
