Fixed Points of Cone Mapping with the Application to Neural Networks

07/20/2022
by Grzegorz Gabor, et al.

We derive conditions for the existence of fixed points of cone mappings without assuming scalability of the underlying functions. In the literature on fixed points of interference mappings, monotonicity and scalability are typically assumed together. In applications, such mappings are approximated by non-negative neural networks. It turns out, however, that training non-negative networks requires imposing an artificial constraint on the model's weights. Moreover, even for data that are non-negative, a mapping with non-negative outputs need not have only non-negative weights. We therefore study the existence of fixed points for general neural networks, assuming tangency conditions with respect to specific cones. This does not relax the physical assumptions: even when the input and output are required to be non-negative, the weights may take (small) negative values. Such properties, often observed in work on the interpretability of neural network weights, allow the usual assumptions of monotonicity or scalability of the mapping associated with the network to be weakened. To the best of our knowledge, this paper is the first to study this phenomenon.
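To make the classical setting concrete, the following is a minimal sketch (not the paper's construction) of fixed-point iteration for a monotone, non-negative network mapping T(x) = relu(Wx + b). The specific matrix W, bias b, and tolerances are hypothetical choices for illustration; with W, b non-negative and W a contraction, the standard iteration for interference mappings converges to a fixed point in the non-negative cone.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical non-negative weights; row sums < 1, so the affine
# map x -> W @ x + b is a contraction in the max-norm.
W = np.array([[0.20, 0.10],
              [0.05, 0.30]])
b = np.array([1.0, 0.5])  # non-negative bias

def T(x):
    # Non-negative network layer: monotone on the non-negative cone.
    return relu(W @ x + b)

# Standard fixed-point iteration x_{k+1} = T(x_k), started in the cone.
x = np.zeros(2)
for _ in range(200):
    x_next = T(x)
    if np.linalg.norm(x_next - x) < 1e-12:
        x = x_next
        break
    x = x_next

residual = np.linalg.norm(T(x) - x)
```

For non-negative x the pre-activation W @ x + b stays non-negative, so the ReLU is inactive at the limit and the fixed point solves the linear equation x = W @ x + b; the paper's point is precisely that once weights may be slightly negative, this monotonicity/scalability structure is lost and different (tangency-based) conditions are needed.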

