On the space of coefficients of a Feed Forward Neural Network

09/07/2021
by Dinesh Valluri, et al.

We define and establish conditions for 'equivalent neural networks': neural networks with different weights, biases, and threshold functions that nonetheless compute the same associated function. We prove that, given a neural network 𝒩 with piecewise-linear activation, the space of coefficients describing all networks equivalent to 𝒩 is a semialgebraic set. The result is obtained by studying different representations of a given piecewise-linear function via the Tarski-Seidenberg theorem.
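The abstract does not give an example, but one standard source of such equivalences in ReLU networks is positive rescaling of a hidden unit: since relu(c·z) = c·relu(z) for any c > 0, multiplying a unit's incoming weights and bias by c and dividing its outgoing weights by c changes the coefficient vector without changing the computed function. The sketch below (not from the paper; all names are illustrative) checks this numerically for a small two-layer network.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    # Two-layer network: f(x) = W2 @ relu(W1 @ x + b1) + b2
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4))
b2 = rng.normal(size=2)

# Rescale hidden unit 0: relu(c*z) = c*relu(z) for c > 0, so scaling
# row 0 of (W1, b1) by c and column 0 of W2 by 1/c gives different
# coefficients but the same associated function.
c = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c
b1s[0] *= c
W2s[:, 0] /= c

x = rng.normal(size=3)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1s, b1s, W2s, b2))
print("Different coefficients, identical function value.")
```

Permuting hidden units is another such symmetry; the paper's semialgebraic description covers all coefficient vectors yielding the same function, not only those reachable by these elementary moves.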


Related research

- Linear discriminant initialization for feed-forward neural networks (07/24/2020)
- Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks (05/03/2017)
- Representation Theorem for Matrix Product States (03/15/2021)
- Knots in random neural networks (11/27/2018)
- Approximate Probabilistic Neural Networks with Gated Threshold Logic (08/02/2018)
- Affine Symmetries and Neural Network Identifiability (06/21/2020)
- Deterministic equivalent of the Conjugate Kernel matrix associated to Artificial Neural Networks (06/09/2023)
