NN2Poly: A polynomial representation for deep feed-forward artificial neural networks

12/21/2021
by Pablo Morala, et al.

Interpretability of neural networks and their underlying theoretical behaviour remain an open field of study, even after the great success of their practical applications, particularly with the emergence of deep learning. In this work, NN2Poly is proposed: a theoretical approach for obtaining polynomials that provide an alternative representation of an already trained deep neural network. This extends the previous idea proposed in arXiv:2102.03865, which was limited to single-hidden-layer neural networks, to arbitrarily deep feed-forward neural networks in both regression and classification tasks. The method applies a Taylor expansion to the activation function at each layer and then uses several combinatorial properties to identify the coefficients of the desired polynomials. The main computational limitations of implementing this theoretical method are discussed, and an example is presented of the constraints on the neural network weights that are necessary for NN2Poly to work. Finally, simulations are presented in which NN2Poly obtains a representation of the given neural network with low error between the predictions of the original network and those of the resulting polynomial.
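To make the construction in the abstract concrete, here is a minimal sketch of the single-hidden-layer case (the setting of the earlier work in arXiv:2102.03865): the activation is replaced by its Taylor polynomial, and the powers of each affine pre-activation are expanded into monomials of the inputs. The tanh activation, the degree-3 truncation, and the function name poly_from_single_layer are assumptions for illustration; this is not the NN2Poly implementation, which relies on combinatorial identities rather than the brute-force expansion shown here.

```python
import numpy as np
from itertools import product

# Taylor coefficients of tanh around 0, truncated at degree 3:
# tanh(z) ~= z - z^3 / 3
TANH_TAYLOR = {1: 1.0, 3: -1.0 / 3.0}

def poly_from_single_layer(W, b, v, c, taylor=TANH_TAYLOR):
    """Sketch: represent f(x) = c + sum_j v_j * tanh(b_j + W[j] . x)
    as a polynomial in the inputs. Returns a dict mapping exponent
    tuples (one exponent per input variable) to coefficients."""
    n_hidden, n_in = W.shape
    coeffs = {(0,) * n_in: float(c)}             # constant term
    for j in range(n_hidden):
        affine = np.concatenate(([b[j]], W[j]))  # weights of (1, x_1, ..., x_n)
        for p, a_p in taylor.items():
            # Expand (b_j + sum_i W[j, i] x_i)^p by enumerating every
            # length-p choice of factors (feasible for small p and n_in).
            for combo in product(range(n_in + 1), repeat=p):
                coef = v[j] * a_p * np.prod(affine[list(combo)])
                expo = tuple(sum(1 for k in combo if k == i + 1)
                             for i in range(n_in))
                coeffs[expo] = coeffs.get(expo, 0.0) + coef
    return coeffs

# Compare the polynomial with the network on a small-weight example,
# where the Taylor truncation is accurate.
rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(4, 2))
b, v, c = 0.1 * rng.normal(size=4), rng.normal(size=4), 0.5
poly = poly_from_single_layer(W, b, v, c)
x = np.array([0.2, -0.1])
poly_val = sum(cf * np.prod(x ** np.array(e)) for e, cf in poly.items())
nn_val = c + v @ np.tanh(b + W @ x)
print(poly_val, nn_val)  # close while pre-activations stay near 0
```

The small weights in this example reflect the kind of constraint the abstract mentions: the Taylor expansion is local, so pre-activations must stay near the expansion point for the polynomial to track the network. For deeper networks, the paper applies the same substitution layer by layer, with combinatorial properties identifying the coefficients without the exponential enumeration used above.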


Related research:

07/24/2020 · Linear discriminant initialization for feed-forward neural networks
Informed by the basic geometry underlying feed forward neural networks, ...

02/07/2021 · Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression
Even when neural networks are widely used in a large number of applicati...

05/29/2019 · On the Expressive Power of Deep Polynomial Neural Networks
We study deep neural networks with polynomial activations, particularly ...

07/30/2020 · Random Vector Functional Link Networks for Function Approximation on Manifolds
The learning speed of feed-forward neural networks is notoriously slow a...

06/14/2016 · Neural Networks and Continuous Time
The fields of neural computation and artificial neural networks have dev...

03/16/2018 · Deep Component Analysis via Alternating Direction Neural Networks
Despite a lack of theoretical understanding, deep neural networks have a...

03/21/2022 · Origami in N dimensions: How feed-forward networks manufacture linear separability
Neural networks can implement arbitrary functions. But, mechanistically,...
