Activation Functions Not To Active: A Plausible Theory on Interpreting Neural Networks

05/01/2023
by John Chiang, et al.

Researchers commonly believe that neural networks model a high-dimensional space but cannot give a clear definition of this space. What is this space? What is its dimension? And does it have finitely many dimensions? In this paper, we develop a plausible theory on interpreting neural networks in terms of the role of activation functions, and we define the high-dimensional (more precisely, infinite-dimensional) space that neural networks, including deep-learning networks, could create. We show that the activation function acts as a magnifying function that maps the low-dimensional linear space into an infinite-dimensional space, which can distinctly identify the polynomial approximation of any multivariate continuous function whose variables are the features of the given dataset. Given a dataset in which each example has d features f_1, f_2, ⋯, f_d, we believe that neural networks model a special space with infinitely many dimensions, each of which is a monomial f_1^i_1 f_2^i_2 ⋯ f_d^i_d for some non-negative integers i_1, i_2, ⋯, i_d ∈ ℤ_0^+ = {0, 1, 2, 3, …}. We term such an infinite-dimensional space a Super Space (SS), and we regard each such dimension as a minimum information unit. Every neuron node that has passed through an activation layer in a neural network is a Super Plane (SP), which is in fact a polynomial of infinite degree. This Super Space acts like a coordinate system, in which every multivariate continuous function can be represented by a Super Plane. We also show that training NNs could at least be reduced to solving a system of nonlinear equations.
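To make the "magnifying" claim concrete, here is a minimal sketch, not taken from the paper, showing how an activation applied to a linear combination of two features expands into a polynomial whose terms are monomials f_1^i_1 f_2^i_2. The choice of exp as the activation, the truncation order N, and the fixed weights w1, w2 are all assumptions made purely for illustration.

```python
# Sketch: expanding an activation of a linear pre-activation into monomials
# of the input features (assumed activation: exp; assumed weights: 1/2, 1/3).
import sympy as sp

f1, f2 = sp.symbols('f1 f2')                    # two input features
w1, w2 = sp.Rational(1, 2), sp.Rational(1, 3)   # arbitrary fixed weights

z = w1 * f1 + w2 * f2                           # low-dimensional linear pre-activation

# Truncate the Taylor series of exp(z) at order N; the full (untruncated)
# series would contribute monomials of every degree, i.e. infinitely many
# of the "dimensions" described above.
N = 4
poly = sp.expand(sum(z**n / sp.factorial(n) for n in range(N + 1)))

# Each term is a rational coefficient times a monomial f1**i1 * f2**i2.
for term in poly.as_ordered_terms():
    coeff, mono = term.as_coeff_Mul()
    print(mono, coeff)
```

Running the sketch lists monomials such as 1, f1, f2, f1*f2, f1**2*f2, ⋯ together with their coefficients; these monomials are the dimensions of the Super Space the abstract describes, and the neuron's output is one Super Plane expressed in that coordinate system.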

