Activation Functions Not To Active: A Plausible Theory on Interpreting Neural Networks

05/01/2023
by John Chiang, et al.

Researchers commonly believe that neural networks model a high-dimensional space, yet cannot give a clear definition of this space. What is this space? What is its dimension? Does it have finitely many dimensions? In this paper, we develop a plausible theory of interpreting neural networks in terms of the role of activation functions, and we define the high-dimensional (more precisely, infinite-dimensional) space that neural networks, including deep-learning networks, can create. We show that the activation function acts as a magnifying function that maps a low-dimensional linear space into an infinite-dimensional space, one that can distinctly identify the polynomial approximation of any multivariate continuous function of the features of the given dataset. Given a dataset in which each example has d features f_1, f_2, ⋯, f_d, we believe that neural networks model a special space with infinitely many dimensions, each of which is a monomial f_1^{i_1} f_2^{i_2} ⋯ f_d^{i_d} for some non-negative integers i_1, i_2, ⋯, i_d ∈ ℤ_0^+ = {0, 1, 2, 3, …}. We term this infinite-dimensional space a Super Space (SS) and regard each such dimension as the minimum unit of information. Every neuron node that has passed through an activation layer is a Super Plane (SP), which is in fact a polynomial of infinite degree. The Super Space acts like a coordinate system in which every multivariate function can be represented by a Super Plane. We also show that training a neural network can, at the least, be reduced to solving a system of nonlinear equations.
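To illustrate the "magnifying" claim concretely, here is a minimal sketch (not code from the paper) that Taylor-expands an activation function applied to a single neuron's pre-activation. The weight symbols w1 and w2 and the choice of tanh are hypothetical, used only for demonstration; the point is that one linear form w1*f1 + w2*f2 expands into many monomials f1^{i_1} f2^{i_2} of the kind described in the abstract.

```python
import sympy as sp

# Two input features and two hypothetical weights of a single neuron.
f1, f2, w1, w2, x = sp.symbols('f1 f2 w1 w2 x')

# Pre-activation: a point in the low-dimensional linear space.
z = w1 * f1 + w2 * f2

# Taylor series of the activation around 0, truncated at degree 5.
taylor = sp.series(sp.tanh(x), x, 0, 6).removeO()

# Substitute the linear form and expand into monomials of f1, f2.
poly = sp.expand(taylor.subs(x, z))

# Each term is a coefficient (in w1, w2) times a monomial f1**i1 * f2**i2:
# the activation has "magnified" one linear dimension into many monomial
# dimensions -- infinitely many if the series is not truncated.
for exponents, coeff in sp.Poly(poly, f1, f2).terms():
    print(exponents, sp.factor(coeff))
```

Because tanh is odd, this truncation yields only monomials of total degree 1, 3, and 5; a non-odd activation such as the sigmoid populates every degree, and the untruncated series assigns a coefficient to infinitely many monomial dimensions, matching the Super Space picture above.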


Related research

12/09/2019 - Efficient approximation of high-dimensional functions with deep neural networks
In this paper, we develop an approximation theory for deep neural networ...

07/21/2020 - Activation function dependence of the storage capacity of treelike neural networks
The expressive power of artificial neural networks crucially depends on ...

05/15/2020 - A New Activation Function for Training Deep Neural Networks to Avoid Local Minimum
Activation functions have a major role to play and hence are very import...

04/02/2019 - On Geometric Structure of Activation Spaces in Neural Networks
In this paper, we investigate the geometric structure of activation spac...

11/05/2020 - Identifying and interpreting tuning dimensions in deep networks
In neuroscience, a tuning dimension is a stimulus attribute that account...

12/26/2022 - On the Level Sets and Invariance of Neural Tuning Landscapes
Visual representations can be defined as the activations of neuronal pop...

08/02/2022 - Lossy compression of multidimensional medical images using sinusoidal activation networks: an evaluation study
In this work, we evaluate how neural networks with periodic activation f...
