No one-hidden-layer neural network can represent multivariable functions

06/19/2020
by Masayo Inoue, et al.

In a function approximation with a neural network, an input dataset is mapped to an output index by optimizing the parameters of each hidden-layer unit. For a unary function, we derive constraints relating the parameters to the function's second derivative by constructing a continuum version of a one-hidden-layer neural network with the rectified linear unit (ReLU) activation function. The network can be implemented accurately because these constraints reduce the degrees of freedom of the parameters. We also show that there exists a smooth binary function that cannot be precisely represented by any such neural network.
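As an illustration of the setting only (a sketch, not the continuum construction analyzed in the paper), the following Python/NumPy code fits an assumed unary target, sin(x), with a one-hidden-layer ReLU network trained by plain gradient descent. The hidden biases set the positions of the ReLU kinks, so the network output is piecewise linear in the input, and how well it tracks the target's curvature (second derivative) depends on how those hidden-unit parameters are arranged.

# Minimal sketch (illustrative only): one-hidden-layer ReLU network
# fitting a unary function f(x) = sin(x) by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # unary inputs
y = np.sin(x)                                        # target values

n_hidden = 50
W1 = rng.normal(size=(1, n_hidden))      # input-to-hidden weights
b1 = rng.normal(size=n_hidden)           # hidden biases (ReLU kink positions)
W2 = rng.normal(size=(n_hidden, 1)) * 0.1
b2 = np.zeros(1)

lr = 1e-2
for step in range(5000):
    h = np.maximum(x @ W1 + b1, 0.0)     # ReLU layer: piecewise linear in x
    pred = h @ W2 + b2
    err = pred - y                       # gradient of 0.5 * squared error
    # Backpropagation through the two layers
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

final_pred = np.maximum(x @ W1 + b1, 0.0) @ W2 + b2
print("final MSE:", float(np.mean((final_pred - y) ** 2)))

With enough hidden units, such a piecewise-linear fit can be made arbitrarily close for a unary target; the paper's negative result concerns binary (two-variable) functions, which this sketch does not cover.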

research  04/19/2023
Points of non-linearity of functions generated by random neural networks
We consider functions from the real numbers to the real numbers, output ...

research  07/31/2020
The Kolmogorov-Arnold representation theorem revisited
There is a longstanding debate whether the Kolmogorov-Arnold representat...

research  09/12/2013
Modeling Based on Elman Wavelet Neural Network for Class-D Power Amplifiers
In Class-D Power Amplifiers (CDPAs), the power supply noise can intermod...

research  07/19/2023
A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints
A new computationally simple method of imposing hard convex constraints ...

research  03/22/2018
Learning through deterministic assignment of hidden parameters
Supervised learning frequently boils down to determining hidden and brig...

research  12/05/2020
A three layer neural network can represent any discontinuous multivariate function
In 1987, Hecht-Nielsen showed that any continuous multivariate function ...

research  12/28/2021
Reduced Softmax Unit for Deep Neural Network Accelerators
The Softmax activation layer is a very popular Deep Neural Network (DNN)...
