Approximation capabilities of neural networks on unbounded domains

10/21/2019
by   Yang Qu, et al.

We prove universal approximation theorems for neural networks in L^p(ℝ × [0, 1]^n), under the conditions that p ∈ [2, ∞) and that the activation function is, among others, a monotone sigmoid, ReLU, ELU, softplus, or leaky ReLU. Our results partially generalize the classical universal approximation theorems on [0, 1]^n.
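
The statement above is an existence result; a minimal numerical sketch can make its content concrete. The Python snippet below is not from the paper: the Gaussian target, the knot count, and the truncation window are all illustrative assumptions. It treats the simplest unbounded case, p = 2 and n = 0, i.e. L^2(ℝ), using the classical construction behind such theorems: a compactly supported piecewise-linear interpolant is exactly a one-hidden-layer ReLU network, and the target's small tails keep the L^2 error off the truncation window negligible.

```python
import numpy as np

# Hedged demo (not the paper's proof): approximate a target in L^2(R) with a
# one-hidden-layer ReLU network. Target, knots, and window are assumptions.
f = lambda x: np.exp(-x**2)      # Gaussian bump, an element of L^2(R)

def relu(z):
    return np.maximum(z, 0.0)

a, b, K = -5.0, 5.0, 200         # truncation window [a, b] and knot count
t = np.linspace(a, b, K + 1)
h = t[1] - t[0]

def net(x):
    # Piecewise-linear interpolant of f on [a, b], zero outside, written as a
    # sum of ReLU "hat" units centered at the interior knots:
    #   hat_k(x) = (relu(x - t[k-1]) - 2*relu(x - t[k]) + relu(x - t[k+1])) / h
    y = np.zeros_like(x, dtype=float)
    for k in range(1, K):
        y += f(t[k]) * (relu(x - t[k-1]) - 2.0*relu(x - t[k]) + relu(x - t[k+1])) / h
    return y

# Numerical L^2(R) error; the integrand decays like exp(-2x^2), so the
# window [-10, 10] captures essentially all of it.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
err = np.sqrt(np.sum((f(x) - net(x)) ** 2) * dx)
print(f"L^2 error with {K - 1} hat units ({3 * (K - 1)} ReLU terms): {err:.2e}")
```

Shrinking h and widening [a, b] drives the error to zero, which is the qualitative content of density in L^2(ℝ); the substance of the paper is presumably in making this work uniformly for general targets on the unbounded factor, where the restriction p ∈ [2, ∞) comes into play.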

Related research

03/01/2021
Computation complexity of deep ReLU neural networks in high-dimensional approximation
The purpose of the present paper is to study the computation complexity ...

07/29/2023
Discrete neural nets and polymorphic learning
Theorems from universal algebra such as that of Murskiĭ from the 1970s h...

07/16/2022
Approximation Capabilities of Neural Networks using Morphological Perceptrons and Generalizations
Standard artificial neural networks (ANNs) use sum-product or multiply-a...

02/04/2021
Universal Approximation Theorems of Fully Connected Binarized Neural Networks
Neural networks (NNs) are known for their high predictive accuracy in co...

03/20/2023
Quantile and moment neural networks for learning functionals of distributions
We study new neural networks to approximate functions of distributions i...

10/02/2019
The option pricing model based on time values: an application of the universal approximation theory on unbounded domains
Hutchinson, Lo and Poggio raised the question of whether learning networks can...
