Approximation capabilities of neural networks on unbounded domains
We prove universal approximation theorems for neural networks in L^p(R × [0, 1]^n), under the conditions that p ∈ [2, ∞) and that the activation function belongs to a class that includes, among others, monotone sigmoids, ReLU, ELU, softplus and leaky ReLU. Our results partially generalize classical universal approximation theorems on [0, 1]^n.
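As a rough numerical illustration of the kind of statement involved (not the paper's construction), the sketch below fits a small one-hidden-layer ReLU network, via random features and least squares, to the L^2(R) function exp(-x^2) on a wide window of the real line; all sizes, seeds, and the target function are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over a wide window of R; the target exp(-x^2) lies in L^2(R).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
target = np.exp(-x**2)

# One hidden layer of 50 random ReLU features: max(0, w*x + b).
# (Widths/offsets are arbitrary illustrative distributions.)
W = rng.normal(size=50)
B = rng.uniform(-10.0, 10.0, size=50)
H = np.maximum(0.0, np.outer(x, W) + B)

# Fit the outer weights by least squares.
c, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ c

# Discrete estimate of the L^2 error on the window.
err = np.sqrt(np.sum((approx - target) ** 2) * dx)
print(f"L^2 error on [-10, 10]: {err:.4f}")
```

Note that a finite ReLU network is piecewise linear and hence unbounded on R, so approximation on unbounded domains is measured in the L^p norm rather than uniformly; the windowed fit above only gauges the error where the target has most of its mass.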