Approximation with Neural Networks in Variable Lebesgue Spaces

07/08/2020
by   Angela Capel, et al.

This paper concerns the universal approximation property of neural networks in variable Lebesgue spaces. We show that, whenever the exponent function of the space is bounded, every function can be approximated by shallow neural networks to any desired accuracy. It follows that universality of the approximation is determined by the boundedness of the exponent function. Furthermore, when the exponent is unbounded, we obtain characterization results for the subspace of functions that can be approximated.
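For context, the variable Lebesgue space referred to in the abstract is standardly defined via the Luxemburg norm; the following is a sketch of that standard definition (the notation here is conventional, not taken from the paper itself):

```latex
% Variable Lebesgue space L^{p(\cdot)}(\Omega) for a measurable
% exponent function p : \Omega \to [1, \infty):
% modular
\rho_{p(\cdot)}(f) = \int_{\Omega} |f(x)|^{p(x)} \, dx,
% Luxemburg norm
\|f\|_{p(\cdot)} = \inf \Big\{ \lambda > 0 :
    \rho_{p(\cdot)}\big( f / \lambda \big) \le 1 \Big\},
% and the shallow (one-hidden-layer) networks used for approximation
% have the form
f_N(x) = \sum_{i=1}^{N} c_i \, \sigma( w_i \cdot x + b_i ),
```

where the boundedness condition in the abstract is the requirement that the essential supremum of p over Omega is finite; when it fails, density of shallow networks in the norm above can break down, which is what the characterization results address.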


