On L_2-approximation in Hilbert spaces using function values

05/07/2019
by David Krieg, et al.

We study L_2-approximation of functions from Hilbert spaces H in which function evaluation is a continuous linear functional, using function values as information. Under certain assumptions on H, we prove that the n-th minimal worst-case error e_n satisfies e_n ≲ a_{n/log(n)}, where a_n is the n-th minimal worst-case error for algorithms using arbitrary linear information, i.e., the n-th approximation number. Our result applies, in particular, to Sobolev spaces with dominating mixed smoothness H = H^s_mix(T^d) with s > 1/2, where we obtain e_n ≲ n^{-s} log^{sd}(n). This improves upon previous bounds whenever d > 2s+1.
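As a sanity check on the Sobolev consequence (this computation is not part of the abstract; it uses the classical asymptotics a_n ≍ n^{-s} (log n)^{s(d-1)} for the approximation numbers of the embedding H^s_mix(T^d) → L_2), one can substitute into the main estimate:

```latex
% Classical asymptotics for the mixed-smoothness embedding:
%   a_n \asymp n^{-s} (\log n)^{s(d-1)}.
% Substituting into the main bound e_n \lesssim a_{n/\log(n)}:
e_n \;\lesssim\; a_{n/\log n}
    \;\asymp\; \Bigl(\tfrac{n}{\log n}\Bigr)^{-s}
               \bigl(\log\tfrac{n}{\log n}\bigr)^{s(d-1)}
    \;\asymp\; n^{-s} (\log n)^{s} \cdot (\log n)^{s(d-1)}
    \;=\; n^{-s} (\log n)^{sd}.
```

Presumably the previous bound being improved is the sparse-grid (Smolyak) sampling estimate e_n ≲ n^{-s} (log n)^{(d-1)(s+1/2)}: the new logarithmic exponent sd is smaller exactly when sd < (d-1)(s+1/2), which simplifies to d > 2s+1, matching the abstract's claim.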
