Embedding Inequalities for Barron-type Spaces
One of the fundamental problems in deep learning theory is understanding the approximation and generalization properties of two-layer neural networks in high dimensions. To tackle this issue, researchers have introduced the Barron space ℬ_s(Ω) and the spectral Barron space ℱ_s(Ω), where the index s characterizes the smoothness of functions within these spaces and Ω ⊂ ℝ^d is the input domain. However, the relationship between these two types of Barron spaces remains unclear. In this paper, we establish continuous embeddings between these spaces as implied by the following inequality: for any δ ∈ (0,1), s ∈ ℕ^+, and f: Ω → ℝ, it holds that δ γ_Ω^{δ−s} ‖f‖_{ℱ_{s−δ}(Ω)} ≲_s ‖f‖_{ℬ_s(Ω)} ≲_s ‖f‖_{ℱ_{s+1}(Ω)}, where γ_Ω = sup_{‖v‖_2=1, x∈Ω} |v^⊤x| and, notably, the hidden constants depend solely on the value of s. Furthermore, we provide examples demonstrating that the lower bound is tight.
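For concreteness, the embedding inequality can be typeset as a standalone LaTeX snippet. The norm definitions included below are a hedged sketch using one common convention from the Barron-space literature (a Fourier-moment norm for ℱ_s and an integral ReLU representation for ℬ_s); they are assumptions added for illustration and may differ in detail from the definitions adopted in the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Embedding inequality as stated in the abstract.
\begin{equation*}
  \delta\,\gamma_\Omega^{\delta-s}\,\|f\|_{\mathcal{F}_{s-\delta}(\Omega)}
  \;\lesssim_s\; \|f\|_{\mathcal{B}_s(\Omega)}
  \;\lesssim_s\; \|f\|_{\mathcal{F}_{s+1}(\Omega)},
  \qquad
  \gamma_\Omega = \sup_{\|v\|_2 = 1,\; x \in \Omega} \lvert v^{\top} x \rvert .
\end{equation*}
% Assumed norm conventions (one common choice; not necessarily the paper's exact
% definitions): the spectral Barron norm weights the Fourier transform of an
% extension f_e of f, and the Barron norm is an infimum over two-layer ReLU
% integral representations of f on \Omega.
\begin{align*}
  \|f\|_{\mathcal{F}_s(\Omega)}
    &= \inf_{f_e|_\Omega = f} \int_{\mathbb{R}^d} (1 + \|\omega\|_2)^{s}\,
       \lvert \widehat{f_e}(\omega) \rvert \,\mathrm{d}\omega, \\
  \|f\|_{\mathcal{B}_s(\Omega)}
    &= \inf_{\rho}\, \mathbb{E}_{(a,b,c) \sim \rho}
       \bigl[\lvert a \rvert\,(\|b\|_2 + \lvert c \rvert)^{s}\bigr],
    \quad \text{s.t. } f(x) = \mathbb{E}_{(a,b,c) \sim \rho}
       \bigl[a\,\sigma(b^{\top} x + c)\bigr] \text{ on } \Omega,\ \sigma = \mathrm{ReLU}.
\end{align*}
\end{document}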