Sparse-grid sampling recovery and deep ReLU neural networks in high-dimensional approximation

07/17/2020, by Dinh Dũng et al.

We investigate the approximation of functions from the Hölder-Zygmund space of mixed smoothness H^α_∞(𝕀^d), defined on the d-dimensional unit cube 𝕀^d:=[0,1]^d, by linear algorithms of sparse-grid sampling recovery and by deep ReLU (Rectified Linear Unit) neural networks when the dimension d may be very large. The approximation error is measured in the norm of the isotropic Sobolev space W^{1,p}_0(𝕀^d). The optimality of this sampling recovery is studied in terms of sampling n-widths. Optimal linear sampling algorithms are constructed on sparse grids using the piecewise linear B-spline interpolation representation. We prove tight dimension-dependent bounds on the sampling n-widths, explicit in d and n. Building on these sampling-recovery results, we investigate the expressive power of deep ReLU neural networks for approximating functions in the Hölder-Zygmund space. Namely, for any function f ∈ H^α_∞(𝕀^d), we explicitly construct a deep ReLU neural network whose output approximates f in the W^{1,p}_0(𝕀^d)-norm with a prescribed accuracy ε, and we prove tight dimension-dependent bounds on the computational complexity of this approximation, characterized by the number of weights and the depth of the network, explicit in d and ε. Moreover, we show that under a certain restriction the curse of dimensionality can be avoided in approximation by sparse-grid sampling recovery and by deep ReLU neural networks.
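To illustrate the link between piecewise linear B-spline interpolation and ReLU networks that the abstract relies on, here is a minimal one-dimensional sketch (not the paper's actual sparse-grid construction): the hat B-spline is exactly a sum of three ReLU units, so the piecewise linear interpolant of a function on a uniform grid is the output of a one-hidden-layer ReLU network. The function names `hat` and `relu_interpolant` are illustrative, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x, center, h):
    # Hat B-spline of width h centered at `center`, written exactly
    # as a combination of three ReLU units:
    t = (x - center) / h
    return relu(t + 1) - 2 * relu(t) + relu(t - 1)

def relu_interpolant(f, n):
    # Piecewise linear interpolant of f on the uniform grid
    # {k/n : k = 0, ..., n} over [0, 1]; this is a one-hidden-layer
    # ReLU network with 3*(n+1) units and nodal values as coefficients.
    h = 1.0 / n
    centers = np.arange(n + 1) * h
    coeffs = np.array([f(c) for c in centers])  # nodal values f(k/n)
    def g(x):
        return sum(c * hat(x, ck, h) for c, ck in zip(coeffs, centers))
    return g
```

Because the nodal hat basis reproduces piecewise linear functions exactly, `relu_interpolant(lambda x: x, 4)` recovers the identity on [0, 1]; for smooth f the interpolation error decays like h^2 in the uniform norm. The paper's construction extends this idea to mixed-smoothness classes on sparse grids in d dimensions, with error measured in the W^{1,p}_0-norm.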


