Computation complexity of deep ReLU neural networks in high-dimensional approximation
The purpose of the present paper is to study the computation complexity of deep ReLU neural networks for approximating functions in the Hölder-Nikol'skii spaces of mixed smoothness H_∞^α(𝕀^d) on the unit cube 𝕀^d := [0,1]^d. In this context, for any function f ∈ H_∞^α(𝕀^d), we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose outputs approximate f with a prescribed accuracy ε, and we prove dimension-dependent bounds for the computation complexity of this approximation, characterized by the size and depth of the network, explicitly in both d and ε. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
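To make the underlying mechanism concrete, here is a minimal univariate sketch, not the paper's construction (which handles mixed smoothness in d dimensions): a one-hidden-layer ReLU network that realizes the piecewise-linear interpolant of a target function on a uniform grid of [0,1]. The target f(x) = √x, a stand-in with Hölder smoothness α = 1/2, is our assumption for illustration; the printout shows the sup-norm error decaying roughly like n^(−α) as the network size n grows, the kind of size-versus-accuracy trade-off the paper quantifies explicitly in d and ε.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net_interpolant(f, n):
    """One-hidden-layer ReLU network realizing the piecewise-linear
    interpolant of f on a uniform grid of [0, 1] with n pieces.
    Returns the network as a callable and its hidden-layer width."""
    knots = np.linspace(0.0, 1.0, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)      # slope on each piece
    # One ReLU unit per piece: coefficients are the slope changes at
    # the interior knots (plus the initial slope at x = 0).
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    biases = -knots[:-1]                         # unit k computes ReLU(x - knot_k)

    def g(x):
        x = np.asarray(x, dtype=float)
        return vals[0] + relu(x[..., None] + biases) @ coeffs

    return g, len(coeffs)

# Stand-in target (an assumption, not from the paper): Hölder smoothness 1/2.
f = np.sqrt
xs = np.linspace(0.0, 1.0, 10_001)
for n in (8, 64, 512):
    g, width = relu_net_interpolant(f, n)
    err = np.max(np.abs(f(xs) - g(xs)))
    print(f"hidden units = {width:4d}, sup-norm error = {err:.3e}")
```

This sketch keeps the depth fixed and grows only the width; the adaptive networks of the paper instead place their approximation effort where f demands it, which is the source of the complexity advantage stated above.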