Computation complexity of deep ReLU neural networks in high-dimensional approximation

03/01/2021 · by Dinh Dũng, et al.

The purpose of the present paper is to study the computation complexity of deep ReLU neural networks for approximating functions in the Hölder-Nikol'skii spaces of mixed smoothness H_∞^α(𝕀^d) on the unit cube 𝕀^d:=[0,1]^d. In this context, for any function f∈ H_∞^α(𝕀^d), we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose output approximates f with a prescribed accuracy ε, and we prove dimension-dependent bounds for the computation complexity of this approximation, characterized by the size and the depth of the network, explicitly in d and ε. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
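The paper's explicit constructions are not reproduced in this abstract. As a minimal illustrative sketch only (not the authors' construction), the following shows the standard building block behind such results in dimension d = 1: a one-hidden-layer ReLU network that exactly realizes the piecewise-linear interpolant of a Hölder function at n equispaced knots, so its sup-norm error shrinks as n grows. All function and variable names here are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_relu_net(f, n):
    """One-hidden-layer ReLU network realizing the piecewise-linear
    interpolant of f at the n+1 equispaced knots k/n on [0,1]."""
    knots = np.linspace(0.0, 1.0, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) * n  # slope on each subinterval of width 1/n
    # g(x) = vals[0] + slopes[0]*relu(x - 0)
    #        + sum_{k>=1} (slopes[k] - slopes[k-1]) * relu(x - k/n)
    coeffs = np.concatenate([[slopes[0]], np.diff(slopes)])

    def net(x):
        x = np.asarray(x, dtype=float)
        hidden = relu(x[..., None] - knots[:-1])  # n ReLU units
        return vals[0] + hidden @ coeffs

    return net

# f(x) = sqrt(x) is Hölder continuous with exponent alpha = 1/2 on [0,1]
f = lambda x: np.sqrt(x)
net = pwl_relu_net(f, 64)

xs = np.linspace(0.0, 1.0, 2001)
sup_err = np.max(np.abs(net(xs) - f(xs)))
```

The network has n hidden units and depth 2; pushing the sup-norm error below a prescribed ε by refining the knot grid is the simplest instance of trading network size against accuracy, which the paper quantifies explicitly in d and ε for mixed-smoothness classes.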


