Compositional Sparsity, Approximation Classes, and Parametric Transport Equations
Approximating functions of a large number of variables poses particular challenges, often subsumed under the term "Curse of Dimensionality". Unless the approximated function exhibits a very high level of smoothness, the Curse can be avoided only by exploiting some typically hidden structural sparsity. In this paper we propose a general framework for model classes of functions in high dimensions, based on suitable notions of compositional sparsity that quantify approximability by highly nonlinear expressions such as deep neural networks. The relevance of these concepts is demonstrated for solution manifolds of parametric transport equations, which are known not to enjoy the high-order regularity of parameter-to-solution maps that helps to avoid the Curse of Dimensionality in other model scenarios. Compositional sparsity is shown to serve as the key mechanism for proving that sparsity of problem data is inherited, in a quantifiable way, by the solution manifold. In particular, one obtains convergence rates for deep neural network realizations showing that the Curse of Dimensionality is indeed avoided.
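The abstract leaves "compositional sparsity" informal; as a minimal illustrative sketch (the notation $f$, $g_\ell$, $d_\ell$, $N$, $r$ is chosen here for exposition and is not taken from the paper), a function of $d$ variables is compositionally sparse if it factors through low-dimensional components,
\[
  f(x) \;=\; g_L \circ g_{L-1} \circ \cdots \circ g_1(x),
  \qquad g_\ell : \mathbb{R}^{d_{\ell-1}} \to \mathbb{R}^{d_\ell},
  \quad d_\ell \ll d \ \ (d_0 = d),
\]
so that a deep neural network realization $f_N$ with $N$ parameters may achieve an error bound of the form $\|f - f_N\| \le C\, N^{-r}$, with a rate $r > 0$ governed by the complexity of the components $g_\ell$ rather than by the ambient dimension $d$.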