Quantitative Rates and Fundamental Obstructions to Non-Euclidean Universal Approximation with Deep Narrow Feed-Forward Networks

01/13/2021
by   Anastasis Kratsios, et al.

By incorporating structured pairs of non-trainable input and output layers, the universal approximation property of feed-forward networks has recently been extended to a broad range of non-Euclidean input spaces X and output spaces Y. We quantify the number of narrow layers required for these "deep geometric feed-forward neural networks" (DGNs) to approximate any continuous function in C(X,Y), uniformly on compact sets. The DGN architecture is then extended to accommodate complete Riemannian manifolds, where the input and output layers are only defined locally, and we obtain local analogs of our results. In this case, we find that the global and local universal approximation guarantees can coincide only when approximating null-homotopic functions. Consequently, we show that if Y is a compact Riemannian manifold, then there exists a function that cannot be uniformly approximated on large compact subsets of X. Nevertheless, we obtain lower bounds on the maximum diameter of any geodesic ball in X on which our local universal approximation results hold. Applying our results, we build universal approximators between spaces of non-degenerate Gaussian measures. We also obtain a quantitative version of the universal approximation theorem for classical deep narrow feed-forward networks with general activation functions.
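The DGN construction summarized above can be sketched concretely. Below is a minimal, hypothetical Python illustration, assuming the architecture pairs a fixed (non-trainable) input layer phi: X -> R^d with a fixed (non-trainable) output layer rho: R^k -> Y around a deep narrow ReLU feed-forward core. The example spaces (2x2 symmetric positive-definite matrices for X, the unit sphere for Y), the charts, and all names (phi_spd, rho_sphere, Dense, DGN) are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class Dense:
    """One narrow trainable layer: affine map followed by ReLU."""
    def __init__(self, d_in, d_out, rng):
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        self.b = np.zeros(d_out)

    def __call__(self, z):
        return relu(self.W @ z + self.b)

class DGN:
    """Non-trainable input layer -> narrow feed-forward core -> non-trainable output layer."""
    def __init__(self, phi, rho, widths, rng):
        self.phi, self.rho = phi, rho
        self.layers = [Dense(a, b, rng) for a, b in zip(widths[:-1], widths[1:])]

    def __call__(self, x):
        z = self.phi(x)            # map the non-Euclidean input into R^d
        for layer in self.layers:  # trainable Euclidean core: depth grows, width stays narrow
            z = layer(z)
        return self.rho(z)         # map back onto the non-Euclidean output space Y

# Example choices (assumptions): X = SPD(2), Y = unit sphere S^2.
# Input layer: matrix logarithm, a global chart on SPD(2), flattened to R^4.
# Output layer: projection onto S^2, a convenient stand-in for a local exponential chart.

def phi_spd(x):
    # matrix logarithm of an SPD matrix via its eigendecomposition
    w, v = np.linalg.eigh(x)
    return (v @ np.diag(np.log(w)) @ v.T).reshape(-1)

def rho_sphere(z):
    # project the first three coordinates onto S^2 (small shift avoids the degenerate zero vector)
    u = z[:3] + np.array([0.0, 0.0, 1e-6])
    return u / np.linalg.norm(u)

rng = np.random.default_rng(0)
net = DGN(phi_spd, rho_sphere, widths=[4, 8, 8, 8, 3], rng=rng)

x = np.array([[2.0, 0.3], [0.3, 1.0]])  # a point of SPD(2)
y = net(x)
print(y, np.linalg.norm(y))             # output lies on the unit sphere
```

In this sketch only the Dense layers would be trained; the paper's quantitative results concern how many such narrow layers are needed, and its local results correspond to the case where phi and rho are only defined on geodesic balls rather than globally.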
