
Quantitative Rates and Fundamental Obstructions to Non-Euclidean Universal Approximation with Deep Narrow Feed-Forward Networks

01/13/2021
by Anastasis Kratsios, et al.

By incorporating structured pairs of non-trainable input and output layers, the universal approximation property of feed-forward networks has recently been extended to a broad range of non-Euclidean input spaces X and output spaces Y. We quantify the number of narrow layers required for these "deep geometric feed-forward neural networks" (DGNs) to approximate any continuous function in C(X,Y), uniformly on compacts. The DGN architecture is then extended to accommodate complete Riemannian manifolds, where the input and output layers are defined only locally, and we obtain local analogs of our results. In this case, we find that the global and local universal approximation guarantees can coincide only when approximating null-homotopic functions. Consequently, we show that if Y is a compact Riemannian manifold, then there exists a function that cannot be uniformly approximated on large compact subsets of X. Nevertheless, we obtain lower bounds on the maximum diameter of any geodesic ball in X on which our local universal approximation results hold. Applying our results, we build universal approximators between spaces of non-degenerate Gaussian measures. We also obtain a quantitative version of the universal approximation theorem for classical deep narrow feed-forward networks with general activation functions.
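
To make the architecture concrete, here is a minimal, hypothetical sketch (not the authors' code) of the DGN pattern: a fixed, non-trainable input layer and output layer wrap a trainable deep narrow feed-forward core. The specific chart choices (matrix logarithm/exponential on covariance matrices, loosely echoing the Gaussian-measure example), the names phi, rho, and NarrowCore, and the chosen width and depth are assumptions made for illustration only, not values taken from the paper.

```python
# Illustrative sketch of a "deep geometric feed-forward network" (DGN):
# non-trainable input/output layers adapt a trainable deep narrow core to
# non-Euclidean data. Chart choices below are assumptions, not the paper's.
import torch
import torch.nn as nn

def phi(cov: torch.Tensor) -> torch.Tensor:
    """Non-trainable input layer: map an SPD covariance matrix to flat
    Euclidean features via the matrix logarithm (one possible chart)."""
    eigvals, eigvecs = torch.linalg.eigh(cov)
    log_cov = eigvecs @ torch.diag_embed(torch.log(eigvals)) @ eigvecs.transpose(-1, -2)
    return log_cov.flatten(start_dim=-2)

def rho(feat: torch.Tensor, dim: int) -> torch.Tensor:
    """Non-trainable output layer: map Euclidean features back to an SPD
    matrix by symmetrizing and applying the matrix exponential."""
    m = feat.reshape(*feat.shape[:-1], dim, dim)
    sym = 0.5 * (m + m.transpose(-1, -2))
    return torch.matrix_exp(sym)

class NarrowCore(nn.Module):
    """Trainable deep *narrow* feed-forward core: many layers, small width."""
    def __init__(self, in_dim: int, out_dim: int, width: int = 8, depth: int = 16):
        super().__init__()
        dims = [in_dim] + [width] * depth + [out_dim]
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers[:-1]:
            x = torch.tanh(layer(x))
        return self.layers[-1](x)

# Usage: approximate a map between 2x2 covariance matrices.
dim = 2
core = NarrowCore(in_dim=dim * dim, out_dim=dim * dim)
cov = torch.eye(dim) + 0.1 * torch.ones(dim, dim)   # an SPD input
out = rho(core(phi(cov)), dim)                       # SPD output
```

Only the NarrowCore parameters are trained; phi and rho stay fixed, which is what lets the approximation question reduce to the Euclidean deep narrow case in the quantitative results above.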

