Non-Euclidean Universal Approximation

06/03/2020
by Anastasis Kratsios, et al.

Modifications to a neural network's input and output layers are often required to accommodate the specificities of most practical learning tasks. However, the impact of such changes on an architecture's approximation capabilities is largely not understood. We present general conditions describing feature and readout maps that preserve an architecture's ability to approximate any continuous function uniformly on compact sets. As an application, we show that if an architecture is capable of universal approximation, then modifying its final layer to produce binary values yields a new architecture capable of deterministically approximating any classifier. In particular, we obtain guarantees for deep CNNs, deep feed-forward networks, and universal Gaussian processes. Our results also have consequences for geometric deep learning. Specifically, when the input and output spaces are Hadamard manifolds, we obtain geometrically meaningful feature and readout maps satisfying our criteria. Consequently, commonly used non-Euclidean regression models between spaces of symmetric positive definite matrices are extended to universal DNNs. The same result shows that the hyperbolic feed-forward networks used for hierarchical learning are universal. Finally, our result implies that the common practice of randomizing all but the last two layers of a DNN produces, with probability one, a universal family of functions.
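The construction can be illustrated concretely in the SPD-matrix case mentioned above. The sketch below is not the authors' code and all weights are untrained placeholders: it uses the matrix logarithm as a feature map from SPD matrices into the Euclidean space of symmetric matrices, an ordinary feed-forward network on that space, and the matrix exponential as a readout map back onto the SPD manifold, so the composite model always outputs symmetric positive definite matrices.

```python
import numpy as np

def spd_log(X):
    """Feature map: matrix logarithm, sending an SPD matrix to a symmetric matrix."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    """Readout map: matrix exponential, sending a symmetric matrix back to an SPD matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def sym_to_vec(S):
    """Flatten the upper triangle of a symmetric matrix into a Euclidean vector."""
    return S[np.triu_indices(S.shape[0])]

def vec_to_sym(v, d):
    """Rebuild a symmetric d x d matrix from its upper-triangular entries."""
    S = np.zeros((d, d))
    S[np.triu_indices(d)] = v
    return S + np.triu(S, 1).T

def feedforward(x, weights, biases):
    """Ordinary ReLU feed-forward network acting on Euclidean feature vectors."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)
    return weights[-1] @ x + biases[-1]

def spd_network(X, weights, biases):
    """Feature map -> Euclidean ffNN -> readout map; input and output are SPD matrices."""
    d = X.shape[0]
    v = sym_to_vec(spd_log(X))             # feature map into Euclidean space
    out = feedforward(v, weights, biases)  # universal Euclidean architecture
    return spd_exp(vec_to_sym(out, d))     # readout map onto the SPD manifold

# Usage with random placeholder weights on a 3 x 3 SPD input.
rng = np.random.default_rng(0)
d = 3
k = d * (d + 1) // 2                       # dimension of the Euclidean feature space
weights = [0.1 * rng.standard_normal((8, k)), 0.1 * rng.standard_normal((k, 8))]
biases = [np.zeros(8), np.zeros(k)]
A = rng.standard_normal((d, d))
X = A @ A.T + d * np.eye(d)                # a random SPD test input
print(np.linalg.eigvalsh(spd_network(X, weights, biases)))  # all eigenvalues positive
```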

