Universal Approximation Theorems

10/08/2019
by   Anastasis Kratsios, et al.

The universal approximation theorem established the density of specific families of neural networks in the space of continuous functions and in certain Bochner spaces defined between any two Euclidean spaces. We extend and refine this result by proving that there exist dense neural network architectures on a larger class of function spaces, and that these architectures may be written down using only a small number of functions. We prove that, upon randomly selecting the neural network architecture's activation function appropriately, we may still obtain a dense set of neural networks with positive probability. This result is used to overcome the difficulty of appropriately selecting an activation function in more exotic architectures. Conversely, we show that given any neural network architecture on a set of continuous functions between two T0 topological spaces, there exists a unique finest topology on that set of functions which makes the neural network architecture into a universal approximator. Several examples are considered throughout the paper.
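The classical universal approximation phenomenon the abstract builds on can be illustrated numerically. The sketch below (not the paper's construction) fits a one-hidden-layer ReLU network with randomly drawn hidden weights to sin(x) on a grid, solving only for the output weights by least squares; the width, seed, and target function are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of universal approximation with random ReLU features:
# only the output layer is trained (by least squares), yet the network
# approximates sin closely on [-pi, pi]. Width and seed are arbitrary.
rng = np.random.default_rng(0)
width = 200

x = np.linspace(-np.pi, np.pi, 400)[:, None]   # input grid
y = np.sin(x).ravel()                          # target function values

W = rng.normal(size=(1, width))                # random input weights
b = rng.uniform(-np.pi, np.pi, size=width)     # random biases
H = np.maximum(x @ W + b, 0.0)                 # ReLU hidden-layer activations

c, *_ = np.linalg.lstsq(H, y, rcond=None)      # fit output weights only
err = np.max(np.abs(H @ c - y))                # sup-norm error on the grid
print(f"max error on grid: {err:.4f}")
```

Increasing the width drives the error down, in line with density of such families in the space of continuous functions on a compact set.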



Related research

04/04/2019 · On the Approximation Properties of Neural Networks
We prove two new results concerning the approximation properties of neur...

05/26/2023 · Universal Approximation and the Topological Neural Network
A topological neural network (TNN), which takes data from a Tychonoff to...

12/18/2020 · Universal Approximation in Dropout Neural Networks
We prove two universal approximation theorems for a range of dropout neu...

06/03/2019 · Approximation capability of neural networks on spaces of probability measures and tree-structured domains
This paper extends the proof of density of neural networks in the space ...

02/19/2021 · Principled Simplicial Neural Networks for Trajectory Prediction
We consider the construction of neural network architectures for data on...

09/30/2018 · Deep, Skinny Neural Networks are not Universal Approximators
In order to choose a neural network architecture that will be effective ...

12/21/2018 · On the Relative Expressiveness of Bayesian and Neural Networks
A neural network computes a function. A central property of neural netwo...
