
Universal Approximation Theorems

by Anastasis Kratsios, et al.

The universal approximation theorem established the density of specific families of neural networks in the space of continuous functions, and in certain Bochner spaces, defined between any two Euclidean spaces. We extend and refine this result by proving that dense neural network architectures exist on a larger class of function spaces and that these architectures can be written down using only a small number of functions. We prove that, when the architecture's activation function is appropriately randomly selected, the resulting set of neural networks is still dense with positive probability. This result is used to overcome the difficulty of appropriately selecting an activation function in more exotic architectures. Conversely, we show that given any neural network architecture on a set of continuous functions between two T0 topological spaces, there exists a unique finest topology on that set of functions which makes the neural network architecture into a universal approximator. Several examples are considered throughout the paper.
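The classical density statement referenced above can be illustrated numerically. The following is a minimal sketch, not the paper's construction: a one-hidden-layer network whose inner weights and biases are drawn at random (echoing the randomly-selected-architecture theme) and whose output layer is fit by least squares, approximating a continuous target on [0, 1]. All names, the tanh activation, and the sampling scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Continuous function on [0, 1] to be approximated.
    return np.sin(2 * np.pi * x)

def random_feature_fit(n_hidden, n_train=200):
    """Fit f(x) = sum_k c_k * tanh(w_k * x + b_k) with random (w, b).

    Only the output weights c are trained (ordinary least squares);
    the inner parameters stay at their random draws.
    """
    x = np.linspace(0.0, 1.0, n_train)
    w = rng.normal(scale=10.0, size=n_hidden)        # random inner weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)       # random biases
    phi = np.tanh(np.outer(x, w) + b)                 # hidden features, (n_train, n_hidden)
    c, *_ = np.linalg.lstsq(phi, target(x), rcond=None)
    # Sup-norm error of the fitted network on the training grid.
    return np.max(np.abs(phi @ c - target(x)))

err_small = random_feature_fit(5)     # few hidden units: coarse approximation
err_large = random_feature_fit(200)   # many hidden units: error shrinks
print(err_small, err_large)
```

Widening the hidden layer drives the uniform error on the grid down, which is the empirical face of the density claim; the theorems in the paper make this precise for far more general function spaces.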




On the Approximation Properties of Neural Networks

We prove two new results concerning the approximation properties of neur...

Universal Approximation and the Topological Neural Network

A topological neural network (TNN), which takes data from a Tychonoff to...

Universal Approximation in Dropout Neural Networks

We prove two universal approximation theorems for a range of dropout neu...

Approximation capability of neural networks on spaces of probability measures and tree-structured domains

This paper extends the proof of density of neural networks in the space ...

Principled Simplicial Neural Networks for Trajectory Prediction

We consider the construction of neural network architectures for data on...

Deep, Skinny Neural Networks are not Universal Approximators

In order to choose a neural network architecture that will be effective ...

On the Relative Expressiveness of Bayesian and Neural Networks

A neural network computes a function. A central property of neural netwo...