
Approximation capability of neural networks on spaces of probability measures and tree-structured domains

06/03/2019
by Tomáš Pevný, et al.

This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures. In doing so, the work parallels more than a decade-old results on the mean-map embedding of probability measures in reproducing kernel Hilbert spaces. The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions. The result is then extended to Cartesian products, yielding a universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer. This has important practical implications, as it makes it possible to automatically create neural network architectures for processing structured data (in the spirit of AutoML), as demonstrated by an accompanying library for the JSON format.
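The construction the theorem justifies can be sketched concretely: each instance of a bag is embedded by a network phi, the embeddings are mean-pooled (an empirical analogue of the mean-map embedding of the bag's underlying measure), and a second network rho maps the pooled vector to the output; the Cartesian-product extension handles records that mix plain numeric fields with bags, which is how tree-structured (JSON-like) data is processed. The following is a minimal Python/NumPy sketch under assumed layer sizes and helper names (mlp, phi, rho, bag_embed); it is illustrative only and is not the paper's accompanying library.

    # Minimal sketch of the multi-instance and tree-structured constructions.
    # All names, layer sizes, and the toy JSON-like record are illustrative
    # assumptions, not the paper's accompanying library.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(d_in, d_hidden, d_out):
        """A random two-layer tanh network, returned as a callable x -> y."""
        W1, b1 = rng.normal(size=(d_hidden, d_in)), np.zeros(d_hidden)
        W2, b2 = rng.normal(size=(d_out, d_hidden)), np.zeros(d_out)
        return lambda x: W2 @ np.tanh(W1 @ x + b1) + b2

    # Multi-instance (bag) model: embed each instance with phi, mean-pool
    # (an empirical mean-map of the bag's measure), then apply a network on top.
    phi = mlp(5, 16, 8)                  # per-instance embedding

    def bag_embed(bag):
        return np.mean([phi(x) for x in bag], axis=0)

    # Cartesian-product / tree-structured extension: a record holding a plain
    # feature vector and a bag is handled by concatenating the sub-embeddings.
    rho = mlp(8 + 3, 16, 1)              # acts on [bag embedding ; record features]

    record = {
        "features": rng.normal(size=3),                      # plain numeric leaf
        "events": [rng.normal(size=5) for _ in range(12)],   # bag of instances
    }
    z = np.concatenate([bag_embed(record["events"]), record["features"]])
    print(rho(z))                        # scalar prediction for the whole record

Deeper nestings (bags of records, records of bags) are handled by composing the same two building blocks recursively, which is what allows an architecture to be derived mechanically from a JSON schema.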

