A function space analysis of finite neural networks with insights from sampling theory

04/15/2020
by   Raja Giryes, et al.

This work suggests using sampling theory to analyze the function space represented by finite neural networks. First, we show, under the assumption of a finite input domain, which is the common case when training neural networks, that the function space generated by multi-layer networks with non-expansive activation functions is smooth. This extends previous works, which established such results only for infinite-width ReLU networks. Then, under the assumption that the input is band-limited, we provide novel error bounds for univariate neural networks. We analyze both deterministic uniform and random sampling, showing the advantage of the former.
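The following is a minimal numerical sketch of the uniform-versus-random sampling comparison mentioned in the abstract, not the paper's actual construction or bounds: it evaluates a small univariate ReLU network (ReLU is non-expansive) on a finite input domain, reconstructs its output from uniform and from random samples by piecewise-linear interpolation, and compares the maximum reconstruction errors. The architecture, widths, sample count, and interpolation scheme are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's proof): compare reconstruction error of a
# univariate ReLU network's output from uniform vs. random samples on [0, 1].
rng = np.random.default_rng(0)

def relu_net(x, weights, biases):
    """Evaluate a small fully connected ReLU network on scalar inputs x of shape (n,)."""
    h = x[:, None]
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)  # ReLU is non-expansive (1-Lipschitz)
    return (h @ weights[-1] + biases[-1]).ravel()

# Random univariate network: 1 -> 32 -> 32 -> 1 (arbitrary illustrative choice)
dims = [1, 32, 32, 1]
weights = [rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_out))
           for d_in, d_out in zip(dims[:-1], dims[1:])]
biases = [rng.normal(scale=0.1, size=(d_out,)) for d_out in dims[1:]]

x_dense = np.linspace(0.0, 1.0, 10_000)  # dense grid on the finite input domain
f_dense = relu_net(x_dense, weights, biases)

n_samples = 64
x_uniform = np.linspace(0.0, 1.0, n_samples)           # deterministic uniform sampling
x_random = np.sort(rng.uniform(0.0, 1.0, n_samples))   # random sampling

def recon_error(x_s):
    """Max error of piecewise-linear reconstruction from the samples x_s."""
    f_s = relu_net(x_s, weights, biases)
    f_hat = np.interp(x_dense, x_s, f_s)
    return np.max(np.abs(f_hat - f_dense))

print("uniform sampling max error:", recon_error(x_uniform))
print("random  sampling max error:", recon_error(x_random))
```

In typical runs the uniform grid yields a smaller worst-case error than random sampling of the same budget, since random samples can leave large gaps; this only illustrates the qualitative phenomenon, not the error bounds derived in the paper.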


