Understanding neural networks with reproducing kernel Banach spaces

by Francesca Bartolucci, et al.

Characterizing the function spaces corresponding to neural networks provides a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one-hidden-layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [34, 29, 30].
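The integral representation mentioned in the abstract can be made concrete in the finite-width case: a one-hidden-layer ReLU network is the special case where the representing measure is a finite sum of weighted Diracs on the (weight, bias) parameter space, and the total variation norm of that discrete measure is simply the sum of the absolute outer weights. The following is a minimal numerical sketch of that special case, not the paper's construction; all function names and the random parameter choices are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def finite_width_network(x, weights, biases, outer):
    # f(x) = sum_i a_i * relu(<w_i, x> + b_i), i.e. integration of
    # relu(<w, x> + b) against the discrete measure
    # mu = sum_i a_i * delta_{(w_i, b_i)}.
    return sum(a * relu(w @ x + b) for w, b, a in zip(weights, biases, outer))

def tv_norm(outer):
    # Total variation norm of the discrete measure: ||mu||_TV = sum_i |a_i|.
    return float(np.sum(np.abs(outer)))

rng = np.random.default_rng(0)
weights = rng.normal(size=(5, 3))            # 5 hidden units, 3 input dims
biases = rng.normal(size=5)
outer = np.array([1.0, -2.0, 0.5, 0.0, 3.0]) # signed outer weights a_i

x = np.ones(3)
print(finite_width_network(x, weights, biases, outer))
print(tv_norm(outer))  # 6.5
```

In the infinite-width setting studied in the paper, the sum over units is replaced by an integral against a bounded real measure, and the network's norm is (an infimum of) this total variation quantity over all measures representing the same function.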


Related papers:

- Reproducing Kernel Banach Spaces with the l1 Norm: "Targeting at sparse learning, we construct Banach spaces B of functions ..."
- Uniform Generalization Bounds for Overparameterized Neural Networks: "An interesting observation in artificial neural networks is their favora..."
- Transformers are Deep Infinite-Dimensional Non-Mercer Binary Kernel Machines: "Despite their ubiquity in core AI fields like natural language processin..."
- Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations: "In this paper, we study deep signal representations that are invariant t..."
- Neural Networks, Ridge Splines, and TV Regularization in the Radon Domain: "We develop a variational framework to understand the properties of the f..."
- Native Banach spaces for splines and variational inverse problems: "We propose a systematic construction of native Banach spaces for general..."