Understanding neural networks with reproducing kernel Banach spaces

09/20/2021
by Francesca Bartolucci, et al.

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one-hidden-layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [34, 29, 30].
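To make the abstract's integral representation concrete, here is a minimal LaTeX sketch of the kind of infinite-width one-hidden-layer model involved; the parameter domain, the activation sigma, and the measure mu are illustrative assumptions, not the paper's exact definitions.

% Sketch (assumed normalization): an infinite-width one-hidden-layer
% network aggregates ReLU neurons indexed by (w, b), weighted by a
% signed measure \mu rather than a finite list of coefficients.
\[
  f_\mu(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}}
    \sigma\bigl(\langle w, x \rangle - b\bigr)\, d\mu(w, b),
  \qquad \sigma(t) = \max(t, 0).
\]
% The associated norm is taken as the smallest total variation over
% all measures representing f, matching the abstract's TV norm:
\[
  \|f\| = \inf \bigl\{ \|\mu\|_{\mathrm{TV}} : f = f_\mu \bigr\}.
\]
% A width-K network corresponds to the discrete measure
% \mu = \sum_{k=1}^{K} c_k \, \delta_{(w_k, b_k)}, whose total
% variation is \sum_{k=1}^{K} |c_k|.

Under this discretization the total variation norm reduces to the l1 norm of the outer-layer coefficients, which is one concrete way to read the claim that such spaces include networks of possibly infinite width.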


Related research

01/23/2011
Reproducing Kernel Banach Spaces with the l1 Norm
Targeting at sparse learning, we construct Banach spaces B of functions ...

09/13/2021
Uniform Generalization Bounds for Overparameterized Neural Networks
An interesting observation in artificial neural networks is their favora...

06/02/2021
Transformers are Deep Infinite-Dimensional Non-Mercer Binary Kernel Machines
Despite their ubiquity in core AI fields like natural language processin...

06/09/2017
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations
In this paper, we study deep signal representations that are invariant t...

06/10/2020
Neural Networks, Ridge Splines, and TV Regularization in the Radon Domain
We develop a variational framework to understand the properties of the f...

04/24/2019
Native Banach spaces for splines and variational inverse problems
We propose a systematic construction of native Banach spaces for general...