The Kolmogorov-Arnold representation theorem revisited

07/31/2020
by Johannes Schmidt-Hieber

There is a longstanding debate about whether the Kolmogorov-Arnold representation theorem can explain the use of more than one hidden layer in neural networks. The Kolmogorov-Arnold representation decomposes a multivariate function into an interior and an outer function, and therefore indeed has a structure similar to that of a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can vary wildly even if the represented function is smooth. We derive modifications of the Kolmogorov-Arnold representation that transfer smoothness properties of the represented function to the outer function and that can be well approximated by ReLU networks. It appears that, rather than two hidden layers, a more natural interpretation of the Kolmogorov-Arnold representation is that of a deep neural network in which most of the layers are needed to approximate the interior function.
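For reference, the classical Kolmogorov-Arnold superposition theorem states that every continuous function f on [0,1]^d admits the representation below; the notation here is the standard one (Φ_q for the outer functions, ψ_{q,p} for the interior ones) and may differ from the paper's.

\[
  f(x_1, \dots, x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\left( \sum_{p=1}^{d} \psi_{q,p}(x_p) \right),
\]

with continuous univariate functions Φ_q and ψ_{q,p}. Read as a network, the interior sums act as a first hidden layer and the Φ_q as a second, which is the two-hidden-layer analogy the abstract refers to.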


Related research

06/19/2020 · No one-hidden-layer neural network can represent multivariable functions
In a function approximation with a neural network, an input dataset is m...

06/27/2019 · Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
We prove a theorem concerning the approximation of multivariate continuo...

06/10/2020 · Representation formulas and pointwise properties for Barron functions
We study the natural function space for infinitely wide two-layer neural...

10/28/2019 · On approximating ∇ f with neural networks
Consider a feedforward neural network ψ: R^d→R^d such that ψ≈∇ f, where ...

02/02/2022 · The Role of Linear Layers in Nonlinear Interpolating Networks
This paper explores the implicit bias of overparameterized neural networ...

05/24/2023 · Linear Neural Network Layers Promote Learning Single- and Multiple-Index Models
This paper explores the implicit bias of overparameterized neural networ...

09/02/2022 · Normalization effects on deep neural networks
We study the effect of normalization on the layers of deep neural networ...
