From Kernel Methods to Neural Networks: A Unifying Variational Formulation

06/29/2022
by Michael Unser, et al.

The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms ‖·‖ = ‖·‖_{L_p} with p ∈ (1,2]. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
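To make the parametric form described above concrete, the following is a minimal sketch (not taken from the paper) of a scalar-valued two-layer network with a ReLU activation, the case recovered when the regularization operator is the Laplacian, plus an affine term standing in for the bias and skip connections. All function names, shapes, and parameter values here are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of the two-layer parametric form the abstract describes:
# f(x) = sum_k v_k * sigma(w_k . x - b_k) + a . x + c,
# with sigma = ReLU when the regularization operator is the Laplacian.
# Shapes and names are illustrative assumptions.
import numpy as np

def relu(z):
    # Activation associated with the Laplacian operator in this sketch.
    return np.maximum(z, 0.0)

def shallow_network(x, inner_weights, biases, outer_weights, a, c, activation=relu):
    """Evaluate a scalar-valued two-layer network with an affine (skip + bias) term.

    x             : (d,) input point
    inner_weights : (K, d) inner weights w_k
    biases        : (K,) offsets b_k
    outer_weights : (K,) coefficients v_k
    a, c          : affine term (skip connection and bias)
    """
    pre_activations = inner_weights @ x - biases   # (K,)
    neurons = activation(pre_activations)          # (K,)
    return outer_weights @ neurons + a @ x + c

# Example usage with random illustrative parameters.
rng = np.random.default_rng(0)
d, K = 3, 8
x = rng.standard_normal(d)
f_x = shallow_network(
    x,
    inner_weights=rng.standard_normal((K, d)),
    biases=rng.standard_normal(K),
    outer_weights=rng.standard_normal(K),
    a=rng.standard_normal(d),
    c=0.5,
)
print(float(f_x))
```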


Related research

- 04/10/2023: Approximation of Nonlinear Functionals Using Deep ReLU Networks. "In recent years, functional neural networks have been proposed and studi..."
- 06/28/2021: Sharp Lower Bounds on the Approximation Rate of Shallow Neural Networks. "We consider the approximation rates of shallow neural networks with resp..."
- 04/20/2018: Understanding Regularization to Visualize Convolutional Neural Networks. "Variational methods for revealing visual concepts learned by convolution..."
- 06/18/2020: On Sparsity in Overparametrised Shallow ReLU Networks. "The analysis of neural network training beyond their linearization regim..."
- 06/29/2023: A Quantitative Functional Central Limit Theorem for Shallow Neural Networks. "We prove a Quantitative Functional Central Limit Theorem for one-hidden-..."
- 05/07/2021: What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory. "We develop a variational framework to understand the properties of funct..."
- 03/02/2019: A unifying representer theorem for inverse problems and machine learning. "The standard approach for dealing with the ill-posedness of the training..."
