Integral representation of the global minimizer

05/19/2018
by Sho Sonoda, et al.

We have obtained an integral representation of the shallow neural network that attains the global minimum of its backpropagation (BP) training problem. Unpublished numerical simulations we conducted several years before this study suggested that such an integral representation might exist, but it had not been proven until now. First, we introduced a Hilbert space of coefficient functions and a reproducing kernel Hilbert space (RKHS) of hypotheses associated with the integral representation; the RKHS reflects the approximation ability of neural networks. Second, we developed ridgelet analysis on the RKHS, in which the analytic properties of the integral representation become remarkably clear. Third, we reformulated BP training as an optimization problem in the space of coefficient functions and, by Tikhonov regularization theory, obtained a formal expression for the unique global minimizer. Finally, we demonstrated that this global minimizer is the shrink ridgelet transform. Because the relation between an integral representation and an ordinary finite network is not clear, and because BP is convex in the integral representation, we cannot immediately answer questions such as "Is every local minimum a global minimum?" Nevertheless, the obtained integral representation provides an explicit expression for the global minimizer without linearity-like assumptions such as partial linearity or monotonicity. Furthermore, it indicates that the ordinary ridgelet transform provides the minimum-norm solution to the original training equation.
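To make the setting concrete: for a shallow network with activation η, the integral representation takes the form f(x) = ∫ γ(a, b) η(a·x − b) da db, where γ is the coefficient function, and the ridgelet transform R[f; ψ](a, b) = ∫ f(x) ψ(a·x − b) dx maps f back to a coefficient function for an admissible dual ψ (our paraphrase of the standard ridgelet setup, not notation taken from the paper). The Python sketch below discretizes this representation on a finite grid of hidden parameters (a, b) and solves the resulting Tikhonov-regularized least-squares problem in coefficient space; its closed-form solution is the finite-dimensional analogue of the unique regularized global minimizer discussed above. The 1-D target, the parameter grid, the tanh activation, and the regularization weight are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: discretize the integral representation
#   f(x) = \int gamma(a, b) * eta(a*x - b) da db
# on a grid of hidden parameters (a, b), then solve the
# Tikhonov-regularized BP problem in coefficient space.
# All concrete choices (target, grid, eta = tanh, lam) are
# illustrative assumptions, not the paper's construction.

# Training data for a hypothetical 1-D target function.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)

# Grid over hidden parameters (a, b).
a = np.linspace(-5.0, 5.0, 40)
b = np.linspace(-5.0, 5.0, 40)
A, B = np.meshgrid(a, b, indexing="ij")
params = np.stack([A.ravel(), B.ravel()], axis=1)  # shape (1600, 2)

# Feature matrix Phi[i, j] = eta(a_j * x_i - b_j), with eta = tanh.
Phi = np.tanh(x[:, None] * params[None, :, 0] - params[None, :, 1])

# Tikhonov-regularized least squares over the discretized
# coefficient function gamma:
#   minimize ||Phi @ gamma - y||^2 + lam * ||gamma||^2.
# The unique global minimizer has the closed form
#   gamma* = (Phi^T Phi + lam I)^{-1} Phi^T y.
lam = 1e-3
gamma = np.linalg.solve(
    Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y
)

# Reconstruct f by quadrature of the integral representation.
y_hat = Phi @ gamma
print("max abs training error:", np.max(np.abs(y_hat - y)))
```

Letting lam → 0 in this sketch yields the minimum-norm least-squares (pseudoinverse) solution, mirroring the abstract's remark that the ordinary ridgelet transform provides the minimum-norm solution to the training equation, while lam > 0 corresponds to the shrunk, regularized minimizer.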


