Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping

06/18/2020
by Nicole Mücke, et al.

Stochastic Gradient Descent (SGD) has become the method of choice for solving a broad range of machine learning problems. However, some of its learning properties are still not fully understood. We consider least squares learning in reproducing kernel Hilbert spaces (RKHSs) and extend the classical SGD analysis to a learning setting in Hilbert scales, including Sobolev spaces and Diffusion spaces on compact Riemannian manifolds. We show that, even for well-specified models, violating a traditional benchmark smoothness assumption has a tremendous effect on the learning rate. In addition, we show that for misspecified models, preconditioning in an appropriate Hilbert scale helps to reduce the number of iterations, i.e., it allows for "earlier stopping".
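
To make the preconditioning idea concrete, here is a minimal sketch of SGD for kernel least squares in which each update is preconditioned by a negative power of the empirical kernel operator K/n, with the exponent a playing the role of a Hilbert-scale index. Everything in the snippet is an illustrative assumption: the synthetic data, the Gaussian kernel, the eigenvalue floor, the step size, and the validation-based stopping rule are placeholders, not the paper's exact construction or rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression data (illustrative; not from the paper).
n_train, n_val = 200, 100
X = rng.uniform(-1.0, 1.0, size=(n_train + n_val, 1))
y = np.sin(3 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(len(X))
Xtr, ytr, Xva, yva = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

def gauss_kernel(A, B, width=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Ktr = gauss_kernel(Xtr, Xtr)   # train/train Gram matrix
Kva = gauss_kernel(Xva, Xtr)   # validation/train cross-kernel

# Preconditioner: a negative power of the empirical kernel operator K/n.
# The exponent `a` mimics the Hilbert-scale index; a = 0 recovers plain
# kernel SGD. Small eigenvalues are floored for numerical stability.
a = 0.5
evals, evecs = np.linalg.eigh(Ktr / n_train)
P = (evecs * np.clip(evals, 1e-3, None) ** (-a)) @ evecs.T

# Preconditioned SGD on the representer coefficients:
# f(x) = sum_j alpha_j k(x_j, x), updated one random sample at a time.
alpha = np.zeros(n_train)
gamma = 1.0 / n_train          # heuristic constant step size
best_err, best_t, patience = np.inf, 0, 0
for t in range(100 * n_train):
    i = rng.integers(n_train)
    residual = Ktr[i] @ alpha - ytr[i]
    alpha -= gamma * residual * P[:, i]   # step along (K/n)^(-a) k(x_i, .)
    if (t + 1) % n_train == 0:            # validate once per "epoch"
        err = np.mean((Kva @ alpha - yva) ** 2)
        if err < best_err:
            best_err, best_t, patience = err, t + 1, 0
        else:
            patience += 1
            if patience >= 5:             # stop once validation error stalls
                break

print(f"stopped near iteration {best_t}; validation MSE {best_err:.4f}")
```

Setting a = 0 turns the loop back into ordinary kernel SGD, so one can compare directly how the preconditioner shifts the iteration count at which the validation error bottoms out, which is the "earlier stopping" effect the abstract refers to.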

Related research

05/27/2019
Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian mani...

03/16/2023
Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces
We consider a stochastic gradient descent (SGD) algorithm for solving li...

02/04/2022
Polynomial convergence of iterations of certain random operators in Hilbert space
We study the convergence of random iterative sequence of a family of ope...

04/30/2020
On the Discrepancy Principle for Stochastic Gradient Descent
Stochastic gradient descent (SGD) is a promising numerical method for so...

10/24/2020
Stochastic Gradient Descent Meets Distribution Regression
Stochastic gradient descent (SGD) provides a simple and efficient way to...

04/01/2020
Stopping Criteria for, and Strong Convergence of, Stochastic Gradient Descent on Bottou-Curtis-Nocedal Functions
While Stochastic Gradient Descent (SGD) is a rather efficient algorithm ...

03/30/2017
Diving into the shallows: a computational perspective on large-scale shallow learning
In this paper we first identify a basic limitation in gradient descent-b...
