Sub-linear convergence of a stochastic proximal iteration method in Hilbert space

10/23/2020
by Monika Eisenmann, et al.

We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types of convergence, owing to weak assumptions on the problem. In particular, there are no results that show convergence with a rate. In this article, we bridge these two worlds by assuming more regularity of the optimization problem, which allows us to prove convergence with an (optimal) sub-linear rate also in an infinite-dimensional setting. We illustrate these results by discretizing a concrete infinite-dimensional classification problem with varying degrees of accuracy.
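The stochastic proximal point iteration described in the abstract replaces the gradient step of SGD with a proximal (implicit) step on a single randomly sampled term. As a hedged illustration only (the paper's Hilbert-space setting is far more general), the sketch below applies the iteration to a least-squares sum `f(x) = (1/n) Σ_i 0.5*(a_i·x − b_i)^2`, where each proximal subproblem has a closed form; the data, step-size schedule `η_k = 1/k`, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Assumed toy problem: consistent least squares with terms
# f_i(x) = 0.5 * (a_i . x - b_i)^2, so each prox step is closed-form.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

x = np.zeros(d)
for k in range(1, 5001):
    eta = 1.0 / k                      # decreasing step size (illustrative)
    i = rng.integers(n)                # sample one term uniformly
    a, bi = A[i], b[i]
    # Proximal step: x <- argmin_z 0.5*(a.z - bi)^2 + ||z - x||^2 / (2*eta).
    # Setting the gradient to zero and solving gives the rank-one update:
    x = x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a

print(np.linalg.norm(x - x_true))      # distance to the minimizer
```

Unlike an explicit SGD step, the implicit update above is stable for any positive step size, which is one practical motivation for proximal iterations.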


