The Stochastic Steepest Descent Method for Robust Optimization in Banach Spaces

08/11/2023
by Neil K. Chada, et al.

Stochastic gradient methods are a popular and powerful class of optimization methods for minimizing functions. Their advantage lies in approximating the gradient from samples rather than computing the full Jacobian matrix. One related research direction has been their application to infinite-dimensional problems, which naturally fit a Hilbert space framework. However, little work has addressed the more general setting in which the natural framework is a Banach space. This article addresses this gap by introducing a novel stochastic method, the stochastic steepest descent method (SSD). The SSD follows the spirit of stochastic gradient descent, which relies on the Riesz representation to identify derivatives with gradients. Our motivation for this method is that it naturally accommodates a Banach space setting, a benefit that recent applications, such as PDE-constrained shape optimization, have exploited. We provide a convergence theory for the method under mild assumptions. Furthermore, we demonstrate its performance on two numerical applications, namely a p-Laplacian problem and an optimal control problem, and verify our assumptions in both.
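To give a flavor of the idea, here is a minimal, finite-dimensional sketch (not the paper's algorithm) of a stochastic steepest descent iteration in an l^p-type setting: the noisy derivative is treated as a dual-space element and mapped to a primal descent direction via a duality mapping instead of a Riesz-based gradient. The function names (duality_map, stochastic_steepest_descent), the choice of normalization, and the 1/k step-size decay are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def duality_map(g, p):
    """Map a dual element g (in l^q, with 1/p + 1/q = 1) to a primal
    descent direction in l^p. Finite-dimensional analogue of a
    (normalized) duality mapping; an assumption for illustration."""
    q = p / (p - 1.0)
    if np.linalg.norm(g, ord=q) == 0.0:
        return np.zeros_like(g)
    # componentwise sign(g_i) * |g_i|^(q-1), rescaled to unit l^p norm
    d = np.sign(g) * np.abs(g) ** (q - 1.0)
    return d / np.linalg.norm(d, ord=p)

def stochastic_steepest_descent(x0, stoch_derivative, steps, step_size, p=1.5):
    """Toy stochastic steepest descent loop: at each step draw a noisy
    derivative (a dual-space element) and move along the duality-mapped
    direction with a Robbins-Monro type decaying step size."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = stoch_derivative(x)      # noisy evaluation of the derivative
        d = duality_map(g, p)        # steepest-descent direction in l^p
        x = x - (step_size / k) * d
    return x

# Usage on a toy problem: minimize E[ 0.5 * ||x - b - xi||^2 ]
# from noisy derivative samples (x - b) + xi.
rng = np.random.default_rng(0)
b = np.array([1.0, -2.0, 0.5])
noisy_derivative = lambda x: (x - b) + 0.1 * rng.standard_normal(x.shape)
print(stochastic_steepest_descent(np.zeros(3), noisy_derivative, 2000, 1.0, p=1.5))
```

Because the duality-mapped direction keeps the sign pattern of the derivative, it remains a descent direction; in a Hilbert space (p = 2) the mapping reduces to the usual normalized gradient, which is where the sketch coincides with standard stochastic gradient descent.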


