Almost-sure convergence of iterates and multipliers in stochastic sequential quadratic optimization

08/07/2023
by Frank E. Curtis, et al.

Stochastic sequential quadratic optimization (SQP) methods for solving continuous optimization problems with nonlinear equality constraints have attracted attention recently, such as for solving large-scale data-fitting problems subject to nonconvex constraints. However, for a recently proposed subclass of such methods that is built on the popular stochastic-gradient methodology from the unconstrained setting, convergence guarantees have been limited to the asymptotic convergence of the expected value of a stationarity measure to zero. This is in contrast to the unconstrained setting in which almost-sure convergence guarantees (of the gradient of the objective to zero) can be proved for stochastic-gradient-based methods. In this paper, new almost-sure convergence guarantees for the primal iterates, Lagrange multipliers, and stationarity measures generated by a stochastic SQP algorithm in this subclass of methods are proved. It is shown that the error in the Lagrange multipliers can be bounded by the distance of the primal iterate to a primal stationary point plus the error in the latest stochastic gradient estimate. It is further shown that, subject to certain assumptions, this latter error can be made to vanish by employing a running average of the Lagrange multipliers that are computed during the run of the algorithm. The results of numerical experiments are provided to demonstrate the proved theoretical guarantees.
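To illustrate the multiplier-averaging idea in the abstract, here is a minimal sketch (not the paper's algorithm) on a toy equality-constrained quadratic: minimize 0.5‖x‖² subject to aᵀx = b, where the exact multiplier is known in closed form. Each noisy gradient yields a least-squares multiplier estimate, and averaging these estimates drives the stochastic error toward zero; all problem data, noise levels, and variable names below are hypothetical choices for the demonstration.

```python
import numpy as np

# Toy problem (illustrative only): min 0.5*||x||^2  s.t.  a^T x = b.
# For the Lagrangian L(x, y) = 0.5*||x||^2 + y*(a^T x - b), stationarity
# gives x* = (b/||a||^2) a and multiplier y* = -b/||a||^2.
rng = np.random.default_rng(0)
n = 5
a = rng.normal(size=n)
b = 1.0
x_star = (b / (a @ a)) * a        # exact primal solution
y_star = -b / (a @ a)             # exact multiplier (x* + y* a = 0)

sigma = 0.1                       # stochastic-gradient noise level (assumed)
num_samples = 20000

# At the primal solution, a stochastic gradient is g = x* + noise; the
# least-squares multiplier estimate solves min_y ||g + y a||^2.
estimates = []
for _ in range(num_samples):
    g = x_star + sigma * rng.normal(size=n)
    y = -(a @ g) / (a @ a)
    estimates.append(y)

y_avg = float(np.mean(estimates))  # running average of multiplier estimates
single_err = abs(estimates[-1] - y_star)
avg_err = abs(y_avg - y_star)
print(f"single-estimate error: {single_err:.5f}, averaged error: {avg_err:.6f}")
```

Each individual estimate carries error proportional to the gradient noise, while the average over many estimates has error smaller by roughly a factor of 1/√N, mirroring (in a much simpler setting) the vanishing multiplier error the paper proves under its assumptions.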


