A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

04/28/2023
by Frank E. Curtis, et al.

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable (possibly nonconvex) objective function subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for smooth (nonconvex) optimization in that its search directions are computed using stochastic gradient estimates. It is also distinctive in its use of inner neighborhoods of the feasible region – defined by a positive and vanishing neighborhood-parameter sequence – within which the iterates are forced to remain. It is shown that, with a careful balance between the barrier, step-size, and neighborhood sequences, the proposed algorithm satisfies convergence guarantees in both deterministic and stochastic settings. Numerical experiments show that in both settings the algorithm can outperform a projected-(stochastic)-gradient method.
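The interplay of the barrier, step-size, and neighborhood sequences described in the abstract can be illustrated with a minimal sketch. The schedules for mu, alpha, and theta below, as well as the function names, are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def sgd_interior_point(grad_est, l, u, x0, iters=2000):
    """Minimal sketch of a stochastic-gradient interior-point method for
    min f(x) subject to l <= x <= u.

    grad_est(x) returns a stochastic estimate of the gradient of f.
    The barrier (mu), step-size (alpha), and neighborhood (theta)
    schedules are illustrative choices, not the paper's schedules.
    """
    x = x0.copy()
    for k in range(1, iters + 1):
        mu = 1.0 / k              # vanishing barrier parameter
        alpha = 0.5 / k           # vanishing step size
        theta = 0.1 / np.sqrt(k)  # vanishing inner-neighborhood parameter
        # Gradient of the log-barrier term -mu * sum(log(x-l) + log(u-x))
        barrier_grad = -mu / (x - l) + mu / (u - x)
        # Stochastic-gradient step on the barrier-augmented objective
        x = x - alpha * (grad_est(x) + barrier_grad)
        # Force the iterate to remain in the inner neighborhood
        # {x : l + theta <= x <= u - theta} of the feasible region
        x = np.clip(x, l + theta, u - theta)
    return x
```

Because theta vanishes, the inner neighborhoods expand toward the full feasible region, so solutions on the boundary remain reachable in the limit while every iterate stays strictly feasible.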

