A Stochastic Trust Region Algorithm

12/29/2017
by Frank E. Curtis, et al.

An algorithm is proposed for solving stochastic and finite-sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates lie within a user-defined interval. The complete algorithm, which dynamically chooses whether to employ normalized steps, is proved to have convergence guarantees comparable to those of a traditional stochastic gradient approach under various sets of conditions on the accuracy of the stochastic gradient estimates and the choice of stepsize sequence. Results of numerical experiments on convex and nonconvex machine learning test problems are presented, illustrating that the method can outperform a traditional stochastic gradient approach.
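To make the step rule concrete, here is a minimal Python sketch of one plausible reading of the update: a normalized step of length alpha is taken whenever the stochastic gradient norm lies in a user-defined interval, and a linearly scaled step is taken otherwise. The piecewise scaling, the constants gamma1 >= gamma2 > 0 defining the interval [1/gamma1, 1/gamma2], and the toy least-squares problem are illustrative assumptions, not details confirmed by the abstract.

    import numpy as np

    def normalized_trust_region_step(x, g, alpha, gamma1, gamma2):
        """One update of a normalized-step iteration (illustrative sketch).

        Assumes gamma1 >= gamma2 > 0, so the user-defined interval
        [1/gamma1, 1/gamma2] on the gradient norm is nonempty.
        """
        gnorm = np.linalg.norm(g)
        if gnorm < 1.0 / gamma1:
            return x - gamma1 * alpha * g     # small estimate: scaled step
        if gnorm <= 1.0 / gamma2:
            return x - alpha * (g / gnorm)    # norm in interval: normalized step of length alpha
        return x - gamma2 * alpha * g         # large estimate: scaled step

    # Minimal demo on a toy least-squares problem with mini-batch gradients.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 10))
    b = rng.standard_normal(100)
    x = np.zeros(10)
    gamma1, gamma2 = 4.0, 0.5                  # hypothetical values; gamma1 >= gamma2 > 0
    for k in range(200):
        i = rng.integers(0, 100, size=10)      # mini-batch indices
        g = A[i].T @ (A[i] @ x - b[i]) / 10    # stochastic gradient estimate
        alpha = 0.5 / (k + 1)                  # diminishing stepsize sequence
        x = normalized_trust_region_step(x, g, alpha, gamma1, gamma2)
    print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 / 100)

In this sketch the normalized branch takes steps of fixed length alpha regardless of how noisy the gradient estimate is, which gives the trust-region flavor the abstract alludes to, while the scaled branches keep the update a continuous function of the gradient estimate at the interval endpoints.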



Related research:

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems (04/28/2023)
A stochastic-gradient-based interior-point algorithm for minimizing a co...

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient (03/01/2017)
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SAR...

A stochastic first-order trust-region method with inexact restoration for finite-sum minimization (07/07/2021)
We propose a stochastic first-order trust-region method with inexact fun...

Almost-sure convergence of iterates and multipliers in stochastic sequential quadratic optimization (08/07/2023)
Stochastic sequential quadratic optimization (SQP) methods for solving c...

A Dimension-Insensitive Algorithm for Stochastic Zeroth-Order Optimization (04/22/2021)
This paper concerns a convex, stochastic zeroth-order optimization (S-ZO...
