A Stochastic Trust Region Algorithm

by Frank E. Curtis et al.

An algorithm is proposed for solving stochastic and finite-sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates lie within a user-defined interval. The complete algorithm, which dynamically chooses whether or not to employ normalized steps, is proved to have convergence guarantees similar to those of a traditional stochastic gradient approach under various sets of conditions related to the accuracy of the stochastic gradient estimates and the choice of stepsize sequence. Numerical experiments on convex and nonconvex machine learning test problems illustrate that the method can outperform a traditional stochastic gradient approach.
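To make the normalized-step idea concrete, here is a minimal sketch of one iteration. It assumes (as the abstract describes, without giving the exact rule) that a step of fixed length is taken whenever the stochastic gradient norm falls inside a user-defined interval, and that an ordinary stochastic gradient step is taken otherwise; the function and parameter names are hypothetical, and the paper's actual algorithm uses a more refined piecewise stepsize rule.

```python
import numpy as np

def normalized_sg_step(x, grad, alpha, low=0.5, high=10.0):
    """One iteration of a normalized-step stochastic gradient update.

    Illustrative sketch only: when the stochastic gradient estimate's norm
    lies in the user-defined interval [low, high], the step is normalized so
    its length is exactly alpha, mimicking a trust-region-constrained step;
    outside the interval a plain SGD step of stepsize alpha is taken.
    """
    norm = np.linalg.norm(grad)
    if low <= norm <= high:
        # Normalized step: direction -grad, length exactly alpha.
        return x - alpha * grad / norm
    # Fallback: standard stochastic gradient step.
    return x - alpha * grad
```

For example, with `grad = [3, 4]` (norm 5, inside the default interval) the iterate moves a distance of exactly `alpha`, regardless of how large or small the gradient estimate happens to be, which is the stabilizing effect the normalization is meant to provide.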



