Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization

by Raghu Bollapragada, et al.

We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations and provide global convergence results to the neighborhood of the optimal solution. We present numerical experiments on simulation optimization problems to illustrate the performance of the proposed algorithm. When compared with classical zeroth-order stochastic gradient methods, we observe that our strategies of adapting the sample sizes significantly improve performance in terms of the number of stochastic function evaluations required.
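To make the two key ingredients of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of (a) finite-difference gradient estimation under common random numbers, where the same random seed is reused on both sides of each difference so the noise largely cancels, and (b) a simplified norm test that deems the current sample size adequate when the variance of the averaged gradient is small relative to its squared norm. All function and parameter names (`F`, `fd_gradient`, `norm_test_sample_size`, `theta`) are illustrative assumptions.

```python
import numpy as np

def fd_gradient(F, x, seeds, h=1e-4):
    """Average central-difference gradient estimates of a stochastic
    function F(x, seed). Common random numbers: the SAME seed is used
    for both evaluations in each difference, so shared noise cancels."""
    d = x.size
    grads = np.zeros((len(seeds), d))
    for k, s in enumerate(seeds):
        for i in range(d):
            e = np.zeros(d)
            e[i] = h
            # identical seed on both sides of the difference (CRN)
            grads[k, i] = (F(x + e, s) - F(x - e, s)) / (2.0 * h)
    return grads.mean(axis=0), grads

def norm_test_sample_size(grads, theta=0.5):
    """Simplified norm test (an assumption-laden sketch): the sample
    size N passes when the estimated variance of the sample-mean
    gradient is at most theta^2 times its squared norm; otherwise
    return a proportionally larger suggested N."""
    N = grads.shape[0]
    g_bar = grads.mean(axis=0)
    var_of_mean = grads.var(axis=0, ddof=1).sum() / N
    g_norm_sq = float(np.dot(g_bar, g_bar))
    if var_of_mean <= theta**2 * g_norm_sq:
        return N  # current sample size is adequate
    return int(np.ceil(var_of_mean * N / (theta**2 * g_norm_sq)))

# Illustration on a noisy quadratic: F(x, seed) = ||x||^2 + noise(seed).
# With additive seed-driven noise and CRN, the noise cancels exactly in
# each central difference, so the averaged estimate is nearly noiseless.
F = lambda x, s: float(np.dot(x, x) + 0.01 * np.random.default_rng(s).normal())
x0 = np.array([1.0, -2.0])
g_est, samples = fd_gradient(F, x0, seeds=range(8))
N_next = norm_test_sample_size(samples)
```

For a quadratic with purely additive noise, the central difference is exact and CRN removes the noise entirely, so `g_est` is close to the true gradient `2 * x0`; in the paper's setting the noise also varies with `x`, which is where adaptively growing the sample size via the norm or inner-product tests matters.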
