Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

10/29/2019
by Raghu Bollapragada, et al.

We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations. We provide preliminary numerical experiments to illustrate potential performance benefits of the proposed method.
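The core idea of estimating gradients via finite differences within a common random number (CRN) framework can be illustrated with a short sketch. The snippet below is a hypothetical minimal implementation, not the authors' code: it assumes a stochastic objective `f(x, seed)` whose randomness is controlled by a seed, and reuses the same seed for the perturbed and unperturbed evaluations so that much of the stochastic noise cancels in each difference. The function names, step size, and sample size are illustrative choices.

```python
import numpy as np

def fd_gradient_crn(f, x, h=1e-4, n_samples=32, rng=None):
    """Forward-difference estimate of the gradient of E[f(x, xi)].

    Common random numbers: each coordinate difference reuses the same
    seed for f(x + h*e_i) and f(x), so shared noise cancels.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = len(x)
    g = np.zeros(d)
    seeds = rng.integers(0, 2**31 - 1, size=n_samples)
    for s in seeds:
        for i in range(d):
            e = np.zeros(d)
            e[i] = h
            # Same seed -> same realization xi for both evaluations (CRN)
            g[i] += (f(x + e, seed=s) - f(x, seed=s)) / h
    return g / n_samples

# Toy example: a noisy quadratic with additive noise.
def noisy_quadratic(x, seed):
    noise = np.random.default_rng(seed).normal(scale=0.1)
    return float(np.dot(x, x) + noise)

x0 = np.array([1.0, -2.0])
grad = fd_gradient_crn(noisy_quadratic, x0)
# For additive noise, CRN cancels it exactly in each difference,
# so grad is close to the true gradient 2*x0 up to O(h) bias.
```

In the toy example the noise is purely additive, so CRN removes it exactly and only the O(h) finite-difference bias remains; in general CRN reduces, but does not eliminate, the variance of the difference. In the paper's adaptive-sampling scheme, the number of samples would then be chosen per iteration by the modified norm and inner product quasi-Newton tests rather than fixed in advance.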
