Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

10/29/2019
by Raghu Bollapragada, et al.

We consider stochastic zero-order optimization problems, which arise in settings ranging from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method in which the gradients of a stochastic function are estimated using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations. We provide preliminary numerical experiments to illustrate potential performance benefits of the proposed method.
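To make the sampling mechanism concrete, below is a minimal Python sketch of two ingredients the abstract names: forward-difference gradient estimates computed under common random numbers (the same random realization is reused at x and at each perturbed point), and a sample size chosen adaptively by an approximate norm test, which grows the sample until the estimated variance of the averaged gradient is small relative to its norm. The function names, the sample-doubling schedule, and the parameters theta, n0, and n_max are illustrative assumptions, not details taken from the paper; the inner product quasi-Newton test and the quasi-Newton update itself are omitted.

```python
import numpy as np

def fd_gradient(f, x, xi, h=1e-5):
    """Forward-difference gradient of f(., xi) at x, reusing the same
    random realization xi at x and at x + h*e_i (common random numbers)."""
    d = x.size
    g = np.empty(d)
    f0 = f(x, xi)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e, xi) - f0) / h
    return g

def adaptive_sample_gradient(f, x, draw_xi, n0=2, theta=0.9, n_max=1024, h=1e-5):
    """Grow the sample until an approximate norm test holds: the estimated
    variance of the averaged gradient must be at most theta^2 * ||g_bar||^2."""
    grads = [fd_gradient(f, x, draw_xi(), h) for _ in range(n0)]
    while True:
        G = np.array(grads)
        g_bar = G.mean(axis=0)
        # Trace of the sample covariance of the mean estimator.
        var_mean = G.var(axis=0, ddof=1).sum() / len(grads)
        if var_mean <= (theta * np.linalg.norm(g_bar)) ** 2 or len(grads) >= n_max:
            return g_bar, len(grads)
        # Norm test failed: double the sample size and try again.
        grads += [fd_gradient(f, x, draw_xi(), h) for _ in range(len(grads))]

# Toy usage: noisy quadratic f(x, xi) = 0.5*||x||^2 + xi.x with xi ~ N(0, 0.01 I),
# whose true gradient at x is x itself.
rng = np.random.default_rng(0)
f = lambda x, xi: 0.5 * x @ x + xi @ x
g_est, n_used = adaptive_sample_gradient(f, np.ones(5), lambda: 0.1 * rng.standard_normal(5))
```

In this sketch the common random numbers enter through fd_gradient, which evaluates f at the base point and at every perturbed point with the same xi, so the noise largely cancels in the difference; the doubling schedule is one simple way to realize an adaptive sample size and is not prescribed by the abstract.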


Related research

- Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization (09/24/2021): We consider unconstrained stochastic optimization problems with no avail...
- Stochastic Newton and Quasi-Newton Methods for Large Linear Least-squares Problems (02/23/2017): We describe stochastic Newton and stochastic quasi-Newton approaches to ...
- Retrospective Approximation for Smooth Stochastic Optimization (03/07/2021): We consider stochastic optimization problems where a smooth (and potenti...
- Dual Gauss-Newton Directions for Deep Learning (08/17/2023): Inspired by Gauss-Newton-like methods, we study the benefit of leveragin...
- A Stochastic Quasi-Newton Method for Large-Scale Nonconvex Optimization with Applications (12/10/2019): This paper proposes a novel stochastic version of damped and regularized...
- Estimation and Inference by Stochastic Optimization (05/06/2022): In non-linear estimations, it is common to assess sampling uncertainty b...
- Adaptive Sampling Strategies for Stochastic Optimization (10/30/2017): In this paper, we propose a stochastic optimization method that adaptive...
