Retrospective Approximation for Smooth Stochastic Optimization

03/07/2021
by   David Newton, et al.

We consider stochastic optimization problems where a smooth (and potentially nonconvex) objective is to be minimized using a stochastic first-order oracle. These types of problems arise in many settings, from simulation optimization to deep learning. We present Retrospective Approximation (RA) as a universal sequential sample-average approximation (SAA) paradigm in which, during each iteration k, a sample-path approximation problem is implicitly generated using an adapted sample size M_k and solved (with prior solutions as "warm start") to an adapted error tolerance ϵ_k, using a "deterministic method" such as a line-search quasi-Newton method. The principal advantage of RA is that it decouples optimization from stochastic approximation, allowing the direct adoption of existing deterministic algorithms without modification and thus mitigating the need to redesign algorithms for the stochastic context. A second advantage is the obvious manner in which RA lends itself to parallelization. We identify conditions on {M_k, k ≥ 1} and {ϵ_k, k ≥ 1} that ensure almost sure convergence and convergence in L_1-norm, along with optimal iteration and work complexity rates. We illustrate the performance of RA with line-search quasi-Newton on an ill-conditioned least squares problem, as well as on an image classification problem using a deep convolutional neural net.
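To make the outer loop concrete, here is a minimal Python sketch of the RA paradigm as the abstract describes it: each outer iteration draws a fresh sample of adapted size M_k, forms a sample-average problem, and hands it to an off-the-shelf deterministic solver warm-started at the previous solution. The oracle interface, the geometric schedules for M_k and ϵ_k, and the choice of SciPy's L-BFGS-B as the inner solver are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize

def retrospective_approximation(sample_oracle, x0, num_outer=8,
                                m0=64, growth=2.0, eps0=1e-1, decay=0.5,
                                rng=None):
    """Sketch of an RA outer loop (hypothetical interface).

    sample_oracle(m, rng) must return (f, grad): a sample-average
    objective and its gradient built from m i.i.d. scenarios.
    Geometrically growing M_k and shrinking eps_k are one common
    way to instantiate the adapted schedules in the abstract.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    m, eps = m0, eps0
    for k in range(num_outer):
        f, grad = sample_oracle(int(m), rng)   # implicit SAA problem with M_k samples
        res = minimize(f, x, jac=grad,          # deterministic inner solve,
                       method="L-BFGS-B",       # warm-started at the prior solution
                       options={"gtol": eps, "maxiter": 200})
        x = res.x
        m *= growth                              # M_{k+1} = growth * M_k
        eps *= decay                             # eps_{k+1} = decay * eps_k
    return x

# Example oracle (assumed for illustration): noisy linear least squares,
# f(x) = mean over m scenarios of (a_i^T x - b_i)^2.
def make_oracle(d=10):
    x_true = np.ones(d)
    def oracle(m, rng):
        A = rng.standard_normal((m, d))
        b = A @ x_true + 0.1 * rng.standard_normal(m)
        f = lambda x: np.mean((A @ x - b) ** 2)
        g = lambda x: 2.0 * A.T @ (A @ x - b) / m
        return f, g
    return oracle

x_star = retrospective_approximation(make_oracle(), x0=np.zeros(10))
```

Because the inner solver is a standard deterministic routine, any existing quasi-Newton implementation can be dropped in unchanged, which is the decoupling advantage the abstract highlights; the independent samples drawn at each outer iteration are also where the natural parallelization opportunity lies.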


Related research

07/05/2016 · Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
In this paper we study stochastic quasi-Newton methods for nonconvex sto...

04/13/2023 · Sample Average Approximation for Black-Box VI
We present a novel approach for black-box VI that bypasses the difficult...

12/07/2020 · Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs
We present adaptive sequential SAA (sample average approximation) algori...

09/24/2021 · Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization
We consider unconstrained stochastic optimization problems with no avail...

10/29/2019 · Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization
We consider stochastic zero-order optimization problems, which arise in ...

05/06/2022 · Estimation and Inference by Stochastic Optimization
In non-linear estimations, it is common to assess sampling uncertainty b...

04/13/2011 · Hybrid Deterministic-Stochastic Methods for Data Fitting
Many structured data-fitting applications require the solution of an opt...
