Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs

12/07/2020
by Raghu Pasupathy et al.

We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage stochastic linear programs. The iterative algorithmic framework we propose is organized into outer and inner iterations as follows: during each outer iteration, a sample-path problem is implicitly generated using a sample of observations, or "scenarios," and solved only imprecisely, to within a tolerance chosen adaptively by balancing the estimated statistical error against the solution error. The solutions from prior iterations serve as warm starts that aid efficient solution of the (piecewise linear convex) sample-path optimization problems generated on subsequent iterations. The generated scenarios can be independent and identically distributed (iid) or dependent, as in Monte Carlo generation using Latin hypercube sampling, antithetic variates, or randomized quasi-Monte Carlo. We first characterize the almost-sure convergence (and convergence in mean) of the optimality gap and of the distance of the generated stochastic iterates to the true solution set. We then characterize the corresponding iteration-complexity and work-complexity rates as a function of the sample size schedule, demonstrating that the best achievable work-complexity rate is Monte Carlo canonical and analogous to the generic 𝒪(ϵ^-2) optimal complexity for non-smooth convex optimization. We report extensive numerical tests that indicate favorable performance, due primarily to the use of a sequential framework with an optimal sample size schedule and the use of warm starts. The proposed algorithm can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
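To make the outer/inner structure concrete, below is a minimal Python sketch of such an outer loop on a toy one-dimensional newsvendor (two-stage) problem. It is not the authors' implementation: the geometric sample-size schedule, the plain subgradient inner solver, the stall-based inexactness proxy, and the crude stopping rule are illustrative assumptions; the paper's algorithm instead uses statistically grounded tolerance selection and a probabilistic optimality-gap estimate for stopping.

```python
# Illustrative sketch only: adaptive sequential SAA outer loop on a toy
# 1-D newsvendor problem (order cost c, shortage penalty b, holding cost h).
import numpy as np

rng = np.random.default_rng(0)
c, b, h = 1.0, 4.0, 2.0

def sample_path(x, scenarios):
    """Sample-average objective and one subgradient at x.
    The sample-path problem is piecewise linear and convex in x."""
    short = np.maximum(scenarios - x, 0.0)
    over = np.maximum(x - scenarios, 0.0)
    obj = c * x + np.mean(b * short + h * over)
    sub = c + np.mean(-b * (scenarios > x) + h * (x > scenarios))
    return obj, sub

def solve_inexactly(x0, scenarios, tol, max_iter=2000):
    """Projected subgradient descent, stopped once per-iteration progress
    stalls below tol (a simple proxy for 'solved to within tol')."""
    x, best_x, best_f, stalls = x0, x0, np.inf, 0
    for t in range(1, max_iter + 1):
        f, g = sample_path(x, scenarios)
        if f < best_f - tol:
            best_f, best_x, stalls = f, x, 0
        else:
            stalls += 1
            if stalls >= 25:
                break
        x = max(0.0, x - g / np.sqrt(t))  # project onto x >= 0
    return best_x, best_f

x_k, m_k, target_gap = 0.0, 64, 0.5
for k in range(12):
    scenarios = rng.exponential(scale=10.0, size=m_k)  # iid demand scenarios
    # Adaptive inner tolerance: a simple stand-in that balances solution error
    # against the estimated statistical error at the warm-start point.
    costs = b * np.maximum(scenarios - x_k, 0.0) + h * np.maximum(x_k - scenarios, 0.0)
    eps_k = np.std(costs, ddof=1) / np.sqrt(m_k)
    x_k, f_k = solve_inexactly(x_k, scenarios, tol=eps_k)  # warm start at previous iterate
    print(f"outer {k}: m={m_k:5d}  eps={eps_k:6.3f}  x={x_k:6.3f}  f={f_k:6.3f}")
    if 2.0 * eps_k < target_gap:  # crude stop; the paper uses a probabilistic gap estimate
        break
    m_k = int(np.ceil(1.5 * m_k))  # geometric sample-size schedule
```

The warm start matters because each sample-path problem differs from the previous one only through sampling error, so the prior solution is typically already near-optimal for the new, larger-sample problem.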


