Variance Reduction for Sequential Sampling in Stochastic Programming

05/05/2020
by Jangho Park, et al.

This paper investigates the variance reduction techniques Antithetic Variates (AV) and Latin Hypercube Sampling (LHS) when used for sequential sampling in stochastic programming and presents a comparative computational study. It establishes conditions under which sequential sampling with AV and LHS satisfies finite stopping guarantees and is asymptotically valid, discussing LHS in detail. It then computationally compares their use in both the sequential and non-sequential settings through a collection of two-stage stochastic linear programs with different characteristics. The numerical results show that while both AV and LHS can be preferable to random sampling in either setting, LHS typically dominates in the non-sequential setting while still performing well sequentially, whereas AV gains some advantages in the sequential setting. These results imply that, given their ease of implementation, the same theoretical guarantees, and improved empirical performance relative to random sampling, AV and LHS sequential procedures are attractive alternatives in practice for a class of stochastic programs.
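To make the two techniques concrete, the sketch below estimates a simple expectation with plain Monte Carlo, antithetic variates (pairing each uniform draw u with 1 - u, whose negative correlation reduces variance for monotone integrands), and one-dimensional Latin hypercube sampling (one draw per equal-probability stratum). The integrand exp(u) is a hypothetical stand-in for the sampled objective of a stochastic program, not anything from the paper; this illustrates only the sampling schemes, not the sequential stopping procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    # Illustrative monotone integrand; E[f(U)] = e - 1 for U ~ Uniform(0, 1).
    return np.exp(u)

n = 1000

# Plain Monte Carlo: n i.i.d. uniform draws.
u = rng.random(n)
mc_estimate = f(u).mean()

# Antithetic variates: average f(u) with f(1 - u) over n/2 paired draws.
# Negative correlation between the pair members lowers the estimator variance.
u_half = rng.random(n // 2)
av_estimate = 0.5 * (f(u_half) + f(1.0 - u_half)).mean()

# Latin hypercube sampling in one dimension: partition [0, 1] into n
# equal-probability strata and draw exactly one point from each stratum.
strata_points = (np.arange(n) + rng.random(n)) / n
lhs_estimate = f(strata_points).mean()

true_value = np.e - 1.0
```

Both AV and LHS keep the estimators unbiased; they only change how the uniform draws are arranged, which is what makes them easy to bolt onto an existing sampling-based solver.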


Related research:

- Online Variance Reduction for Stochastic Optimization (02/13/2018)
  Modern stochastic optimization methods often rely on uniform sampling wh...
- Sequential Estimation using Hierarchically Stratified Domains with Latin Hypercube Sampling (05/22/2023)
  Quantifying the effect of uncertainties in systems where only point eval...
- A Broad and General Sequential Sampling Scheme (07/15/2022)
  In this paper, we propose a broad and general sequential sampling scheme...
- AVaN Pack: An Analytical/Numerical Solution for Variance-Based Sensitivity Analysis (12/22/2019)
  Sensitivity analysis is an important concept to analyze the influences o...
- Online Variance Reduction with Mixtures (03/29/2019)
  Adaptive importance sampling for stochastic optimization is a promising ...
- Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs (06/10/2018)
  Techniques for reducing the variance of gradient estimates used in stoch...
- Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling (06/04/2018)
  Learning the minimum/maximum mean among a finite set of distributions is...
