Non-Asymptotic Bounds for Zeroth-Order Stochastic Optimization

02/26/2020
by Nirav Bhavsar, et al.

We consider the problem of optimizing an objective function, with and without convexity, in a simulation-optimization context where only stochastic zeroth-order information is available. We study two techniques for estimating the gradient and Hessian, namely simultaneous perturbation (SP) and Gaussian smoothing (GS). We introduce an optimization oracle that captures a setting in which the function measurements carry an estimation error that can be controlled. This oracle is appealing in several practical contexts where the objective has to be estimated from i.i.d. samples, and increasing the number of samples reduces the estimation error. In the stochastic non-convex optimization setting, we analyze zeroth-order variants of the randomized stochastic gradient (RSG) and randomized stochastic quasi-Newton (RSQN) algorithms with a biased gradient/Hessian oracle, as well as with the variant that incorporates an estimation-error component. In particular, we provide non-asymptotic bounds on the performance of both algorithms, and our results yield a guideline for choosing the batch size for estimation so that the overall error bound matches the one obtained when there is no estimation error. Next, in the stochastic convex optimization setting, we provide non-asymptotic bounds that hold in expectation for the last iterate of a stochastic gradient descent (SGD) algorithm; our bound for the GS variant of SGD matches the bound for SGD with unbiased gradient information. We perform simulation experiments on synthetic as well as real-world datasets, and the empirical results validate the theoretical findings.
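To make the two estimators and the controllable-error oracle concrete, here is a minimal NumPy sketch. Everything in it is illustrative rather than taken from the paper: `noisy_oracle` is a hypothetical stand-in for the measurement oracle (averaging `batch_size` i.i.d. samples), and the perturbation sizes, direction count, and step-size schedule are arbitrary choices. The estimators follow the standard GS and SP constructions the abstract refers to.

```python
import numpy as np

def noisy_oracle(x, batch_size, rng):
    """Hypothetical controllable-error oracle: returns the average of
    `batch_size` i.i.d. noisy measurements of a toy quadratic objective.
    Increasing `batch_size` shrinks the estimation error."""
    return float(np.mean(0.5 * np.sum(x ** 2) + rng.standard_normal(batch_size)))

def gs_gradient(f, x, nu, batch_size, rng, num_dirs=10):
    """Gaussian-smoothing (GS) gradient estimate:
    (1/nu) * E_u[(f(x + nu*u) - f(x)) * u] with u ~ N(0, I_d)."""
    fx = f(x, batch_size, rng)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + nu * u, batch_size, rng) - fx) / nu * u
    return g / num_dirs

def sp_gradient(f, x, delta, batch_size, rng):
    """Simultaneous-perturbation (SP) gradient estimate along a single
    Rademacher direction; since Delta_i is in {-1, +1}, 1/Delta_i == Delta_i."""
    Delta = rng.choice([-1.0, 1.0], size=x.size)
    fp = f(x + delta * Delta, batch_size, rng)
    fm = f(x - delta * Delta, batch_size, rng)
    return (fp - fm) / (2.0 * delta) * Delta

# Zeroth-order SGD on the toy objective with a diminishing step size.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
for k in range(1, 201):
    g = gs_gradient(noisy_oracle, x, nu=0.05, batch_size=100, rng=rng)
    x = x - 0.1 / np.sqrt(k) * g
print("final distance to optimum:", np.linalg.norm(x))
```

In this sketch, the batch size plays the role the bounds identify: averaging more measurements per query reduces the oracle's estimation error, at the cost of more samples per iteration.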

