Online Statistical Inference for Gradient-free Stochastic Optimization

02/05/2021
by Xi Chen et al.

As gradient-free stochastic optimization has recently attracted growing attention across a wide range of applications, so has the demand for uncertainty quantification of the parameters obtained from such approaches. In this paper, we investigate statistical inference for model parameters based on gradient-free stochastic optimization methods that use only function values rather than gradients. We first present central limit theorems for Polyak-Ruppert-averaging-type gradient-free estimators; the asymptotic distribution reflects the trade-off between the rate of convergence and function query complexity. We then construct valid confidence intervals for model parameters by estimating the covariance matrix in a fully online fashion. We further give a general gradient-free framework for covariance estimation and analyze the role of function query complexity in the convergence rate of the covariance estimator. This yields a one-pass, computationally efficient procedure that simultaneously produces an estimator of the model parameters and conducts statistical inference. Finally, we provide numerical experiments that verify our theoretical results and illustrate extensions of our method to various machine learning and deep learning applications.
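To make the setting concrete, below is a minimal Python sketch of the kind of procedure the abstract describes: a two-point (function-value-only) gradient estimate drives SGD, the iterates are Polyak-Ruppert averaged, and a non-overlapping batch-means covariance estimate, used here as a simple stand-in for the paper's fully online covariance estimator, yields per-coordinate confidence intervals. The step-size schedule, the fixed finite-difference spacing, the quadratic test objective, and all function names are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def two_point_grad(f, x, h, rng):
    """Two-point zeroth-order gradient estimate: two function evaluations
    along a random Gaussian direction, no gradient access."""
    u = rng.standard_normal(x.shape)
    return (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u

def zo_sgd_with_ci(f, x0, n=100_000, n_batches=50, h=0.05, seed=0):
    """Two-point ZO-SGD with Polyak-Ruppert averaging.  A non-overlapping
    batch-means covariance estimate (a stand-in for the paper's fully
    online estimator) yields per-coordinate 95% confidence intervals."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    xbar = np.zeros_like(x)            # Polyak-Ruppert running average
    m = n // n_batches                 # batch length
    batch_sum = np.zeros_like(x)
    batch_means = []
    for t in range(1, n + 1):
        eta = 0.5 * t ** -0.75         # eta_t ~ t^{-alpha}, alpha in (1/2, 1)
        x -= eta * two_point_grad(f, x, h, rng)
        xbar += (x - xbar) / t
        batch_sum += x
        if t % m == 0:                 # close the current batch
            batch_means.append(batch_sum / m)
            batch_sum = np.zeros_like(x)
    B = np.asarray(batch_means)        # shape (n_batches, d)
    cov = m * np.cov(B, rowvar=False)  # long-run covariance estimate
    half = 1.96 * np.sqrt(np.diag(cov) / n)   # 95% normal quantile
    return xbar, np.stack([xbar - half, xbar + half], axis=1)

# Toy usage: noisy quadratic whose minimizer is theta* = (1, -2).
rng_noise = np.random.default_rng(1)
theta_star = np.array([1.0, -2.0])
f = lambda z: 0.5 * np.sum((z - theta_star) ** 2) + 0.1 * rng_noise.standard_normal()
estimate, ci = zo_sgd_with_ci(f, x0=np.zeros(2))
print("estimate:", estimate)
print("95% CIs: ", ci)
```

Note that the paper analyzes how the function query complexity enters the asymptotic covariance; the fixed spacing `h` above sidesteps that trade-off purely for simplicity of the demo.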

