Minimax Efficient Finite-Difference Stochastic Gradient Estimators Using Black-Box Function Evaluations

07/08/2020 · by Henry Lam, et al.

We consider stochastic gradient estimation using noisy black-box function evaluations. A standard approach is the finite-difference method or one of its variants. While natural, to our knowledge it remains an open question whether its statistical accuracy is the best possible. This paper argues that it is, by showing that the central finite-difference scheme is a nearly minimax optimal zeroth-order gradient estimator, both among the class of linear estimators and within the much larger class of all (nonlinear) estimators.
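To make the setting concrete, the following is a minimal sketch (not taken from the paper) of a central finite-difference gradient estimator applied to a noisy black-box function. The perturbation size h, the replication count n_reps, and the toy quadratic objective are all illustrative assumptions, not the paper's tuned choices; the point is only to show the estimator whose statistical accuracy the paper analyzes.

    import numpy as np

    def central_fd_gradient(f, x, h=1e-2, n_reps=10):
        """Estimate the gradient of a noisy black-box function f at x
        via central finite differences, averaging n_reps noisy
        evaluations per perturbed point to reduce variance.
        (Illustrative sketch; h and n_reps are assumed parameters.)"""
        x = np.asarray(x, dtype=float)
        d = x.size
        grad = np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = h
            # Average repeated noisy evaluations at x + h*e_i and x - h*e_i.
            f_plus = np.mean([f(x + e) for _ in range(n_reps)])
            f_minus = np.mean([f(x - e) for _ in range(n_reps)])
            grad[i] = (f_plus - f_minus) / (2.0 * h)
        return grad

    # Example: noisy quadratic f(x) = ||x||^2 + Gaussian noise; true gradient is 2x.
    rng = np.random.default_rng(0)
    noisy_f = lambda x: float(np.dot(x, x)) + rng.normal(scale=0.01)
    print(central_fd_gradient(noisy_f, np.array([1.0, -2.0]), h=0.05, n_reps=50))

The familiar trade-off visible here is what a minimax analysis formalizes: shrinking h reduces the Taylor-expansion bias of the difference quotient but amplifies the evaluation noise through the 1/(2h) factor, so accuracy hinges on balancing the two.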
