Related papers:
- Information-Theoretic Lower Bounds for Zero-Order Stochastic Gradient Estimation: In this paper we analyze the necessary number of samples to estimate the...
- On the Application of Danskin's Theorem to Derivative-Free Minimax Optimization: Motivated by Danskin's theorem, gradient-based methods have been applied...
- Provable Gradient Variance Guarantees for Black-Box Variational Inference: Recent variational inference methods use stochastic gradient estimators...
- Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution: Learning models with discrete latent variables using stochastic gradient...
- Sequential Learning of Active Subspaces: In recent years, active subspace methods (ASMs) have become a popular me...
- Rao-Blackwellized Stochastic Gradients for Discrete Distributions: We wish to compute the gradient of an expectation over a finite or count...
- Enhanced Balancing of Bias-Variance Tradeoff in Stochastic Estimation: A Minimax Perspective: Biased stochastic estimators, such as finite-differences for noisy gradi...
Minimax Efficient Finite-Difference Stochastic Gradient Estimators Using Black-Box Function Evaluations
We consider stochastic gradient estimation using noisy black-box function evaluations. A standard approach is the finite-difference method or one of its variants. While natural, to our knowledge it has remained open whether its statistical accuracy is the best possible. This paper argues that it is, by showing that central finite-difference is a nearly minimax optimal zeroth-order gradient estimator, both within the class of linear estimators and within the much larger class of all (nonlinear) estimators.
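As a rough illustration of the setting (not the estimator construction or analysis from the paper), the sketch below implements a coordinate-wise central finite-difference gradient estimate from noisy black-box evaluations. The function name central_fd_gradient, the perturbation size h, and the per-coordinate sample averaging are assumptions made for this example only.

```python
import numpy as np

def central_fd_gradient(f, x, h=1e-2, n_samples=10):
    """Estimate the gradient of a noisy black-box function f at x
    using central finite differences along each coordinate.

    Each partial derivative is approximated by
        (f(x + h*e_i) - f(x - h*e_i)) / (2*h),
    averaged over n_samples noisy evaluations to reduce variance.
    """
    x = np.asarray(x, dtype=float)
    d = x.size
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        # Average several noisy central differences along coordinate i.
        diffs = [(f(x + e) - f(x - e)) / (2.0 * h) for _ in range(n_samples)]
        grad[i] = np.mean(diffs)
    return grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy evaluation of f(x) = ||x||^2, whose true gradient is 2x.
    noisy_quadratic = lambda x: float(np.dot(x, x)) + rng.normal(scale=0.01)
    x0 = np.array([1.0, -2.0, 0.5])
    print(central_fd_gradient(noisy_quadratic, x0))
```

In this sketch the bias comes from the finite perturbation h and the variance from the evaluation noise; the tradeoff between the two is exactly the kind of question the minimax analysis addresses.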