A Note on Zeroth-Order Optimization on the Simplex

08/02/2022
by Tijana Zrnic, et al.

We construct a zeroth-order gradient estimator for a smooth function defined on the probability simplex. The proposed estimator queries only points in the simplex. We prove that projected gradient descent and the exponential weights algorithm, when run with this estimator in place of exact gradients, converge at a rate of 𝒪(T^{-1/4}).
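
The abstract describes the construction only at a high level. As an illustration of the general idea (not necessarily the authors' exact estimator), the Python sketch below pairs a two-point zeroth-order estimator whose queries stay on the simplex with the exponential weights update. The tangent-space sampling, the (n-1) scaling, the shrinking of the smoothing radius, the step sizes, and the quadratic test function are all illustrative assumptions.

import numpy as np

# Illustrative sketch only: a tangent-space two-point estimator combined with
# the exponential weights (exponentiated gradient) update. This is one common
# way to realize the idea described in the abstract, not necessarily the
# construction analyzed in the paper.

def tangent_direction(n, rng):
    # Random unit direction in the tangent space of the simplex,
    # i.e. a unit vector whose coordinates sum to zero.
    u = rng.standard_normal(n)
    u -= u.mean()
    return u / np.linalg.norm(u)

def zeroth_order_gradient(f, x, delta, rng):
    # Two-point estimate of the tangential gradient of f at x using only
    # function values at points of the simplex. Assumes x lies in the
    # relative interior; delta is shrunk so that x +/- delta*u stays
    # nonnegative (the coordinates of u sum to zero, so both query points
    # automatically sum to one).
    n = x.size
    u = tangent_direction(n, rng)
    safe_delta = min(delta, 0.5 * x.min() / np.abs(u).max())
    slope = (f(x + safe_delta * u) - f(x - safe_delta * u)) / (2 * safe_delta)
    return (n - 1) * slope * u

def exponential_weights_step(x, grad, eta):
    # Exponentiated-gradient (mirror descent) step; the iterate stays in the
    # relative interior of the simplex.
    w = x * np.exp(-eta * grad)
    return w / w.sum()

# Toy run: minimize the smooth quadratic f(p) = sum_i a_i p_i^2 over the
# simplex using function evaluations only. Its minimizer puts mass
# proportional to 1/a_i on coordinate i.
rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0, 4.0])
f = lambda p: float(a @ (p * p))
x = np.full(4, 0.25)  # start at the uniform distribution
for t in range(1, 2001):
    g = zeroth_order_gradient(f, x, delta=0.05 / np.sqrt(t), rng=rng)
    x = exponential_weights_step(x, g, eta=0.5 / np.sqrt(t))
print(x)  # should be roughly proportional to 1/a, about (0.48, 0.24, 0.16, 0.12)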

Related research

12/13/2017  Potential-Function Proofs for First-Order Methods
This note discusses proofs for convergence of first-order methods based ...

08/13/2023  Estimator Meets Equilibrium Perspective: A Rectified Straight Through Estimator for Binary Neural Networks Training
Binarization of neural networks is a dominant paradigm in neural network...

06/29/2020  Natural Gradient for Combined Loss Using Wavelets
Natural gradients have been widely used in optimization of loss function...

12/28/2022  Robustifying Markowitz
Markowitz mean-variance portfolios with sample mean and covariance as in...

05/31/2020  Tree-Projected Gradient Descent for Estimating Gradient-Sparse Parameters on Graphs
We study estimation of a gradient-sparse parameter vector θ^* ∈ ℝ^p, havi...

09/08/2022  Stochastic gradient descent with gradient estimator for categorical features
Categorical data are present in key areas such as health or supply chain...

02/02/2022  Do Differentiable Simulators Give Better Policy Gradients?
Differentiable simulators promise faster computation time for reinforcem...
