An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

07/31/2015
by   Ohad Shamir, et al.

We consider the closely related problems of bandit convex optimization with two-point feedback and zero-order stochastic convex optimization with two function evaluations per round. We provide a simple algorithm and analysis that is optimal for convex Lipschitz functions. This improves on dujww13, which is optimal only for smooth functions; moreover, our algorithm and analysis are simpler and readily extend to non-Euclidean problems. The algorithm is based on a small but surprisingly powerful modification of the gradient estimator.
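To make the two-point feedback setting concrete, here is a minimal Python sketch of a standard two-point zero-order gradient estimator plugged into projected gradient descent. This is an illustrative baseline, not the paper's exact modified estimator or analysis; the objective, step size eta, smoothing radius delta, and ball constraint below are assumptions chosen for the example.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Standard two-point estimator (illustrative, not the paper's exact
    construction): query f at x + delta*u and x - delta*u for a random unit
    direction u, and rescale the difference of the two values."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def zero_order_pgd(f, x0, eta, delta, T, radius=1.0, seed=0):
    """Projected gradient descent over a Euclidean ball, driven by the
    two-point gradient estimates (two function evaluations per round)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(T):
        g = two_point_gradient_estimate(f, x, delta, rng)
        x = x - eta * g
        nrm = np.linalg.norm(x)
        if nrm > radius:                        # projection back onto the ball
            x *= radius / nrm
    return x

if __name__ == "__main__":
    f = lambda z: np.abs(z).sum()               # convex, Lipschitz, non-smooth
    x_hat = zero_order_pgd(f, x0=np.ones(5), eta=0.05, delta=1e-3, T=2000)
    print(f(x_hat))
```

The example uses a non-smooth Lipschitz objective on purpose: the convex Lipschitz (not necessarily smooth) case is exactly the regime where the abstract claims an improvement over the smooth-only guarantee of dujww13.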

