A Second-Order Method for Stochastic Bandit Convex Optimisation

02/10/2023
by Tor Lattimore, et al.

We introduce a simple and efficient algorithm for unconstrained zeroth-order stochastic convex bandits and prove that its regret is at most (1 + r/d)[d^{1.5}√n + d^3] · polylog(n, d, r), where n is the horizon, d is the dimension, and r is the radius of a known ball containing the minimiser of the loss.
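As context for the terms n, d and r in the bound above, the sketch below illustrates the zeroth-order bandit interaction protocol on a toy quadratic loss: each round the learner plays one point and observes only a noisy loss value, with the minimiser known to lie in a ball of radius r. The update rule is a generic single-point spherical gradient estimator (in the spirit of Flaxman et al., 2005), not the second-order method of the paper; the loss, noise level, step size and smoothing radius are illustrative assumptions.

```python
# Minimal sketch of unconstrained zeroth-order stochastic bandit convex
# optimisation: bandit feedback only (one noisy loss value per round).
# The learner here uses a generic one-point gradient estimate; it is NOT
# the paper's second-order algorithm. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

d = 5            # dimension
n = 20_000       # horizon
r = 1.0          # radius of a ball known to contain the minimiser
delta = 0.25     # smoothing radius for the one-point gradient estimate
eta = 0.001      # step size

x_star = rng.uniform(-r / np.sqrt(d), r / np.sqrt(d), size=d)  # unknown minimiser

def noisy_loss(x: np.ndarray) -> float:
    """Convex loss f(x) = ||x - x_star||^2 observed with additive Gaussian noise."""
    return float(np.sum((x - x_star) ** 2) + rng.normal(scale=0.05))

x = np.zeros(d)  # start at the centre of the known ball
regret = 0.0

for t in range(n):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                    # uniform direction on the unit sphere
    played = x + delta * u                    # the single point queried this round
    y = noisy_loss(played)                    # bandit feedback: one noisy loss value
    g_hat = (d / delta) * y * u               # one-point gradient estimate
    x = x - eta * g_hat                       # gradient step
    if np.linalg.norm(x) > r:                 # keep the iterate in the known ball
        x *= r / np.linalg.norm(x)
    regret += np.sum((played - x_star) ** 2)  # excess loss of the played point

# With a fixed smoothing radius the per-round regret has an O(delta^2)
# exploration floor; decaying delta and eta over time would remove it.
print(f"average regret over {n} rounds: {regret / n:.4f}")
print(f"final distance to minimiser:    {np.linalg.norm(x - x_star):.4f}")
```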

Related research

05/31/2020 · Improved Regret for Zeroth-Order Adversarial Bandit Convex Optimisation
We prove that the information-theoretic upper bound on the minimax regre...

02/01/2023 · Bandit Convex Optimisation Revisited: FTRL Achieves Õ(t^1/2) Regret
We show that a kernel estimator using multiple function evaluations can ...

03/10/2021 · Linear Bandits on Uniformly Convex Sets
Linear bandit algorithms yield 𝒪̃(n√(T)) pseudo-regret bounds on compact...

02/10/2021 · An Efficient Pessimistic-Optimistic Algorithm for Constrained Linear Bandits
This paper considers stochastic linear bandits with general constraints....

02/12/2022 · Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits
We consider the problem of combining and learning over a set of adversar...

06/01/2021 · Minimax Regret for Bandit Convex Optimisation of Ridge Functions
We analyse adversarial bandit convex optimisation with an adversary that...

09/14/2016 · Stochastic Heavy Ball
This paper deals with a natural stochastic optimization procedure derive...
