Invariant Lipschitz Bandits: A Side Observation Approach

12/14/2022
by Nam Phuong Tran, et al.

Symmetry arises in many optimization and decision-making problems, and has attracted considerable attention from the optimization community: by exploiting such symmetries, the search for optimal solutions can be sped up significantly. Despite its success in (offline) optimization, the use of symmetry has not been well examined in online optimization, especially in the bandit literature. In this paper we therefore study the invariant Lipschitz bandit setting, a subclass of Lipschitz bandits in which the reward function and the set of arms are preserved under a group of transformations. We introduce an algorithm named UniformMesh-N, which naturally integrates side observations derived from group orbits into the UniformMesh algorithm, which uniformly discretizes the set of arms. Using this side-observation approach, we prove an improved regret upper bound that depends on the cardinality of the group, provided the group is finite. We also prove a matching regret lower bound for the invariant Lipschitz bandit class (up to logarithmic factors). We hope that our work will ignite further investigation of symmetry in bandit theory and sequential decision-making theory in general.
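The core idea described above can be illustrated with a small sketch: uniformly discretize the arm set, and whenever one arm is pulled, use invariance to update the reward estimates of every arm in its group orbit for free. Everything below (the cyclic group action, the toy reward, all function names) is an illustrative assumption, not the paper's actual UniformMesh-N algorithm.

```python
import math
import random

random.seed(0)  # for reproducibility of this toy run

def make_arms(n):
    """Uniform discretization of the unit interval [0, 1) into n arms."""
    return [i / n for i in range(n)]

def orbit(arm_index, n, group_size):
    """Orbit of an arm under a toy cyclic group of rotations.

    The group acts on indices by shifts of n // group_size, so each
    orbit has group_size elements (assuming group_size divides n).
    """
    step = n // group_size
    return {(arm_index + k * step) % n for k in range(group_size)}

def invariant_reward(x):
    """A toy reward, periodic with period 1/4, hence constant on orbits."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 4 * x)

n, group_size = 16, 4
arms = make_arms(n)
counts = [0] * n
means = [0.0] * n

for t in range(200):
    i = random.randrange(n)             # exploration policy elided
    reward = invariant_reward(arms[i])  # noiseless for simplicity
    # Side observations: invariance means this single reward value is
    # equally informative for every arm in the orbit of arm i.
    for j in orbit(i, n, group_size):
        counts[j] += 1
        means[j] += (reward - means[j]) / counts[j]
```

Each pull thus yields group_size observations instead of one, which is the mechanism behind the improved, group-cardinality-dependent regret bound claimed in the abstract.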


Related research

Batched Lipschitz Bandits (10/19/2021)
In this paper, we study the batched Lipschitz bandit problem, where the ...

Smooth Bandit Optimization: Generalization to Hölder Space (12/11/2020)
We consider bandit optimization of a smooth reward function, where the g...

Lower Bounds for γ-Regret via the Decision-Estimation Coefficient (03/06/2023)
In this note, we give a new lower bound for the γ-regret in bandit probl...

On Thompson Sampling for Smoother-than-Lipschitz Bandits (01/08/2020)
Thompson Sampling is a well established approach to bandit and reinforce...

The Symmetry between Arms and Knapsacks: A Primal-Dual Approach for Bandits with Knapsacks (02/12/2021)
In this paper, we study the bandits with knapsacks (BwK) problem and dev...

From Random Search to Bandit Learning in Metric Measure Spaces (05/19/2023)
Random Search is one of the most widely-used methods for Hyperparameter O...

Autoregressive Bandits (12/12/2022)
Autoregressive processes naturally arise in a large variety of real-worl...
