Asymptotically Optimal Bandits under Weighted Information

05/28/2021
by Matias I. Müller, et al.

We study the problem of regret minimization in a multi-armed bandit setup where the agent is allowed to play multiple arms at each round by spreading the resources usually allocated to only one arm. At each iteration the agent selects a normalized power profile and receives a Gaussian vector as outcome, where the unknown variance of each sample is inversely proportional to the power allocated to that arm. The reward corresponds to a linear combination of the power profile and the outcomes, resembling a linear bandit. By spreading the power, the agent can collect information much faster than in a traditional multi-armed bandit, at the price of reducing the accuracy of the samples. This setup is fundamentally different from that of a linear bandit: the regret is known to scale as Θ(√T) for linear bandits, while in this setup the agent receives much richer feedback, for which we derive a tight log(T) problem-dependent lower bound. We propose a Thompson-Sampling-based strategy, called Weighted Thompson Sampling (WTS), that designs the power profile as its posterior belief of each arm being the best arm, and show that its upper bound matches the derived logarithmic lower bound. Finally, we apply this strategy to a problem of control and system identification, where the goal is to estimate the maximum gain (also called the ℋ_∞-norm) of a linear dynamical system based on batches of input-output samples.
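The abstract describes Weighted Thompson Sampling only at a high level, so a minimal sketch may help fix ideas. The Python below assumes Gaussian arms with a known noise scale `sigma2` (the paper treats the variance as unknown) and a near-flat Gaussian prior; the function name `weighted_thompson_sampling`, the Monte Carlo estimate of each arm's posterior probability of being best, and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_thompson_sampling(theta, T, n_samples=1000, sigma2=1.0, seed=0):
    """Sketch of Weighted Thompson Sampling (WTS) for the weighted-information
    bandit: at each round the agent plays a normalized power profile w over
    the K arms, observes one Gaussian sample per arm with variance
    sigma2 / w[k], and collects the reward w @ y.

    `theta` holds the (unknown to the agent) true arm means; it is used here
    only to simulate outcomes.
    """
    rng = np.random.default_rng(seed)
    K = len(theta)
    # Gaussian posterior per arm: mean and precision (1 / variance).
    post_mean = np.zeros(K)
    post_prec = np.full(K, 1e-6)   # near-flat prior
    total_reward = 0.0
    for _ in range(T):
        # Posterior probability of each arm being best, via Monte Carlo:
        # draw from each arm's posterior and count how often it wins.
        draws = rng.normal(post_mean, 1.0 / np.sqrt(post_prec),
                           size=(n_samples, K))
        wins = np.bincount(draws.argmax(axis=1), minlength=K)
        w = np.maximum(wins / n_samples, 1e-12)
        w /= w.sum()               # normalized power profile
        # One Gaussian outcome per arm; noise shrinks with allocated power.
        y = rng.normal(theta, np.sqrt(sigma2 / w))
        total_reward += w @ y      # reward: linear combination of profile and outcomes
        # Conjugate Gaussian update: a sample with variance sigma2 / w[k]
        # carries precision w[k] / sigma2.
        sample_prec = w / sigma2
        post_mean = (post_prec * post_mean + sample_prec * y) / (post_prec + sample_prec)
        post_prec = post_prec + sample_prec
    return w, post_mean, total_reward
```

As a usage example, `weighted_thompson_sampling(np.array([0.2, 0.5, 0.9]), T=2000)` should return a power profile increasingly concentrated on the third arm as the posterior sharpens, mirroring how the logarithmic regret bound forces sub-optimal arms to receive vanishing power.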


