Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics

02/27/2023
by   Siddharth Chandak, et al.

Consider a decision-maker that can pick one out of K actions to control an unknown system, for T turns. The actions are interpreted as different configurations or policies. Holding the same action fixed, the system asymptotically converges to a unique equilibrium that depends on this action. The dynamics of the system are unknown to the decision-maker, who can only observe a noisy reward at the end of every turn. The decision-maker wants to maximize its accumulated reward over the T turns. Learning which equilibria are better yields higher rewards, but waiting for the system to converge to equilibrium costs valuable time. Existing bandit algorithms, whether stochastic or adversarial, achieve linear (trivial) regret for this problem. We present a novel algorithm, termed Upper Equilibrium Concentration Bound (UECB), that switches away from an action quickly when waiting for the system to reach that action's equilibrium is not worthwhile. This is enabled by employing convergence bounds to determine how far the system is from equilibrium. We prove that UECB achieves a regret of 𝒪(log(T) + τ_c log(τ_c) + τ_c log log(T)) for this equilibrium bandit problem, where τ_c is the worst-case approximate convergence time to equilibrium. We then show that both epidemic control and game control are special cases of equilibrium bandits, where τ_c log(τ_c) typically dominates the regret. Finally, we test UECB numerically for both of these applications.
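The abstract does not spell out the UECB index, but the idea it describes — a UCB-style rule whose optimism term also accounts for how far the currently observed reward may be from the action's equilibrium value — can be sketched as follows. Everything below is an assumption for illustration: the toy geometric-convergence dynamics in `play`, the equilibrium rewards `mu_eq`, the known contraction rate `gamma`, and the particular form of the convergence slack `gamma**n` are all hypothetical stand-ins, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T = 3, 2000
mu_eq = np.array([0.3, 0.7, 0.5])  # hypothetical equilibrium rewards per action
gamma = 0.9                        # assumed known geometric convergence rate
sigma = 0.05                       # observation noise level

def play(arm, hold):
    """Noisy reward after holding `arm` for `hold` consecutive turns:
    the toy system drifts geometrically toward that arm's equilibrium."""
    return mu_eq[arm] * (1 - gamma**hold) + rng.normal(0.0, sigma)

counts = np.zeros(K)   # pulls per action
means = np.zeros(K)    # running empirical mean reward per action
current, hold = -1, 0  # currently held action and how long it has been held
total = 0.0

for t in range(1, T + 1):
    n = np.maximum(counts, 1)
    # Index = empirical mean + noise confidence radius + convergence slack.
    # The slack term inflates the bound for actions whose system state
    # may still be far from equilibrium; it shrinks the longer we observe.
    bonus = np.sqrt(2.0 * np.log(t) / n)
    slack = gamma**n
    idx = np.where(counts == 0, np.inf, means + bonus + slack)
    arm = int(np.argmax(idx))
    hold = hold + 1 if arm == current else 1  # switching resets convergence
    current = arm
    r = play(arm, hold)
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]
    total += r
```

The key design point the abstract hints at is visible in the `slack` term: unlike a standard UCB, the index stays optimistic about an action until its transient has provably decayed, so the algorithm neither abandons a slowly converging action prematurely nor waits on one whose equilibrium is demonstrably inferior.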

