Adversarial Bandits with Knapsacks

11/28/2018
by Nicole Immorlica, et al.

We consider Bandits with Knapsacks (henceforth, BwK), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known knapsack problem: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While the prior work on BwK focused on the stochastic version, we pioneer the other extreme in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the "classic" adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio: the ratio of the benchmark reward to the algorithm's reward. We design an algorithm with competitive ratio O(log T) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version and use it as a subroutine to solve the latter.
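To make the abstract's "regret minimization in repeated games" framework concrete, here is a minimal, hypothetical Python sketch of the underlying idea: a primal bandit learner (Exp3) picks arms to maximize a Lagrangian that combines reward with a budget penalty, while a dual full-feedback learner (Hedge) shifts weight toward the most-consumed resources to minimize it. The toy instance, the exact Lagrangian form, the learner implementations, and all parameters below are our own illustrative assumptions, not the paper's published algorithm or code.

```python
# Illustrative sketch (not the paper's code): a BwK-style interaction framed as
# a repeated game between a primal bandit learner over arms and a dual learner
# over resources. The Lagrangian form and all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

K, d = 3, 2            # number of arms, number of resources
T, B = 10_000, 2_000   # time horizon and per-resource budget

# Toy stochastic instance: Bernoulli rewards and consumptions (all in [0, 1]).
mean_reward = np.array([0.9, 0.5, 0.2])
mean_cost = np.array([[0.90, 0.10],   # arm 0: high reward, heavy on resource 0
                      [0.30, 0.30],
                      [0.05, 0.05]])

class Exp3:
    """Adversarial bandit learner: plays the primal role (chooses arms)."""
    def __init__(self, n, gamma):
        self.n, self.gamma, self.w = n, gamma, np.ones(n)
    def draw(self):
        p = self.w / self.w.sum()
        self.p = (1 - self.gamma) * p + self.gamma / self.n
        return int(rng.choice(self.n, p=self.p))
    def update(self, i, payoff):                 # payoff must lie in [0, 1]
        est = payoff / self.p[i]                 # importance-weighted estimate
        self.w[i] *= np.exp(self.gamma * est / self.n)
        self.w /= self.w.max()                   # rescale to avoid overflow

class Hedge:
    """Full-feedback learner: plays the dual role (chooses resources)."""
    def __init__(self, n, eta):
        self.n, self.eta, self.w = n, eta, np.ones(n)
    def probs(self):
        return self.w / self.w.sum()
    def update(self, losses):                    # smaller loss -> more weight
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.max()

primal = Exp3(K, gamma=min(1.0, np.sqrt(K * np.log(K) / T)))
dual = Hedge(d, eta=np.sqrt(np.log(d) / T))
spent, total_reward, rounds = np.zeros(d), 0.0, 0

for t in range(T):
    a = primal.draw()
    y = dual.probs()                                   # dual mix over resources
    r = float(rng.random() < mean_reward[a])           # realized reward
    c = (rng.random(d) < mean_cost[a]).astype(float)   # realized consumption
    if (spent + c > B).any():                          # hard stop: budget gone
        break
    spent += c; total_reward += r; rounds = t + 1
    # One plausible Lagrangian: reward plus a penalty scaled by how fast the
    # dual-weighted resources are consumed relative to the budget rate B/T.
    lag = r + 1.0 - (T / B) * float(y @ c)
    lo, hi = 1.0 - T / B, 2.0                          # range of lag
    primal.update(a, (lag - lo) / (hi - lo))           # rescaled to [0, 1]
    # The dual wants to *minimize* the Lagrangian, so it moves weight toward
    # the resources consumed fastest; a loss decreasing in c_j achieves that.
    dual.update(-c)

print(f"stopped after {rounds} rounds, reward {total_reward:.0f}, spend {spent}")
```

The benchmark in the competitive-ratio guarantee is the best fixed distribution over actions, whose expected performance can be expressed as a linear program; the sketch above only tracks the algorithm's realized reward and stops once any budget would be exceeded.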


Related research

Better Algorithms for Stochastic Bandits with Adversarial Corruptions (02/22/2019)
We study the stochastic multi-armed bandits problem in the presence of a...

Approximately Stationary Bandits with Knapsacks (02/28/2023)
Bandits with Knapsacks (BwK), the generalization of the Multi-Armed Band...

Adversarial Dueling Bandits (10/27/2020)
We introduce the problem of regret minimization in Adversarial Dueling B...

Adaptive Discretization for Adversarial Bandits with Continuous Action Spaces (06/22/2020)
Lipschitz bandits is a prominent version of multi-armed bandits that stu...

Learning to Optimize under Non-Stationarity (10/06/2018)
We introduce algorithms that achieve state-of-the-art dynamic regret bou...

Online Learning with Vector Costs and Bandits with Knapsacks (10/14/2020)
We introduce online learning with vector costs () where in each time ste...

Online Fair Revenue Maximizing Cake Division with Non-Contiguous Pieces in Adversarial Bandits (11/29/2021)
The classic cake-cutting problem provides a model for addressing the fai...
