Multi-armed Bandits with Cost Subsidy

11/03/2020
by Deeksha Sinha, et al.

In this paper, we consider a novel variant of the multi-armed bandit (MAB) problem, MAB with cost subsidy, which models many real-life applications where the learning agent must pay to select an arm and is concerned with optimizing both cumulative costs and rewards. We present two applications, the intelligent SMS routing problem and the ad audience optimization problem, faced by many businesses (especially online platforms), and show how our problem uniquely captures key features of these applications. We show that naive generalizations of existing MAB algorithms, such as Upper Confidence Bound and Thompson Sampling, do not perform well on this problem. We then establish a fundamental lower bound of Ω(K^1/3 T^2/3) on the performance of any online learning algorithm for this problem, highlighting its hardness relative to the classical MAB problem (where T is the time horizon and K is the number of arms). We also present a simple variant of explore-then-commit and establish near-optimal regret bounds for this algorithm. Lastly, we perform extensive numerical simulations to understand the behavior of a suite of algorithms on various instances and provide a practical guide for choosing among them.
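The explore-then-commit variant mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: it assumes known deterministic arm costs, rewards in [0, 1], and a fixed exploration budget (the paper tunes the exploration length, on the order of T^2/3, to obtain the near-optimal regret bound). All function and variable names here are hypothetical.

```python
def explore_then_commit(arms, costs, alpha, horizon, explore_rounds):
    """Explore-then-commit sketch for MAB with a cost subsidy.

    arms: list of callables; arms[i]() returns a stochastic reward in [0, 1]
    costs: known cost of pulling each arm (an assumption of this sketch)
    alpha: subsidy factor -- an arm is considered feasible if its estimated
           reward is at least (1 - alpha) times the best estimated reward
    Returns the index of the committed arm and the number of remaining pulls.
    """
    K = len(arms)
    totals = [0.0] * K
    # Exploration phase: pull every arm the same number of times.
    for _ in range(explore_rounds):
        for i in range(K):
            totals[i] += arms[i]()
    means = [t / explore_rounds for t in totals]
    # Commit phase: among arms clearing the subsidized reward bar,
    # pick the cheapest one.
    bar = (1 - alpha) * max(means)
    feasible = [i for i in range(K) if means[i] >= bar]
    committed = min(feasible, key=lambda i: costs[i])
    # Exploitation: the committed arm is played for the rest of the horizon.
    remaining_pulls = horizon - K * explore_rounds
    return committed, remaining_pulls
```

For example, with three arms of mean rewards (1.0, 0.95, 0.5), costs (3.0, 1.0, 0.1), and alpha = 0.1, arms 0 and 1 both clear the subsidized bar of 0.9, and the sketch commits to the cheaper arm 1 rather than the highest-reward arm 0 -- the key trade-off the cost-subsidy model captures.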

