Practical Bandits: An Industry Perspective

02/02/2023
by Bram van den Akker, et al.

The bandit paradigm provides a unified modelling framework for problems that require decision-making under uncertainty. Because many business metrics can be viewed as rewards (a.k.a. utilities) that result from actions, bandit algorithms have seen large and growing interest in industrial applications such as search, recommendation and advertising. Indeed, with the bandit lens comes the promise of directly optimising for the metrics we care about. Nevertheless, the road to successfully applying bandits in production is not an easy one. Even when the action space and rewards are well-defined, practitioners still need to decide between multi-armed and contextual approaches, on- and off-policy setups, delayed and immediate feedback, myopic and long-term optimisation, and so on. To make matters worse, industrial platforms typically give rise to large action spaces in which existing approaches tend to break down. The research literature on these topics is broad and vast, which can overwhelm practitioners whose primary aim is to solve practical problems and who therefore need to decide on a specific instantiation or approach for each project. This tutorial takes a step towards filling that gap between the theory and practice of bandits. Our goal is to present a unified overview of the field and its existing terminology, concepts and algorithms, with a focus on problems relevant to industry. We hope our industrial perspective will help future practitioners who wish to leverage the bandit paradigm for their applications.
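The tutorial's content is not reproduced on this page, but for readers new to the paradigm, the sketch below illustrates one of its simplest instantiations: Thompson sampling for a Bernoulli multi-armed bandit, where each arm's unknown reward rate (e.g. a click-through rate) is tracked with a Beta posterior. This is a minimal illustration under assumed names and numbers (`thompson_sampling`, `true_ctrs`, the simulated rates are all hypothetical), not code from the paper.

```python
import random

# Minimal sketch: Thompson sampling for a Bernoulli multi-armed bandit.
# Each arm's unknown click-through rate is modelled with a Beta posterior;
# at every step we sample from each posterior and play the arm with the
# highest sample. All names and numbers are illustrative assumptions,
# not taken from the tutorial.

def thompson_sampling(true_ctrs, n_rounds=10_000, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_ctrs)
    successes = [1] * n_arms  # Beta(1, 1) uniform prior for each arm
    failures = [1] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw an estimated reward for each arm from its posterior.
        samples = [rng.betavariate(successes[a], failures[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Observe a Bernoulli reward (e.g. click / no click) from the environment.
        reward = 1 if rng.random() < true_ctrs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return successes, failures, total_reward

if __name__ == "__main__":
    # Three hypothetical actions with unknown click-through rates.
    s, f, r = thompson_sampling([0.02, 0.05, 0.04])
    print("posterior means:", [si / (si + fi) for si, fi in zip(s, f)])
    print("cumulative reward:", r)
```

In this on-policy, immediate-feedback setting the posterior concentrates on the best arm over time; the contextual, off-policy, and delayed-feedback variants the abstract mentions each replace one of these simplifying assumptions.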

Related research

A Map of Bandits for E-commerce (07/01/2021)
The rich body of Bandit literature not only offers a diverse toolbox of ...

Contextual Bandits with Large Action Spaces: Made Practical (07/12/2022)
A central problem in sequential decision making is to develop algorithms...

Extending Open Bandit Pipeline to Simulate Industry Challenges (09/09/2022)
Bandit algorithms are often used in the e-commerce industry to train Mac...

Homomorphically Encrypted Linear Contextual Bandit (03/17/2021)
Contextual bandit is a general framework for online learning in sequenti...

DORB: Dynamically Optimizing Multiple Rewards with Bandits (11/15/2020)
Policy gradients-based reinforcement learning has proven to be a promisi...

Off-policy Bandits with Deficient Support (06/16/2020)
Learning effective contextual-bandit policies from past actions of a dep...

Effects of Model Misspecification on Bayesian Bandits: Case Studies in UX Optimization (10/07/2020)
Bayesian bandits using Thompson Sampling have seen increasing success in...
