Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of Multi-armed Bandits

08/11/2022
by   Bo Li, et al.

The multi-armed bandit (MAB) model is one of the most classical models for studying decision-making in an uncertain environment. In this model, a player chooses one of K possible arms of a bandit machine to play at each time step; the chosen arm returns a random reward to the player, drawn from an unknown distribution specific to that arm. The player's goal is to collect as much reward as possible during the process. Despite its simplicity, the MAB model offers an excellent playground for studying the trade-off between exploration and exploitation and for designing effective algorithms for sequential decision-making under uncertainty. Although many asymptotically optimal algorithms have been established, the finite-time behaviour of the stochastic dynamics of the MAB model appears much more difficult to analyze, owing to the intertwining of the decision-making process with the rewards being collected. In this paper, we employ techniques from statistical physics to analyze the MAB model, which allows us to characterize the distribution of cumulative regret at a finite, short time, the central quantity of interest in an MAB algorithm, as well as the intricate dynamical behaviour of the model.
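To make the setup concrete, the following sketch simulates a K-armed Bernoulli bandit played by a simple epsilon-greedy strategy and records the cumulative (pseudo-)regret over many independent runs. This is only an illustration of the model and of the finite-time regret distribution the abstract refers to, not the paper's path-integral analysis; the arm means, horizon, and exploration rate below are arbitrary choices for the example.

```python
import random

def simulate_bandit(means, T, eps=0.1, rng=None):
    """Play a K-armed Bernoulli bandit with an epsilon-greedy strategy.

    Returns the cumulative pseudo-regret after T steps, i.e. the gap
    between always playing the best arm and the arms actually chosen.
    """
    rng = rng or random.Random(0)
    K = len(means)
    counts = [0] * K          # how many times each arm was played
    values = [0.0] * K        # empirical mean reward of each arm
    best = max(means)
    regret = 0.0
    for t in range(T):
        if t < K:
            arm = t                        # play each arm once initially
        elif rng.random() < eps:
            arm = rng.randrange(K)         # explore a random arm
        else:
            arm = max(range(K), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - means[arm]
    return regret

# Empirical distribution of finite-time regret over independent runs:
regrets = [simulate_bandit([0.3, 0.5, 0.7], T=200, eps=0.1,
                           rng=random.Random(seed))
           for seed in range(500)]
mean_regret = sum(regrets) / len(regrets)
```

Because the decision at each step depends on the rewards collected so far, the per-run regret is itself random; histogramming `regrets` gives the finite-time regret distribution that the paper characterizes analytically.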


