Thompson Sampling Algorithms for Mean-Variance Bandits

02/01/2020
by   Qiuyu Zhu, et al.

The multi-armed bandit (MAB) problem is a classical learning task that exemplifies the exploration-exploitation tradeoff. However, standard formulations do not take risk into account, even though risk is a primary concern in online decision-making systems. In this regard, the mean-variance risk measure is one of the most common objective functions. Existing algorithms for mean-variance optimization in the context of MAB problems rely on unrealistic assumptions about the reward distributions. We develop Thompson Sampling-style algorithms for mean-variance MAB and provide comprehensive regret analyses for Gaussian and Bernoulli bandits under fewer assumptions. Our algorithms achieve the best known regret bounds for mean-variance MABs and also attain the information-theoretic bounds in some parameter regimes. Empirical simulations show that our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
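To make the idea concrete, here is a minimal sketch (not the paper's exact algorithm) of Thompson Sampling for mean-variance bandits with Gaussian rewards. Each arm keeps a Normal-Inverse-Gamma posterior over its unknown mean and variance; each round we sample a plausible (mean, variance) pair per arm and pull the arm minimizing the sampled mean-variance index sigma^2 - rho * mu, where the risk tolerance rho >= 0 and the specific prior hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mv_thompson_gaussian(true_means, true_sds, rho=1.0, horizon=2000):
    """Sketch of mean-variance Thompson Sampling for Gaussian bandits.

    Illustrative only: uses a conjugate Normal-Inverse-Gamma posterior
    per arm and minimizes the sampled index sigma^2 - rho * mu.
    """
    k = len(true_means)
    # Normal-Inverse-Gamma hyperparameters per arm (assumed flat-ish prior)
    mu0 = np.zeros(k)      # prior mean
    kappa = np.ones(k)     # pseudo-count for the mean
    alpha = np.ones(k)     # inverse-gamma shape for the variance
    beta = np.ones(k)      # inverse-gamma scale for the variance
    pulls = np.zeros(k, dtype=int)

    for _ in range(horizon):
        # Posterior sampling: variance ~ InvGamma(alpha, beta),
        # then mean | variance ~ Normal(mu0, variance / kappa).
        var_s = beta / rng.gamma(alpha, 1.0)
        mu_s = rng.normal(mu0, np.sqrt(var_s / kappa))
        # Pull the arm with the smallest sampled mean-variance index.
        arm = int(np.argmin(var_s - rho * mu_s))
        r = rng.normal(true_means[arm], true_sds[arm])
        # Conjugate Normal-Inverse-Gamma update with the observed reward.
        kn = kappa[arm] + 1.0
        beta[arm] += 0.5 * kappa[arm] * (r - mu0[arm]) ** 2 / kn
        mu0[arm] = (kappa[arm] * mu0[arm] + r) / kn
        kappa[arm] = kn
        alpha[arm] += 0.5
        pulls[arm] += 1
    return pulls

# Arm 1 has a slightly lower mean but far lower variance; for a moderate
# risk tolerance a mean-variance learner should strongly favour it.
pulls = mv_thompson_gaussian([1.0, 0.9], [2.0, 0.1], rho=0.5)
```

Note that as rho grows the index is dominated by -rho * mu and the learner approaches the standard risk-neutral bandit, while rho = 0 corresponds to pure variance minimization.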


