Minimax Policy for Heavy-tailed Multi-armed Bandits

07/20/2020
by Lai Wei, et al.

We study the stochastic Multi-Armed Bandit (MAB) problem under worst-case regret with heavy-tailed reward distributions. We modify the minimax policy MOSS <cit.>, originally designed for sub-Gaussian reward distributions, by replacing the empirical mean with a saturated empirical mean, yielding a new algorithm called Robust MOSS. We show that if the moment of order 1+ϵ of the reward distributions exists, then the refined strategy achieves a worst-case regret matching the lower bound, while maintaining a distribution-dependent logarithmic regret.
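
To give a rough sense of the saturation idea, the sketch below clips each observation at a fixed threshold before averaging, which tames heavy-tailed outliers so the estimate concentrates even without sub-Gaussian tails. This is only a minimal illustration: the threshold schedule Robust MOSS actually uses (derived from the (1+ϵ)-moment bound and the number of pulls) is specified in the paper, and the function and parameter names here are hypothetical.

```python
import numpy as np

def saturated_empirical_mean(rewards, threshold):
    # Saturate (clip) each observation to [-threshold, threshold],
    # then average the clipped values.
    rewards = np.asarray(rewards, dtype=float)
    return np.clip(rewards, -threshold, threshold).mean()

# Hypothetical demo: Pareto(1.5) rewards have a finite moment of
# order 1+ϵ only for ϵ < 0.5, so the raw sample mean is unstable
# while the saturated mean varies far less across runs.
rng = np.random.default_rng(0)
samples = rng.pareto(1.5, size=2000)
print(samples.mean(), saturated_empirical_mean(samples, threshold=10.0))
```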
