Multi-Armed Bandit Problem and Batch UCB Rule

02/01/2019
by   Alexander Kolnogorov, et al.

We obtain an upper bound on the loss function for a strategy in the multi-armed bandit problem with Gaussian distributions of incomes. The considered strategy is an asymptotic generalization of the strategy proposed by J. Bather for the multi-armed bandit problem; it uses the UCB rule, i.e., it chooses the action corresponding to the maximum of the upper bound of the confidence interval of the current estimate of the expected one-step income. The results are obtained with the help of an invariant description of the control on the unit horizon in the domain of close distributions, because it is there that the loss function attains its maximal values. The UCB rule is widely used in machine learning. It can also be applied to batch data processing optimization when two alternative processing methods with a priori unknown efficiencies are available.
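To make the UCB rule concrete, here is a minimal sketch of UCB action selection for a two-armed bandit with Gaussian incomes. The function names, the exploration constant, and the confidence-bound form (a standard UCB1-style bonus) are illustrative assumptions; they are not the authors' exact construction or the batch variant analyzed in the paper.

```python
import math
import random

def ucb_select(counts, means, t, c=2.0):
    """Pick the arm with the largest upper confidence bound.
    An untried arm is selected immediately (its bound is infinite)."""
    best_arm, best_ucb = 0, float("-inf")
    for arm, (n, mean) in enumerate(zip(counts, means)):
        if n == 0:
            return arm
        # Upper bound of the confidence interval for the mean income.
        ucb = mean + math.sqrt(c * math.log(t) / n)
        if ucb > best_ucb:
            best_arm, best_ucb = arm, ucb
    return best_arm

def run_bandit(true_means, horizon, seed=0):
    """Simulate UCB on arms with unit-variance Gaussian incomes.
    Returns how many times each arm was chosen."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k
    means = [0.0] * k
    for t in range(1, horizon + 1):
        a = ucb_select(counts, means, t)
        reward = rng.gauss(true_means[a], 1.0)
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]  # running mean update
    return counts
```

Run over a long horizon, the rule concentrates its choices on the better arm while still sampling the other often enough to keep its confidence interval honest; e.g. `run_bandit([0.0, 0.5], 2000)` pulls the second arm far more often than the first.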
