A Short Note on Soft-max and Policy Gradients in Bandit Problems

07/20/2020
by Neil Walton, et al.

This is a short communication on a Lyapunov function argument for the soft-max policy in bandit problems. A number of excellent recent papers use differential equations to analyze policy gradient algorithms in reinforcement learning <cit.>. We give a short argument that yields a regret bound for the soft-max ordinary differential equation in bandit problems, and we derive a similar result for a different policy gradient algorithm, again for bandit problems. For this second algorithm, regret bounds can be proved in the stochastic case <cit.>. At the end, we summarize some ideas and open issues around deriving stochastic regret bounds for policy gradients.
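
As a rough illustration of the kind of dynamics involved (a sketch under stated assumptions, not code from the note itself), the snippet below assumes the soft-max policy gradient reduces to replicator dynamics on the arm probabilities, dp_a/dt = p_a (r_a - sum_b p_b r_b), and integrates that ODE with an Euler scheme for a two-armed bandit while accumulating the resulting regret. The mean rewards r, step size dt, and horizon are illustrative choices.

import numpy as np

# Sketch only: Euler discretization of the soft-max policy-gradient ODE
# for a two-armed bandit with assumed known mean rewards r. Under the
# soft-max parametrization, gradient flow on the arm probabilities p
# takes the replicator form  dp_a/dt = p_a * (r_a - sum_b p_b * r_b).
r = np.array([1.0, 0.5])   # illustrative mean rewards; arm 0 is optimal
p = np.array([0.5, 0.5])   # start from the uniform policy
dt, steps = 0.01, 5000     # Euler step size and horizon (T = dt * steps)

regret = 0.0
for _ in range(steps):
    avg = p @ r                     # expected reward under current policy
    regret += (r.max() - avg) * dt  # integrate instantaneous regret
    p = p + dt * p * (r - avg)      # one Euler step of the ODE
    p = np.clip(p, 1e-12, None)
    p /= p.sum()                    # renormalize against numerical drift

print(f"final policy {p}, cumulative regret ~ {regret:.3f}")

Running this, the policy concentrates on the optimal arm and the integrated regret levels off, which is the qualitative behavior a regret bound for the soft-max ODE makes quantitative.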
