A Short Note on Soft-max and Policy Gradients in Bandit Problems
This is a short communication on a Lyapunov function argument for soft-max policies in bandit problems. A number of excellent recent papers apply differential-equation methods to policy gradient algorithms in reinforcement learning <cit.>. We give a short argument that yields a regret bound for the soft-max ordinary differential equation in bandit problems, and we derive a similar result for a different policy gradient algorithm, again in the bandit setting. For this second algorithm, regret bounds can be proved in the stochastic case <cit.>. We close by summarizing some ideas and open issues around deriving stochastic regret bounds for policy gradients.
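To make the object of study concrete, the following Python sketch Euler-discretizes one standard form of the deterministic soft-max policy-gradient flow for a K-armed bandit, dθ_a/dt = π_a(θ)(r_a − ⟨π(θ), r⟩), and tracks the cumulative pseudo-regret. This is a minimal illustration under assumptions of ours: the arm means `r`, horizon `T`, and step size `dt` are illustrative choices, and the note's precise dynamics and regret bound should be taken from the full text.

```python
import numpy as np

def softmax(theta):
    """Numerically stable soft-max over the arm parameters."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def simulate_softmax_ode(r, T=1000.0, dt=0.01):
    """Euler-discretize the (assumed) soft-max policy-gradient flow
        d theta_a / dt = pi_a(theta) * (r_a - <pi(theta), r>)
    and return the cumulative pseudo-regret over [0, T]."""
    theta = np.zeros(len(r))      # uniform initial policy
    r_star = r.max()              # optimal mean reward
    regret = 0.0
    for _ in range(int(T / dt)):
        pi = softmax(theta)
        avg = pi @ r                      # expected reward under pi
        theta += dt * pi * (r - avg)      # gradient-flow step
        regret += dt * (r_star - avg)     # instantaneous regret
    return regret

if __name__ == "__main__":
    r = np.array([0.9, 0.5, 0.3])   # hypothetical mean rewards
    for T in (10.0, 100.0, 1000.0):
        print(f"T={T:7.1f}  cumulative regret ~ {simulate_softmax_ode(r, T):.3f}")
```

Running the sketch for increasing horizons gives a rough numerical sense of how slowly the cumulative regret of the flow grows, which is the quantity the Lyapunov argument bounds.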