Best-Response Dynamics and Fictitious Play in Identical Interest Stochastic Games

11/08/2021
by Lucas Baudin, et al.

This paper combines ideas from Q-learning and fictitious play to define three reinforcement learning procedures that converge to the set of stationary mixed Nash equilibria in identical interest discounted stochastic games. First, we analyse three continuous-time systems that generalize the best-response dynamics defined by Leslie et al. for zero-sum discounted stochastic games. Under some assumptions depending on the system, the dynamics are shown to converge to the set of stationary equilibria in identical interest discounted stochastic games. Then, we introduce three analogous discrete-time procedures in the spirit of Sayin et al. and demonstrate their convergence to the set of stationary equilibria using our results in continuous time together with stochastic approximation techniques. Some numerical experiments complement our theoretical findings.
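To convey the flavour of combining fictitious-play beliefs with a discounted value estimate in an identical interest stochastic game, here is a minimal illustrative sketch. It is not the paper's procedure: the payoffs, transition kernel, step sizes, and variable names are all assumptions made up for the example, and the update is a simple joint-action fictitious play with a shared value estimate.

```python
# Illustrative sketch only (not the authors' exact procedure): joint-action
# fictitious play with a discounted value estimate in a randomly generated
# identical-interest stochastic game. All quantities below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma = 2, 2, 0.9

# Identical-interest stage payoffs r[s, a1, a2] (shared by both players) and
# transition kernel P[s, a1, a2, s'], both chosen arbitrarily for illustration.
r = rng.uniform(0, 1, size=(n_states, n_actions, n_actions))
P = rng.uniform(0, 1, size=(n_states, n_actions, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)

# counts[i][s, a] = how often player i has played action a in state s;
# the opponent normalizes these counts to form its fictitious-play belief.
counts = [np.ones((n_states, n_actions)), np.ones((n_states, n_actions))]
V = np.zeros(n_states)  # shared estimate of the discounted continuation value

s = 0
for t in range(1, 20001):
    beliefs = [c[s] / c[s].sum() for c in counts]
    # Stage payoff plus discounted continuation value, as a matrix over joint actions.
    Q = r[s] + gamma * P[s] @ V
    # Each player best-responds to its belief about the other player's mixed action.
    a1 = int(np.argmax(Q @ beliefs[1]))   # player 1 averages over player 2's actions
    a2 = int(np.argmax(beliefs[0] @ Q))   # player 2 averages over player 1's actions
    # Update the empirical counts and the value estimate with a vanishing step size.
    counts[0][s, a1] += 1
    counts[1][s, a2] += 1
    V[s] += (Q[a1, a2] - V[s]) / t
    # Sample the next state from the transition kernel.
    s = int(rng.choice(n_states, p=P[s, a1, a2]))

print("Estimated state values:", V)
```

In this toy setup, the beliefs and the value estimate are updated on different effective time scales through the vanishing step size, which is the kind of two-time-scale structure that stochastic approximation arguments typically exploit.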
