Partial-Information Q-Learning for General Two-Player Stochastic Games

02/21/2023
by Negash Medhin, et al.

In this article we analyze a partial-information Nash Q-learning algorithm for general two-player stochastic games. Partial information refers to the setting in which a player observes neither the strategy nor the actions of the opposing player. We prove convergence of this partially informed algorithm for general two-player games with finitely many states and actions, and we confirm that the limiting strategy is in fact a full-information Nash equilibrium. In implementation, partial information offers simplicity because it avoids computing a Nash equilibrium at every time step. In contrast, full-information Q-learning uses the Lemke-Howson algorithm to compute a Nash equilibrium at every time step, which can be effective but requires several assumptions to prove convergence and may fail at runtime if Lemke-Howson encounters degeneracy. In simulations, the results we obtain with partial information are comparable to those of full-information Q-learning and fictitious play.
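To make the distinction concrete, below is a minimal tabular sketch of the kind of update a partially informed player performs. It assumes finite state and action spaces and standard learning-rate, discount, and exploration parameters (alpha, gamma, epsilon); these names and the epsilon-greedy policy are illustrative choices for this sketch, not the authors' exact algorithm. The structural point is that each player's Q-table is indexed only by its own action, and the update uses only its own observed reward, so no Nash equilibrium (and no Lemke-Howson call) is needed at any step.

```python
import numpy as np

# Illustrative sketch of a partial-information Q-update for one player in a
# finite two-player stochastic game. The player keeps a table Q[s, a] over
# its OWN actions only, since the opponent's actions are unobserved.
# n_states, n_actions, alpha, gamma, epsilon are assumed parameters.

rng = np.random.default_rng(0)

n_states, n_actions = 5, 3           # finite state/action spaces (assumed sizes)
alpha, gamma, epsilon = 0.1, 0.95, 0.1

Q = np.zeros((n_states, n_actions))  # own Q-table: no opponent-action index

def choose_action(state):
    """Epsilon-greedy action from the player's own Q-table."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Tabular update driven only by the player's own observation
    (state, own action, own reward, next state). No Nash equilibrium
    is computed and no opponent action appears in the update."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```

Each player runs such an update symmetrically against the environment induced by the other; the paper's result is that these decoupled updates still converge, with the limiting strategy being a full-information Nash equilibrium.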
