Model and Algorithm for Time-Consistent Risk-Aware Markov Games
In this paper, we propose a model for non-cooperative Markov games with time-consistent risk-aware players. In particular, our model characterizes the risk arising from both the stochastic state transitions and the randomized strategies of the other players. We give an appropriate equilibrium concept for our risk-aware Markov game model and demonstrate the existence of such equilibria in stationary strategies. We then propose and analyze a simulation-based Q-learning-type algorithm for equilibrium computation, and work through the details for some specific risk measures. Our numerical experiments on a two-player queuing game demonstrate the value and applicability of our model and the corresponding Q-learning algorithm.
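The abstract does not spell out the update rule, so the following is only a rough, hedged sketch of how a simulation-based, Q-learning-style update might be made risk-aware: the expectation over simulated next states is replaced by a risk measure, here an empirical CVaR as one possible choice among the "specific risk measures" mentioned. All names, parameters, and the single-player-perspective framing are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def empirical_cvar(costs, alpha=0.1):
    """Empirical CVaR_alpha of sampled costs: mean of the worst alpha-fraction.
    (Illustrative risk measure; the paper considers several choices.)"""
    k = max(1, int(np.ceil(alpha * len(costs))))
    worst = np.sort(costs)[-k:]  # largest sampled costs
    return worst.mean()

def risk_aware_q_update(Q, s, a, cost, next_state_samples,
                        lr=0.1, gamma=0.95, alpha=0.1):
    """One tabular Q-update from one player's perspective, where the usual
    expectation over simulated next states (which reflect both the stochastic
    transitions and the other player's randomized strategy) is replaced by a
    risk measure applied to the sampled continuation values."""
    # Greedy (cost-minimizing) continuation value at each sampled next state.
    continuation = np.array([Q[s_next].min() for s_next in next_state_samples])
    # Risk-adjusted target instead of the sample-average target.
    target = cost + gamma * empirical_cvar(continuation, alpha)
    Q[s][a] += lr * (target - Q[s][a])
    return Q

# Hypothetical usage: a small tabular game with 4 states and 2 actions.
Q = np.zeros((4, 2))
Q = risk_aware_q_update(Q, s=0, a=1, cost=2.0, next_state_samples=[1, 2, 2, 3])
```

In a game setting, each player would maintain such a table and the sampled next states would be generated by simulating the joint play; the design choice illustrated here is only that risk enters through the aggregation of sampled continuation values rather than through a plain average.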